Creation of OSD on some HDDs fails
hesse
2 Posts
July 20, 2022, 10:25 am
Hello,
I have a newly installed PetaSAN cluster which I am experimenting with at the moment. Everything went smoothly so far, at least until I tried to create OSDs on some Samsung HD204UI HDDs which were used in a proprietary NAS system before. Each of these HDDs fails when I try to create an OSD:
[2022-07-19 17:08:03,413][ceph_volume.main][INFO ] Running command: ceph-volume --log-path /opt/petasan/log lvm prepare --bluestore --data /dev/sdi1
[2022-07-19 17:08:03,417][ceph_volume.process][INFO ] Running command: /bin/lsblk -plno KNAME,NAME,TYPE
[2022-07-19 17:08:03,424][ceph_volume.process][INFO ] stdout /dev/sda /dev/sda disk
[2022-07-19 17:08:03,424][ceph_volume.process][INFO ] stdout /dev/sda1 /dev/sda1 part
[2022-07-19 17:08:03,424][ceph_volume.process][INFO ] stdout /dev/sda2 /dev/sda2 part
[2022-07-19 17:08:03,424][ceph_volume.process][INFO ] stdout /dev/sda3 /dev/sda3 part
[2022-07-19 17:08:03,424][ceph_volume.process][INFO ] stdout /dev/sda4 /dev/sda4 part
[2022-07-19 17:08:03,424][ceph_volume.process][INFO ] stdout /dev/sda5 /dev/sda5 part
[2022-07-19 17:08:03,424][ceph_volume.process][INFO ] stdout /dev/sdb /dev/sdb disk
[2022-07-19 17:08:03,424][ceph_volume.process][INFO ] stdout /dev/sdb1 /dev/sdb1 part
[2022-07-19 17:08:03,424][ceph_volume.process][INFO ] stdout /dev/sdc /dev/sdc disk
[2022-07-19 17:08:03,424][ceph_volume.process][INFO ] stdout /dev/sdc1 /dev/sdc1 part
[2022-07-19 17:08:03,425][ceph_volume.process][INFO ] stdout /dev/sdd /dev/sdd disk
[2022-07-19 17:08:03,425][ceph_volume.process][INFO ] stdout /dev/sdd1 /dev/sdd1 part
[2022-07-19 17:08:03,425][ceph_volume.process][INFO ] stdout /dev/sde /dev/sde disk
[2022-07-19 17:08:03,425][ceph_volume.process][INFO ] stdout /dev/sde1 /dev/sde1 part
[2022-07-19 17:08:03,425][ceph_volume.process][INFO ] stdout /dev/sdf /dev/sdf disk
[2022-07-19 17:08:03,425][ceph_volume.process][INFO ] stdout /dev/sdf1 /dev/sdf1 part
[2022-07-19 17:08:03,425][ceph_volume.process][INFO ] stdout /dev/sdg /dev/sdg disk
[2022-07-19 17:08:03,425][ceph_volume.process][INFO ] stdout /dev/sdg1 /dev/sdg1 part
[2022-07-19 17:08:03,425][ceph_volume.process][INFO ] stdout /dev/sdh /dev/sdh disk
[2022-07-19 17:08:03,425][ceph_volume.process][INFO ] stdout /dev/sdh1 /dev/sdh1 part
[2022-07-19 17:08:03,425][ceph_volume.process][INFO ] stdout /dev/sdi /dev/sdi disk
[2022-07-19 17:08:03,425][ceph_volume.process][INFO ] stdout /dev/sdi1 /dev/sdi1 part
[2022-07-19 17:08:03,425][ceph_volume.process][INFO ] stdout /dev/sr0 /dev/sr0 rom
[2022-07-19 17:08:03,426][ceph_volume.process][INFO ] stdout /dev/dm-0 /dev/mapper/ceph--f46cec17--3400--4015--aa12--51c2fcadb14a-osd--block--036d5091--fae3--4a7a--a434--b43de8fdfbfa lvm
[2022-07-19 17:08:03,426][ceph_volume.process][INFO ] stdout /dev/dm-1 /dev/mapper/ceph--b002686e--c356--4a24--967f--ff14e57bfc91-osd--block--8ae3ef48--3e13--41e3--adf0--66dce4259906 lvm
[2022-07-19 17:08:03,426][ceph_volume.process][INFO ] stdout /dev/dm-2 /dev/mapper/ceph--38934150--3c1f--4b77--81f5--455136b7d241-osd--block--69928c07--50d4--4c44--96b0--a86c4d10c93d lvm
[2022-07-19 17:08:03,426][ceph_volume.process][INFO ] stdout /dev/dm-3 /dev/mapper/ceph--1fdb48a4--f98b--4c5b--9f5a--d364d07a2c2f-osd--block--2c7350d5--258e--4c40--afde--938c80f1a340 lvm
[2022-07-19 17:08:03,426][ceph_volume.process][INFO ] stdout /dev/dm-4 /dev/mapper/ceph--9a1362e5--63c7--47bd--be4d--e809b8540d90-osd--block--43b25e1b--af01--4355--be3f--e49934b5bb0b lvm
[2022-07-19 17:08:03,426][ceph_volume.process][INFO ] stdout /dev/dm-5 /dev/mapper/ceph--32f23898--0eed--48be--aa2f--933f7edf4ad6-osd--block--1eac47a2--5fae--4bcc--ab4c--c7da9e7cccf2 lvm
[2022-07-19 17:08:03,426][ceph_volume.process][INFO ] stdout /dev/dm-6 /dev/mapper/ceph--a868c1ad--b4f5--46c0--8092--8f22371cacbe-osd--block--9eabe228--68a7--45a8--a9b0--5fddd2d3f182 lvm
[2022-07-19 17:08:03,426][ceph_volume.process][INFO ] stdout /dev/nvme0n1 /dev/nvme0n1 disk
[2022-07-19 17:08:03,426][ceph_volume.process][INFO ] stdout /dev/nvme0n1p1 /dev/nvme0n1p1 part
[2022-07-19 17:08:03,426][ceph_volume.process][INFO ] stdout /dev/nvme0n1p2 /dev/nvme0n1p2 part
[2022-07-19 17:08:03,426][ceph_volume.process][INFO ] stdout /dev/nvme0n1p3 /dev/nvme0n1p3 part
[2022-07-19 17:08:03,449][ceph_volume.process][INFO ] Running command: /sbin/lvs --noheadings --readonly --separator=";" -a --units=b --nosuffix -S lv_path=/dev/sdi1 -o lv_tags,lv_path,lv_name,vg_name,lv_uuid,lv_size
[2022-07-19 17:08:03,605][ceph_volume.process][INFO ] Running command: /bin/lsblk --nodeps -P -o NAME,KNAME,MAJ:MIN,FSTYPE,MOUNTPOINT,LABEL,UUID,RO,RM,MODEL,SIZE,STATE,OWNER,GROUP,MODE,ALIGNMENT,PHY-SEC,LOG-SEC,ROTA,SCHED,TYPE,DISC-ALN,DISC-GRAN,DISC-MAX,DISC-ZERO,PKNAME,PARTLABEL /dev/sdi1
[2022-07-19 17:08:03,610][ceph_volume.process][INFO ] stdout NAME="sdi1" KNAME="sdi1" MAJ:MIN="8:129" FSTYPE="" MOUNTPOINT="" LABEL="" UUID="" RO="0" RM="0" MODEL="" SIZE="1.8T" STATE="" OWNER="root" GROUP="disk" MODE="brw-rw----" ALIGNMENT="0" PHY-SEC="512" LOG-SEC="512" ROTA="1" SCHED="cfq" TYPE="part" DISC-ALN="0" DISC-GRAN="0B" DISC-MAX="0B" DISC-ZERO="0" PKNAME="sdi" PARTLABEL="ceph-data"
[2022-07-19 17:08:03,611][ceph_volume.process][INFO ] Running command: /sbin/blkid -p /dev/sdi1
[2022-07-19 17:08:03,620][ceph_volume.process][INFO ] stdout /dev/sdi1: PART_ENTRY_SCHEME="gpt" PART_ENTRY_NAME="ceph-data" PART_ENTRY_UUID="3d06051e-1fac-4fbb-a88a-fd7c3be7c33f" PART_ENTRY_TYPE="0fc63daf-8483-4772-8e79-3d69d8477de4" PART_ENTRY_NUMBER="1" PART_ENTRY_OFFSET="2048" PART_ENTRY_SIZE="3907027087" PART_ENTRY_DISK="8:128"
[2022-07-19 17:08:03,621][ceph_volume.process][INFO ] Running command: /sbin/pvs --noheadings --readonly --units=b --nosuffix --separator=";" -o vg_name,pv_count,lv_count,vg_attr,vg_extent_count,vg_free_count,vg_extent_size /dev/sdi1
[2022-07-19 17:08:03,705][ceph_volume.process][INFO ] stderr Failed to find physical volume "/dev/sdi1".
[2022-07-19 17:08:03,705][ceph_volume.process][INFO ] Running command: /usr/bin/ceph-bluestore-tool show-label --dev /dev/sdi1
[2022-07-19 17:08:03,736][ceph_volume.process][INFO ] stderr unable to read label for /dev/sdi1: (2) No such file or directory
[2022-07-19 17:08:03,737][ceph_volume.process][INFO ] Running command: /usr/bin/ceph-bluestore-tool show-label --dev /dev/sdi1
[2022-07-19 17:08:03,768][ceph_volume.process][INFO ] stderr unable to read label for /dev/sdi1: (2) No such file or directory
[2022-07-19 17:08:03,769][ceph_volume.process][INFO ] Running command: /bin/udevadm info --query=property /dev/sdi1
[2022-07-19 17:08:03,773][ceph_volume.process][INFO ] stdout DEVPATH=/devices/pci0000:00/0000:00:01.0/0000:01:00.0/0000:02:08.0/0000:03:00.0/host1/port-1:3/end_device-1:3/target1:0:3/1:0:3:0/block/sdi/sdi1
[2022-07-19 17:08:03,773][ceph_volume.process][INFO ] stdout DEVNAME=/dev/sdi1
[2022-07-19 17:08:03,773][ceph_volume.process][INFO ] stdout DEVTYPE=partition
[2022-07-19 17:08:03,773][ceph_volume.process][INFO ] stdout PARTN=1
[2022-07-19 17:08:03,773][ceph_volume.process][INFO ] stdout PARTNAME=ceph-data
[2022-07-19 17:08:03,773][ceph_volume.process][INFO ] stdout MAJOR=8
[2022-07-19 17:08:03,774][ceph_volume.process][INFO ] stdout MINOR=129
[2022-07-19 17:08:03,774][ceph_volume.process][INFO ] stdout SUBSYSTEM=block
[2022-07-19 17:08:03,774][ceph_volume.process][INFO ] stdout USEC_INITIALIZED=1838389808
[2022-07-19 17:08:03,774][ceph_volume.process][INFO ] stdout SCSI_TPGS=0
[2022-07-19 17:08:03,774][ceph_volume.process][INFO ] stdout SCSI_TYPE=disk
[2022-07-19 17:08:03,774][ceph_volume.process][INFO ] stdout SCSI_VENDOR=ATA
[2022-07-19 17:08:03,774][ceph_volume.process][INFO ] stdout SCSI_VENDOR_ENC=ATA\x20\x20\x20\x20\x20
[2022-07-19 17:08:03,774][ceph_volume.process][INFO ] stdout SCSI_MODEL=SAMSUNG_HD204UI
[2022-07-19 17:08:03,774][ceph_volume.process][INFO ] stdout SCSI_MODEL_ENC=SAMSUNG\x20HD204UI\x20
[2022-07-19 17:08:03,774][ceph_volume.process][INFO ] stdout SCSI_REVISION=0001
[2022-07-19 17:08:03,774][ceph_volume.process][INFO ] stdout SCSI_IDENT_SERIAL=S2H7J1AZA06302
[2022-07-19 17:08:03,774][ceph_volume.process][INFO ] stdout SCSI_IDENT_LUN_VENDOR=S2H7J1AZA06302
[2022-07-19 17:08:03,774][ceph_volume.process][INFO ] stdout SCSI_IDENT_LUN_T10=ATA_SAMSUNG_HD204UI_S2H7J1AZA06302
[2022-07-19 17:08:03,775][ceph_volume.process][INFO ] stdout SCSI_IDENT_LUN_ATA=SAMSUNG_HD204UI_S2H7J1AZA06302
[2022-07-19 17:08:03,775][ceph_volume.process][INFO ] stdout SCSI_IDENT_LUN_NAA_REG=50024e900433173c
[2022-07-19 17:08:03,775][ceph_volume.process][INFO ] stdout ID_SCSI=1
[2022-07-19 17:08:03,775][ceph_volume.process][INFO ] stdout ID_VENDOR=ATA
[2022-07-19 17:08:03,775][ceph_volume.process][INFO ] stdout ID_VENDOR_ENC=ATA\x20\x20\x20\x20\x20
[2022-07-19 17:08:03,775][ceph_volume.process][INFO ] stdout ID_MODEL=SAMSUNG_HD204UI
[2022-07-19 17:08:03,775][ceph_volume.process][INFO ] stdout ID_MODEL_ENC=SAMSUNG\x20HD204UI\x20
[2022-07-19 17:08:03,775][ceph_volume.process][INFO ] stdout ID_REVISION=0001
[2022-07-19 17:08:03,775][ceph_volume.process][INFO ] stdout ID_TYPE=disk
[2022-07-19 17:08:03,775][ceph_volume.process][INFO ] stdout ID_WWN=0x50024e900433173c
[2022-07-19 17:08:03,775][ceph_volume.process][INFO ] stdout ID_WWN_WITH_EXTENSION=0x50024e900433173c
[2022-07-19 17:08:03,775][ceph_volume.process][INFO ] stdout ID_BUS=ata
[2022-07-19 17:08:03,775][ceph_volume.process][INFO ] stdout ID_ATA=1
[2022-07-19 17:08:03,776][ceph_volume.process][INFO ] stdout ID_SERIAL=SAMSUNG_HD204UI_S2H7J1AZA06302
[2022-07-19 17:08:03,776][ceph_volume.process][INFO ] stdout ID_SERIAL_SHORT=S2H7J1AZA06302
[2022-07-19 17:08:03,776][ceph_volume.process][INFO ] stdout DM_MULTIPATH_DEVICE_PATH=0
[2022-07-19 17:08:03,776][ceph_volume.process][INFO ] stdout ID_SCSI_INQUIRY=1
[2022-07-19 17:08:03,776][ceph_volume.process][INFO ] stdout ID_PATH=pci-0000:03:00.0-sas-phy3-lun-0
[2022-07-19 17:08:03,776][ceph_volume.process][INFO ] stdout ID_PATH_TAG=pci-0000_03_00_0-sas-phy3-lun-0
[2022-07-19 17:08:03,776][ceph_volume.process][INFO ] stdout ID_PART_TABLE_UUID=0d0b001f-5df1-4535-822b-1f9df11e95f2
[2022-07-19 17:08:03,776][ceph_volume.process][INFO ] stdout ID_PART_TABLE_TYPE=gpt
[2022-07-19 17:08:03,776][ceph_volume.process][INFO ] stdout ID_PART_ENTRY_SCHEME=gpt
[2022-07-19 17:08:03,776][ceph_volume.process][INFO ] stdout ID_PART_ENTRY_NAME=ceph-data
[2022-07-19 17:08:03,776][ceph_volume.process][INFO ] stdout ID_PART_ENTRY_UUID=3d06051e-1fac-4fbb-a88a-fd7c3be7c33f
[2022-07-19 17:08:03,776][ceph_volume.process][INFO ] stdout ID_PART_ENTRY_TYPE=0fc63daf-8483-4772-8e79-3d69d8477de4
[2022-07-19 17:08:03,776][ceph_volume.process][INFO ] stdout ID_PART_ENTRY_NUMBER=1
[2022-07-19 17:08:03,777][ceph_volume.process][INFO ] stdout ID_PART_ENTRY_OFFSET=2048
[2022-07-19 17:08:03,777][ceph_volume.process][INFO ] stdout ID_PART_ENTRY_SIZE=3907027087
[2022-07-19 17:08:03,777][ceph_volume.process][INFO ] stdout ID_PART_ENTRY_DISK=8:128
[2022-07-19 17:08:03,777][ceph_volume.process][INFO ] stdout DEVLINKS=/dev/disk/by-id/scsi-350024e900433173c-part1 /dev/disk/by-id/scsi-1ATA_SAMSUNG_HD204UI_S2H7J1AZA06302-part1 /dev/disk/by-id/scsi-SATA_SAMSUNG_HD204UI_S2H7J1AZA06302-part1 /dev/disk/by-partlabel/ceph-data /dev/disk/by-path/pci-0000:03:00.0-sas-phy3-lun-0-part1 /dev/disk/by-id/ata-SAMSUNG_HD204UI_S2H7J1AZA06302-part1 /dev/disk/by-id/scsi-0ATA_SAMSUNG_HD204UI_S2H7J1AZA06302-part1 /dev/disk/by-id/wwn-0x50024e900433173c-part1 /dev/disk/by-partuuid/3d06051e-1fac-4fbb-a88a-fd7c3be7c33f
[2022-07-19 17:08:03,777][ceph_volume.process][INFO ] stdout TAGS=:systemd:
[2022-07-19 17:08:03,777][ceph_volume.api.lvm][WARNING] device is not part of ceph: None
[2022-07-19 17:08:03,778][ceph_volume.process][INFO ] Running command: /usr/bin/ceph-authtool --gen-print-key
[2022-07-19 17:08:03,800][ceph_volume.process][INFO ] stdout AQDTyNZij82eLxAAbQp87qn+jzaQCRBtIwH4dA==
[2022-07-19 17:08:03,801][ceph_volume.process][INFO ] Running command: /usr/bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring -i - osd new e94e8fed-4c6e-49c6-a1ab-07825b9af02a
[2022-07-19 17:08:04,441][ceph_volume.process][INFO ] stdout 18
[2022-07-19 17:08:04,441][ceph_volume.process][INFO ] stderr 2022-07-19T17:08:03.934+0200 7fbb8e661700 -1 auth: unable to find a keyring on /etc/ceph/ceph.client.bootstrap-osd.keyring,/etc/ceph/ceph.keyring,/etc/ceph/keyring,/etc/ceph/keyring.bin,: (2) No such file or directory
[2022-07-19 17:08:04,441][ceph_volume.process][INFO ] stderr 2022-07-19T17:08:03.934+0200 7fbb8e661700 -1 AuthRegistry(0x7fbb880595f0) no keyring found at /etc/ceph/ceph.client.bootstrap-osd.keyring,/etc/ceph/ceph.keyring,/etc/ceph/keyring,/etc/ceph/keyring.bin,, disabling cephx
[2022-07-19 17:08:04,442][ceph_volume.process][INFO ] Running command: /bin/lsblk --nodeps -P -o NAME,KNAME,MAJ:MIN,FSTYPE,MOUNTPOINT,LABEL,UUID,RO,RM,MODEL,SIZE,STATE,OWNER,GROUP,MODE,ALIGNMENT,PHY-SEC,LOG-SEC,ROTA,SCHED,TYPE,DISC-ALN,DISC-GRAN,DISC-MAX,DISC-ZERO,PKNAME,PARTLABEL /dev/sdi1
[2022-07-19 17:08:04,447][ceph_volume.process][INFO ] stdout NAME="sdi1" KNAME="sdi1" MAJ:MIN="8:129" FSTYPE="" MOUNTPOINT="" LABEL="" UUID="" RO="0" RM="0" MODEL="" SIZE="1.8T" STATE="" OWNER="root" GROUP="disk" MODE="brw-rw----" ALIGNMENT="0" PHY-SEC="512" LOG-SEC="512" ROTA="1" SCHED="cfq" TYPE="part" DISC-ALN="0" DISC-GRAN="0B" DISC-MAX="0B" DISC-ZERO="0" PKNAME="sdi" PARTLABEL="ceph-data"
[2022-07-19 17:08:04,447][ceph_volume.devices.lvm.prepare][DEBUG ] data device size: 0.00 B
[2022-07-19 17:08:04,447][ceph_volume.process][INFO ] Running command: /sbin/pvs --noheadings --readonly --units=b --nosuffix --separator=";" -o vg_name,pv_count,lv_count,vg_attr,vg_extent_count,vg_free_count,vg_extent_size /dev/sdi1
[2022-07-19 17:08:04,533][ceph_volume.process][INFO ] stderr Failed to find physical volume "/dev/sdi1".
[2022-07-19 17:08:04,533][ceph_volume.process][INFO ] Running command: /sbin/vgcreate --force --yes ceph-7e5badc1-e48f-487e-8d6a-6e5cc9c04646 /dev/sdi1
[2022-07-19 17:08:04,593][ceph_volume.process][INFO ] stderr Cannot use /dev/sdi1: device is an md component
Command requires all devices to be found.
[2022-07-19 17:08:04,653][ceph_volume.devices.lvm.prepare][ERROR ] lvm prepare was unable to complete
Traceback (most recent call last):
File "/usr/lib/python3/dist-packages/ceph_volume/devices/lvm/prepare.py", line 252, in safe_prepare
self.prepare()
File "/usr/lib/python3/dist-packages/ceph_volume/decorators.py", line 16, in is_root
return func(*a, **kw)
File "/usr/lib/python3/dist-packages/ceph_volume/devices/lvm/prepare.py", line 363, in prepare
block_lv = self.prepare_data_device('block', osd_fsid)
File "/usr/lib/python3/dist-packages/ceph_volume/devices/lvm/prepare.py", line 218, in prepare_data_device
return api.create_lv(
File "/usr/lib/python3/dist-packages/ceph_volume/api/lvm.py", line 943, in create_lv
vg = create_vg(device, name_prefix='ceph')
File "/usr/lib/python3/dist-packages/ceph_volume/api/lvm.py", line 646, in create_vg
process.run([
File "/usr/lib/python3/dist-packages/ceph_volume/process.py", line 153, in run
raise RuntimeError(msg)
RuntimeError: command returned non-zero exit status: 5
[2022-07-19 17:08:04,654][ceph_volume.devices.lvm.prepare][INFO ] will rollback OSD ID creation
[2022-07-19 17:08:04,655][ceph_volume.process][INFO ] Running command: /usr/bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring osd purge-new osd.18 --yes-i-really-mean-it
[2022-07-19 17:08:04,806][ceph_volume.process][INFO ] stderr 2022-07-19T17:08:04.790+0200 7f2a1414b700 -1 auth: unable to find a keyring on /etc/ceph/ceph.client.bootstrap-osd.keyring,/etc/ceph/ceph.keyring,/etc/ceph/keyring,/etc/ceph/keyring.bin,: (2) No such file or directory
2022-07-19T17:08:04.790+0200 7f2a1414b700 -1 AuthRegistry(0x7f2a0c0595f0) no keyring found at /etc/ceph/ceph.client.bootstrap-osd.keyring,/etc/ceph/ceph.keyring,/etc/ceph/keyring,/etc/ceph/keyring.bin,, disabling cephx
[2022-07-19 17:08:05,286][ceph_volume.process][INFO ] stderr purged osd.18
[2022-07-19 17:08:05,299][ceph_volume][ERROR ] exception caught by decorator
Traceback (most recent call last):
File "/usr/lib/python3/dist-packages/ceph_volume/decorators.py", line 59, in newfunc
return f(*a, **kw)
File "/usr/lib/python3/dist-packages/ceph_volume/main.py", line 152, in main
terminal.dispatch(self.mapper, subcommand_args)
File "/usr/lib/python3/dist-packages/ceph_volume/terminal.py", line 194, in dispatch
instance.main()
File "/usr/lib/python3/dist-packages/ceph_volume/devices/lvm/main.py", line 46, in main
terminal.dispatch(self.mapper, self.argv)
File "/usr/lib/python3/dist-packages/ceph_volume/terminal.py", line 194, in dispatch
instance.main()
File "/usr/lib/python3/dist-packages/ceph_volume/devices/lvm/prepare.py", line 441, in main
self.safe_prepare()
File "/usr/lib/python3/dist-packages/ceph_volume/devices/lvm/prepare.py", line 252, in safe_prepare
self.prepare()
File "/usr/lib/python3/dist-packages/ceph_volume/decorators.py", line 16, in is_root
return func(*a, **kw)
File "/usr/lib/python3/dist-packages/ceph_volume/devices/lvm/prepare.py", line 363, in prepare
block_lv = self.prepare_data_device('block', osd_fsid)
File "/usr/lib/python3/dist-packages/ceph_volume/devices/lvm/prepare.py", line 218, in prepare_data_device
return api.create_lv(
File "/usr/lib/python3/dist-packages/ceph_volume/api/lvm.py", line 943, in create_lv
vg = create_vg(device, name_prefix='ceph')
File "/usr/lib/python3/dist-packages/ceph_volume/api/lvm.py", line 646, in create_vg
process.run([
File "/usr/lib/python3/dist-packages/ceph_volume/process.py", line 153, in run
raise RuntimeError(msg)
RuntimeError: command returned non-zero exit status: 5
I already tried wiping them and creating a new partition table, but without success. Is there anything more I can try, or is there some hardware reason I cannot use these HDDs?
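For reference, the decisive line in the log above is "Cannot use /dev/sdi1: device is an md component", i.e. LVM is refusing the partition because it still sees mdadm (Linux software RAID) metadata, most likely left over from the old NAS. A quick, non-destructive way to check for such leftovers (a sketch; /dev/sdX stands for the disk in question):
# show any md arrays the kernel has picked up
cat /proc/mdstat
# print an md superblock if one is present on the whole disk or on the partition
mdadm --examine /dev/sdX
mdadm --examine /dev/sdX1
# list, without erasing, all on-disk signatures that wipefs can detect
wipefs -n /dev/sdX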
admin
2,930 Posts
July 20, 2022, 4:29 pm
Quote from hesse: "I already tried wiping them and creating a new partition table, but without success. Is there anything more I can try, or is there some hardware reason I cannot use these HDDs?"
Try wipefs and ceph-volume lvm zap; also, if there were existing LVM volumes, try to wipe them using LVM commands.
If the disks were used outside of Linux (especially in a FreeBSD system), the wipe commands sometimes do not work; try to dd zeros over the first and last 1G of the disk. Otherwise, try to wipe them in their original environment.
If the disks are bad, you could see kernel errors in dmesg.
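A sketch of that wipe sequence, with /dev/sdX as a placeholder for the disk to clean (these commands destroy data, so double-check the device name first):
# erase all filesystem/RAID/LVM signatures wipefs knows about
wipefs -a /dev/sdX
# let ceph-volume clear any previous Ceph/LVM state on the device
ceph-volume lvm zap /dev/sdX --destroy
# if signatures written outside Linux survive, zero the first and last 1G of the disk
dd if=/dev/zero of=/dev/sdX bs=1M count=1024
dd if=/dev/zero of=/dev/sdX bs=1M count=1024 seek=$(( $(blockdev --getsz /dev/sdX) / 2048 - 1024 ))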
hesse
2 Posts
July 25, 2022, 11:12 am
Thank you for the hint - overwriting the last part of the disk did the trick. If anyone else runs into problems like this, here is the bash snippet I used:
# what to wipe
TOWIPE=/dev/sdX
# wipe first 1G
dd if=/dev/zero of="$TOWIPE" bs=1M count=1024
# wipe last 1G
dd if=/dev/zero of="$TOWIPE" bs=1M seek=$((($(blockdev --getsz "$TOWIPE") * 512 / 1024 / 1024) - 1023)) count=1024
It will give an error like "no space left on device" because it actually tries to write past the end of the disk, but when I tried writing only right up to the end, it didn't work for me.
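For clarity, the seek arithmetic works in 1 MiB blocks: blockdev --getsz reports 512-byte sectors, so multiplying by 512 and dividing by 1024*1024 gives the disk size in whole MiB, and subtracting 1023 instead of 1024 deliberately makes the write run off the end of the disk. A rough worked example, assuming a 2 TB drive such as the HD204UI (the sector count here is illustrative, not taken from the post):
# blockdev --getsz /dev/sdX        -> 3907029168 sectors of 512 bytes
# 3907029168 * 512 / 1024 / 1024   -> 1907729 MiB (integer division drops the trailing partial MiB)
# seek = 1907729 - 1023 = 1906706, count = 1024
# dd therefore writes from MiB 1906706 up to and slightly past the real end of the disk,
# zeroing the trailing partial MiB as well before stopping with "No space left on device".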
Creation of OSD on some HDDs fails
hesse
2 Posts
Quote from hesse on July 20, 2022, 10:25 amHello,
i have a newly installed PetaSAN cluster which i am experimenting with at the moment. Everything went smooth so far, at least until i tried to create OSDs on some Samsung HD204UI HDDs which were used in a proprietary NAS system before. Each of these HDDs fails when i try to create a OSD:
[2022-07-19 17:08:03,413][ceph_volume.main][INFO ] Running command: ceph-volume --log-path /opt/petasan/log lvm prepare --bluestore --data /dev/sdi1
[2022-07-19 17:08:03,417][ceph_volume.process][INFO ] Running command: /bin/lsblk -plno KNAME,NAME,TYPE
[2022-07-19 17:08:03,424][ceph_volume.process][INFO ] stdout /dev/sda /dev/sda disk
[2022-07-19 17:08:03,424][ceph_volume.process][INFO ] stdout /dev/sda1 /dev/sda1 part
[2022-07-19 17:08:03,424][ceph_volume.process][INFO ] stdout /dev/sda2 /dev/sda2 part
[2022-07-19 17:08:03,424][ceph_volume.process][INFO ] stdout /dev/sda3 /dev/sda3 part
[2022-07-19 17:08:03,424][ceph_volume.process][INFO ] stdout /dev/sda4 /dev/sda4 part
[2022-07-19 17:08:03,424][ceph_volume.process][INFO ] stdout /dev/sda5 /dev/sda5 part
[2022-07-19 17:08:03,424][ceph_volume.process][INFO ] stdout /dev/sdb /dev/sdb disk
[2022-07-19 17:08:03,424][ceph_volume.process][INFO ] stdout /dev/sdb1 /dev/sdb1 part
[2022-07-19 17:08:03,424][ceph_volume.process][INFO ] stdout /dev/sdc /dev/sdc disk
[2022-07-19 17:08:03,424][ceph_volume.process][INFO ] stdout /dev/sdc1 /dev/sdc1 part
[2022-07-19 17:08:03,425][ceph_volume.process][INFO ] stdout /dev/sdd /dev/sdd disk
[2022-07-19 17:08:03,425][ceph_volume.process][INFO ] stdout /dev/sdd1 /dev/sdd1 part
[2022-07-19 17:08:03,425][ceph_volume.process][INFO ] stdout /dev/sde /dev/sde disk
[2022-07-19 17:08:03,425][ceph_volume.process][INFO ] stdout /dev/sde1 /dev/sde1 part
[2022-07-19 17:08:03,425][ceph_volume.process][INFO ] stdout /dev/sdf /dev/sdf disk
[2022-07-19 17:08:03,425][ceph_volume.process][INFO ] stdout /dev/sdf1 /dev/sdf1 part
[2022-07-19 17:08:03,425][ceph_volume.process][INFO ] stdout /dev/sdg /dev/sdg disk
[2022-07-19 17:08:03,425][ceph_volume.process][INFO ] stdout /dev/sdg1 /dev/sdg1 part
[2022-07-19 17:08:03,425][ceph_volume.process][INFO ] stdout /dev/sdh /dev/sdh disk
[2022-07-19 17:08:03,425][ceph_volume.process][INFO ] stdout /dev/sdh1 /dev/sdh1 part
[2022-07-19 17:08:03,425][ceph_volume.process][INFO ] stdout /dev/sdi /dev/sdi disk
[2022-07-19 17:08:03,425][ceph_volume.process][INFO ] stdout /dev/sdi1 /dev/sdi1 part
[2022-07-19 17:08:03,425][ceph_volume.process][INFO ] stdout /dev/sr0 /dev/sr0 rom
[2022-07-19 17:08:03,426][ceph_volume.process][INFO ] stdout /dev/dm-0 /dev/mapper/ceph--f46cec17--3400--4015--aa12--51c2fcadb14a-osd--block--036d5091--fae3--4a7a--a434--b43de8fdfbfa lvm
[2022-07-19 17:08:03,426][ceph_volume.process][INFO ] stdout /dev/dm-1 /dev/mapper/ceph--b002686e--c356--4a24--967f--ff14e57bfc91-osd--block--8ae3ef48--3e13--41e3--adf0--66dce4259906 lvm
[2022-07-19 17:08:03,426][ceph_volume.process][INFO ] stdout /dev/dm-2 /dev/mapper/ceph--38934150--3c1f--4b77--81f5--455136b7d241-osd--block--69928c07--50d4--4c44--96b0--a86c4d10c93d lvm
[2022-07-19 17:08:03,426][ceph_volume.process][INFO ] stdout /dev/dm-3 /dev/mapper/ceph--1fdb48a4--f98b--4c5b--9f5a--d364d07a2c2f-osd--block--2c7350d5--258e--4c40--afde--938c80f1a340 lvm
[2022-07-19 17:08:03,426][ceph_volume.process][INFO ] stdout /dev/dm-4 /dev/mapper/ceph--9a1362e5--63c7--47bd--be4d--e809b8540d90-osd--block--43b25e1b--af01--4355--be3f--e49934b5bb0b lvm
[2022-07-19 17:08:03,426][ceph_volume.process][INFO ] stdout /dev/dm-5 /dev/mapper/ceph--32f23898--0eed--48be--aa2f--933f7edf4ad6-osd--block--1eac47a2--5fae--4bcc--ab4c--c7da9e7cccf2 lvm
[2022-07-19 17:08:03,426][ceph_volume.process][INFO ] stdout /dev/dm-6 /dev/mapper/ceph--a868c1ad--b4f5--46c0--8092--8f22371cacbe-osd--block--9eabe228--68a7--45a8--a9b0--5fddd2d3f182 lvm
[2022-07-19 17:08:03,426][ceph_volume.process][INFO ] stdout /dev/nvme0n1 /dev/nvme0n1 disk
[2022-07-19 17:08:03,426][ceph_volume.process][INFO ] stdout /dev/nvme0n1p1 /dev/nvme0n1p1 part
[2022-07-19 17:08:03,426][ceph_volume.process][INFO ] stdout /dev/nvme0n1p2 /dev/nvme0n1p2 part
[2022-07-19 17:08:03,426][ceph_volume.process][INFO ] stdout /dev/nvme0n1p3 /dev/nvme0n1p3 part
[2022-07-19 17:08:03,449][ceph_volume.process][INFO ] Running command: /sbin/lvs --noheadings --readonly --separator=";" -a --units=b --nosuffix -S lv_path=/dev/sdi1 -o lv_tags,lv_path,lv_name,vg_name,lv_uuid,lv_size
[2022-07-19 17:08:03,605][ceph_volume.process][INFO ] Running command: /bin/lsblk --nodeps -P -o NAME,KNAME,MAJ:MIN,FSTYPE,MOUNTPOINT,LABEL,UUID,RO,RM,MODEL,SIZE,STATE,OWNER,GROUP,MODE,ALIGNMENT,PHY-SEC,LOG-SEC,ROTA,SCHED,TYPE,DISC-ALN,DISC-GRAN,DISC-MAX,DISC-ZERO,PKNAME,PARTLABEL /dev/sdi1
[2022-07-19 17:08:03,610][ceph_volume.process][INFO ] stdout NAME="sdi1" KNAME="sdi1" MAJ:MIN="8:129" FSTYPE="" MOUNTPOINT="" LABEL="" UUID="" RO="0" RM="0" MODEL="" SIZE="1.8T" STATE="" OWNER="root" GROUP="disk" MODE="brw-rw----" ALIGNMENT="0" PHY-SEC="512" LOG-SEC="512" ROTA="1" SCHED="cfq" TYPE="part" DISC-ALN="0" DISC-GRAN="0B" DISC-MAX="0B" DISC-ZERO="0" PKNAME="sdi" PARTLABEL="ceph-data"
[2022-07-19 17:08:03,611][ceph_volume.process][INFO ] Running command: /sbin/blkid -p /dev/sdi1
[2022-07-19 17:08:03,620][ceph_volume.process][INFO ] stdout /dev/sdi1: PART_ENTRY_SCHEME="gpt" PART_ENTRY_NAME="ceph-data" PART_ENTRY_UUID="3d06051e-1fac-4fbb-a88a-fd7c3be7c33f" PART_ENTRY_TYPE="0fc63daf-8483-4772-8e79-3d69d8477de4" PART_ENTRY_NUMBER="1" PART_ENTRY_OFFSET="2048" PART_ENTRY_SIZE="3907027087" PART_ENTRY_DISK="8:128"
[2022-07-19 17:08:03,621][ceph_volume.process][INFO ] Running command: /sbin/pvs --noheadings --readonly --units=b --nosuffix --separator=";" -o vg_name,pv_count,lv_count,vg_attr,vg_extent_count,vg_free_count,vg_extent_size /dev/sdi1
[2022-07-19 17:08:03,705][ceph_volume.process][INFO ] stderr Failed to find physical volume "/dev/sdi1".
[2022-07-19 17:08:03,705][ceph_volume.process][INFO ] Running command: /usr/bin/ceph-bluestore-tool show-label --dev /dev/sdi1
[2022-07-19 17:08:03,736][ceph_volume.process][INFO ] stderr unable to read label for /dev/sdi1: (2) No such file or directory
[2022-07-19 17:08:03,737][ceph_volume.process][INFO ] Running command: /usr/bin/ceph-bluestore-tool show-label --dev /dev/sdi1
[2022-07-19 17:08:03,768][ceph_volume.process][INFO ] stderr unable to read label for /dev/sdi1: (2) No such file or directory
[2022-07-19 17:08:03,769][ceph_volume.process][INFO ] Running command: /bin/udevadm info --query=property /dev/sdi1
[2022-07-19 17:08:03,773][ceph_volume.process][INFO ] stdout DEVPATH=/devices/pci0000:00/0000:00:01.0/0000:01:00.0/0000:02:08.0/0000:03:00.0/host1/port-1:3/end_device-1:3/target1:0:3/1:0:3:0/block/sdi/sdi1
[2022-07-19 17:08:03,773][ceph_volume.process][INFO ] stdout DEVNAME=/dev/sdi1
[2022-07-19 17:08:03,773][ceph_volume.process][INFO ] stdout DEVTYPE=partition
[2022-07-19 17:08:03,773][ceph_volume.process][INFO ] stdout PARTN=1
[2022-07-19 17:08:03,773][ceph_volume.process][INFO ] stdout PARTNAME=ceph-data
[2022-07-19 17:08:03,773][ceph_volume.process][INFO ] stdout MAJOR=8
[2022-07-19 17:08:03,774][ceph_volume.process][INFO ] stdout MINOR=129
[2022-07-19 17:08:03,774][ceph_volume.process][INFO ] stdout SUBSYSTEM=block
[2022-07-19 17:08:03,774][ceph_volume.process][INFO ] stdout USEC_INITIALIZED=1838389808
[2022-07-19 17:08:03,774][ceph_volume.process][INFO ] stdout SCSI_TPGS=0
[2022-07-19 17:08:03,774][ceph_volume.process][INFO ] stdout SCSI_TYPE=disk
[2022-07-19 17:08:03,774][ceph_volume.process][INFO ] stdout SCSI_VENDOR=ATA
[2022-07-19 17:08:03,774][ceph_volume.process][INFO ] stdout SCSI_VENDOR_ENC=ATA\x20\x20\x20\x20\x20
[2022-07-19 17:08:03,774][ceph_volume.process][INFO ] stdout SCSI_MODEL=SAMSUNG_HD204UI
[2022-07-19 17:08:03,774][ceph_volume.process][INFO ] stdout SCSI_MODEL_ENC=SAMSUNG\x20HD204UI\x20
[2022-07-19 17:08:03,774][ceph_volume.process][INFO ] stdout SCSI_REVISION=0001
[2022-07-19 17:08:03,774][ceph_volume.process][INFO ] stdout SCSI_IDENT_SERIAL=S2H7J1AZA06302
[2022-07-19 17:08:03,774][ceph_volume.process][INFO ] stdout SCSI_IDENT_LUN_VENDOR=S2H7J1AZA06302
[2022-07-19 17:08:03,774][ceph_volume.process][INFO ] stdout SCSI_IDENT_LUN_T10=ATA_SAMSUNG_HD204UI_S2H7J1AZA06302
[2022-07-19 17:08:03,775][ceph_volume.process][INFO ] stdout SCSI_IDENT_LUN_ATA=SAMSUNG_HD204UI_S2H7J1AZA06302
[2022-07-19 17:08:03,775][ceph_volume.process][INFO ] stdout SCSI_IDENT_LUN_NAA_REG=50024e900433173c
[2022-07-19 17:08:03,775][ceph_volume.process][INFO ] stdout ID_SCSI=1
[2022-07-19 17:08:03,775][ceph_volume.process][INFO ] stdout ID_VENDOR=ATA
[2022-07-19 17:08:03,775][ceph_volume.process][INFO ] stdout ID_VENDOR_ENC=ATA\x20\x20\x20\x20\x20
[2022-07-19 17:08:03,775][ceph_volume.process][INFO ] stdout ID_MODEL=SAMSUNG_HD204UI
[2022-07-19 17:08:03,775][ceph_volume.process][INFO ] stdout ID_MODEL_ENC=SAMSUNG\x20HD204UI\x20
[2022-07-19 17:08:03,775][ceph_volume.process][INFO ] stdout ID_REVISION=0001
[2022-07-19 17:08:03,775][ceph_volume.process][INFO ] stdout ID_TYPE=disk
[2022-07-19 17:08:03,775][ceph_volume.process][INFO ] stdout ID_WWN=0x50024e900433173c
[2022-07-19 17:08:03,775][ceph_volume.process][INFO ] stdout ID_WWN_WITH_EXTENSION=0x50024e900433173c
[2022-07-19 17:08:03,775][ceph_volume.process][INFO ] stdout ID_BUS=ata
[2022-07-19 17:08:03,775][ceph_volume.process][INFO ] stdout ID_ATA=1
[2022-07-19 17:08:03,776][ceph_volume.process][INFO ] stdout ID_SERIAL=SAMSUNG_HD204UI_S2H7J1AZA06302
[2022-07-19 17:08:03,776][ceph_volume.process][INFO ] stdout ID_SERIAL_SHORT=S2H7J1AZA06302
[2022-07-19 17:08:03,776][ceph_volume.process][INFO ] stdout DM_MULTIPATH_DEVICE_PATH=0
[2022-07-19 17:08:03,776][ceph_volume.process][INFO ] stdout ID_SCSI_INQUIRY=1
[2022-07-19 17:08:03,776][ceph_volume.process][INFO ] stdout ID_PATH=pci-0000:03:00.0-sas-phy3-lun-0
[2022-07-19 17:08:03,776][ceph_volume.process][INFO ] stdout ID_PATH_TAG=pci-0000_03_00_0-sas-phy3-lun-0
[2022-07-19 17:08:03,776][ceph_volume.process][INFO ] stdout ID_PART_TABLE_UUID=0d0b001f-5df1-4535-822b-1f9df11e95f2
[2022-07-19 17:08:03,776][ceph_volume.process][INFO ] stdout ID_PART_TABLE_TYPE=gpt
[2022-07-19 17:08:03,776][ceph_volume.process][INFO ] stdout ID_PART_ENTRY_SCHEME=gpt
[2022-07-19 17:08:03,776][ceph_volume.process][INFO ] stdout ID_PART_ENTRY_NAME=ceph-data
[2022-07-19 17:08:03,776][ceph_volume.process][INFO ] stdout ID_PART_ENTRY_UUID=3d06051e-1fac-4fbb-a88a-fd7c3be7c33f
[2022-07-19 17:08:03,776][ceph_volume.process][INFO ] stdout ID_PART_ENTRY_TYPE=0fc63daf-8483-4772-8e79-3d69d8477de4
[2022-07-19 17:08:03,776][ceph_volume.process][INFO ] stdout ID_PART_ENTRY_NUMBER=1
[2022-07-19 17:08:03,777][ceph_volume.process][INFO ] stdout ID_PART_ENTRY_OFFSET=2048
[2022-07-19 17:08:03,777][ceph_volume.process][INFO ] stdout ID_PART_ENTRY_SIZE=3907027087
[2022-07-19 17:08:03,777][ceph_volume.process][INFO ] stdout ID_PART_ENTRY_DISK=8:128
[2022-07-19 17:08:03,777][ceph_volume.process][INFO ] stdout DEVLINKS=/dev/disk/by-id/scsi-350024e900433173c-part1 /dev/disk/by-id/scsi-1ATA_SAMSUNG_HD204UI_S2H7J1AZA06302-part1 /dev/disk/by-id/scsi-SATA_SAMSUNG_HD204UI_S2H7J1AZA06302-part1 /dev/disk/by-partlabel/ceph-data /dev/disk/by-path/pci-0000:03:00.0-sas-phy3-lun-0-part1 /dev/disk/by-id/ata-SAMSUNG_HD204UI_S2H7J1AZA06302-part1 /dev/disk/by-id/scsi-0ATA_SAMSUNG_HD204UI_S2H7J1AZA06302-part1 /dev/disk/by-id/wwn-0x50024e900433173c-part1 /dev/disk/by-partuuid/3d06051e-1fac-4fbb-a88a-fd7c3be7c33f
[2022-07-19 17:08:03,777][ceph_volume.process][INFO ] stdout TAGS=:systemd:
[2022-07-19 17:08:03,777][ceph_volume.api.lvm][WARNING] device is not part of ceph: None
[2022-07-19 17:08:03,778][ceph_volume.process][INFO ] Running command: /usr/bin/ceph-authtool --gen-print-key
[2022-07-19 17:08:03,800][ceph_volume.process][INFO ] stdout AQDTyNZij82eLxAAbQp87qn+jzaQCRBtIwH4dA==
[2022-07-19 17:08:03,801][ceph_volume.process][INFO ] Running command: /usr/bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring -i - osd new e94e8fed-4c6e-49c6-a1ab-07825b9af02a
[2022-07-19 17:08:04,441][ceph_volume.process][INFO ] stdout 18
[2022-07-19 17:08:04,441][ceph_volume.process][INFO ] stderr 2022-07-19T17:08:03.934+0200 7fbb8e661700 -1 auth: unable to find a keyring on /etc/ceph/ceph.client.bootstrap-osd.keyring,/etc/ceph/ceph.keyring,/etc/ceph/keyring,/etc/ceph/keyring.bin,: (2) No such file or directory
[2022-07-19 17:08:04,441][ceph_volume.process][INFO ] stderr 2022-07-19T17:08:03.934+0200 7fbb8e661700 -1 AuthRegistry(0x7fbb880595f0) no keyring found at /etc/ceph/ceph.client.bootstrap-osd.keyring,/etc/ceph/ceph.keyring,/etc/ceph/keyring,/etc/ceph/keyring.bin,, disabling cephx
[2022-07-19 17:08:04,442][ceph_volume.process][INFO ] Running command: /bin/lsblk --nodeps -P -o NAME,KNAME,MAJ:MIN,FSTYPE,MOUNTPOINT,LABEL,UUID,RO,RM,MODEL,SIZE,STATE,OWNER,GROUP,MODE,ALIGNMENT,PHY-SEC,LOG-SEC,ROTA,SCHED,TYPE,DISC-ALN,DISC-GRAN,DISC-MAX,DISC-ZERO,PKNAME,PARTLABEL /dev/sdi1
[2022-07-19 17:08:04,447][ceph_volume.process][INFO ] stdout NAME="sdi1" KNAME="sdi1" MAJ:MIN="8:129" FSTYPE="" MOUNTPOINT="" LABEL="" UUID="" RO="0" RM="0" MODEL="" SIZE="1.8T" STATE="" OWNER="root" GROUP="disk" MODE="brw-rw----" ALIGNMENT="0" PHY-SEC="512" LOG-SEC="512" ROTA="1" SCHED="cfq" TYPE="part" DISC-ALN="0" DISC-GRAN="0B" DISC-MAX="0B" DISC-ZERO="0" PKNAME="sdi" PARTLABEL="ceph-data"
[2022-07-19 17:08:04,447][ceph_volume.devices.lvm.prepare][DEBUG ] data device size: 0.00 B
[2022-07-19 17:08:04,447][ceph_volume.process][INFO ] Running command: /sbin/pvs --noheadings --readonly --units=b --nosuffix --separator=";" -o vg_name,pv_count,lv_count,vg_attr,vg_extent_count,vg_free_count,vg_extent_size /dev/sdi1
[2022-07-19 17:08:04,533][ceph_volume.process][INFO ] stderr Failed to find physical volume "/dev/sdi1".
[2022-07-19 17:08:04,533][ceph_volume.process][INFO ] Running command: /sbin/vgcreate --force --yes ceph-7e5badc1-e48f-487e-8d6a-6e5cc9c04646 /dev/sdi1
[2022-07-19 17:08:04,593][ceph_volume.process][INFO ] stderr Cannot use /dev/sdi1: device is an md component
Command requires all devices to be found.
[2022-07-19 17:08:04,653][ceph_volume.devices.lvm.prepare][ERROR ] lvm prepare was unable to complete
Traceback (most recent call last):
File "/usr/lib/python3/dist-packages/ceph_volume/devices/lvm/prepare.py", line 252, in safe_prepare
self.prepare()
File "/usr/lib/python3/dist-packages/ceph_volume/decorators.py", line 16, in is_root
return func(*a, **kw)
File "/usr/lib/python3/dist-packages/ceph_volume/devices/lvm/prepare.py", line 363, in prepare
block_lv = self.prepare_data_device('block', osd_fsid)
File "/usr/lib/python3/dist-packages/ceph_volume/devices/lvm/prepare.py", line 218, in prepare_data_device
return api.create_lv(
File "/usr/lib/python3/dist-packages/ceph_volume/api/lvm.py", line 943, in create_lv
vg = create_vg(device, name_prefix='ceph')
File "/usr/lib/python3/dist-packages/ceph_volume/api/lvm.py", line 646, in create_vg
process.run([
File "/usr/lib/python3/dist-packages/ceph_volume/process.py", line 153, in run
raise RuntimeError(msg)
RuntimeError: command returned non-zero exit status: 5
[2022-07-19 17:08:04,654][ceph_volume.devices.lvm.prepare][INFO ] will rollback OSD ID creation
[2022-07-19 17:08:04,655][ceph_volume.process][INFO ] Running command: /usr/bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring osd purge-new osd.18 --yes-i-really-mean-it
[2022-07-19 17:08:04,806][ceph_volume.process][INFO ] stderr 2022-07-19T17:08:04.790+0200 7f2a1414b700 -1 auth: unable to find a keyring on /etc/ceph/ceph.client.bootstrap-osd.keyring,/etc/ceph/ceph.keyring,/etc/ceph/keyring,/etc/ceph/keyring.bin,: (2) No such file or directory
2022-07-19T17:08:04.790+0200 7f2a1414b700 -1 AuthRegistry(0x7f2a0c0595f0) no keyring found at /etc/ceph/ceph.client.bootstrap-osd.keyring,/etc/ceph/ceph.keyring,/etc/ceph/keyring,/etc/ceph/keyring.bin,, disabling cephx
[2022-07-19 17:08:05,286][ceph_volume.process][INFO ] stderr purged osd.18
[2022-07-19 17:08:05,299][ceph_volume][ERROR ] exception caught by decorator
Traceback (most recent call last):
File "/usr/lib/python3/dist-packages/ceph_volume/decorators.py", line 59, in newfunc
return f(*a, **kw)
File "/usr/lib/python3/dist-packages/ceph_volume/main.py", line 152, in main
terminal.dispatch(self.mapper, subcommand_args)
File "/usr/lib/python3/dist-packages/ceph_volume/terminal.py", line 194, in dispatch
instance.main()
File "/usr/lib/python3/dist-packages/ceph_volume/devices/lvm/main.py", line 46, in main
terminal.dispatch(self.mapper, self.argv)
File "/usr/lib/python3/dist-packages/ceph_volume/terminal.py", line 194, in dispatch
instance.main()
File "/usr/lib/python3/dist-packages/ceph_volume/devices/lvm/prepare.py", line 441, in main
self.safe_prepare()
File "/usr/lib/python3/dist-packages/ceph_volume/devices/lvm/prepare.py", line 252, in safe_prepare
self.prepare()
File "/usr/lib/python3/dist-packages/ceph_volume/decorators.py", line 16, in is_root
return func(*a, **kw)
File "/usr/lib/python3/dist-packages/ceph_volume/devices/lvm/prepare.py", line 363, in prepare
block_lv = self.prepare_data_device('block', osd_fsid)
File "/usr/lib/python3/dist-packages/ceph_volume/devices/lvm/prepare.py", line 218, in prepare_data_device
return api.create_lv(
File "/usr/lib/python3/dist-packages/ceph_volume/api/lvm.py", line 943, in create_lv
vg = create_vg(device, name_prefix='ceph')
File "/usr/lib/python3/dist-packages/ceph_volume/api/lvm.py", line 646, in create_vg
process.run([
File "/usr/lib/python3/dist-packages/ceph_volume/process.py", line 153, in run
raise RuntimeError(msg)
RuntimeError: command returned non-zero exit status: 5I already tried wiping them and creating a new partition table, but without success. Is there anything more i can try, or is there some hardware reason i cannot use these HDDs?
admin
2,930 Posts
Quote from admin on July 20, 2022, 4:29 pm
I already tried wiping them and creating a new partition table, but without success. Is there anything more I can try, or is there some hardware reason I cannot use these HDDs?
Try wipefs and ceph-volume lvm zap. Also, if there were existing LVM volumes, try to wipe them using the LVM commands.
If the disks were used outside of Linux, the wipe commands sometimes do not work (especially for disks coming from a FreeBSD system); in that case, try to dd zeros over the first and last 1G of the disk. Otherwise, try to wipe them from their original environment.
If the disks are bad, you would see kernel errors in dmesg.
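Something like the following, as an untested sketch (here /dev/sdX, /dev/sdX1 and old_vg are placeholders for the actual disk, its partition and the name of any leftover volume group):
# erase known filesystem / RAID / LVM signatures (destructive)
wipefs -a /dev/sdX1
wipefs -a /dev/sdX
# let ceph-volume clear any old Ceph/LVM metadata it recognizes on the device
ceph-volume lvm zap /dev/sdX
# if a leftover volume group is still listed by vgs/pvs, remove it explicitly
vgremove -f old_vg
pvremove -ff /dev/sdX1
# brute force: zero the first 1G (the last 1G can be zeroed the same way with seek)
dd if=/dev/zero of=/dev/sdX bs=1M count=1024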
hesse
2 Posts
Quote from hesse on July 25, 2022, 11:12 am
Thank you for the hint - overwriting the last part of the disk did the trick. If anyone else has problems like this, here is the bash snippet I used:
#what to wipe
TOWIPE=/dev/sdX
#wipe first 1G
dd if=/dev/zero of=$TOWIPE bs=1M count=1024
#wipe last 1G
dd if=/dev/zero of=$TOWIPE bs=1M seek=$(((`blockdev --getsz $TOWIPE` * 512 / 1024 / 1024) - 1023)) count=1024
It will give an error like "no space left on device" because it actually tries to write past the end of the disk, but when I tried writing right up to the end, it didn't work for me.
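For what it's worth, the vgcreate failure in the log above was "Cannot use /dev/sdi1: device is an md component", and Linux md RAID superblocks (metadata versions 0.90 and 1.0, which older NAS appliances commonly used) are stored near the end of the device - that would explain why zeroing the end of the disk is what finally cleared it. If mdadm is available, a more targeted alternative (untested here; /dev/sdX is again a placeholder) would be:
# show any md RAID superblock still present on the disk
mdadm --examine /dev/sdX
# zero the md superblock directly (destructive)
mdadm --zero-superblock /dev/sdX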