Unable to create OSD
setantatx
7 Posts
May 24, 2022, 4:30 pm
Hi All,
I'm new to this project and I have already come across a few issues. My setup consists of 3 nodes, each with 12 x 2TB drives configured as a RAID 5 array on an LSI controller. The initial configuration worked fine for me, but after I messed up some settings I decided to start from scratch. However, the fresh install attempt failed on creating the OSDs. I also tried creating an OSD from the web GUI, which changed the usage status of the drive from none to mounted, but it doesn't seem to be available as an OSD drive.
As I mentioned, this is all new to me and I would appreciate any help on this matter.
24/05/2022 17:00:17 ERROR Error executing : ceph-volume --log-path /opt/petasan/log lvm prepare --filestore --data /dev/sda2 --journal /dev/sda1
24/05/2022 17:00:13 INFO Starting : ceph-volume --log-path /opt/petasan/log lvm prepare --filestore --data /dev/sda2 --journal /dev/sda1
24/05/2022 17:00:10 INFO Executing udevadm settle --timeout 30
24/05/2022 17:00:10 INFO Calling udevadm on sda device
24/05/2022 17:00:10 INFO Executing partprobe /dev/sda
24/05/2022 17:00:10 INFO Calling partprobe on sda device
24/05/2022 17:00:09 INFO Creating data partition num 2 size 19072000MB on /dev/sda
24/05/2022 17:00:09 INFO Start prepare filestore OSD : sda
24/05/2022 17:00:09 INFO User didn't select a cache for disk sda.
24/05/2022 17:00:06 INFO Executing udevadm settle --timeout 30
24/05/2022 17:00:06 INFO Calling udevadm on sda device
24/05/2022 17:00:06 INFO Executing partprobe /dev/sda
24/05/2022 17:00:06 INFO Calling partprobe on sda device
24/05/2022 17:00:05 INFO Creating journal partition num 1 size 20480MB on /dev/sda
24/05/2022 17:00:04 INFO User didn't select a journal for disk sda, so the journal will be on the same disk.
24/05/2022 17:00:01 INFO Executing : partprobe /dev/sda
24/05/2022 17:00:01 INFO Executing : parted -s /dev/sda mklabel gpt
24/05/2022 17:00:01 INFO Executing : dd if=/dev/zero of=/dev/sda bs=1M count=20 oflag=direct,dsync >/dev/null 2>&1
admin
2,930 Posts
May 25, 2022, 8:18 am
The ceph-volume prepare command failed; you can get its logs from:
/opt/petasan/log/ceph-volume.log
Also, you seem to be using the filestore engine rather than bluestore; if you don't have a special reason for this, we recommend using the default bluestore.
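For example, you could pull the tail of that log and filter for the failing step with something like the commands below (the line counts and the grep pattern are only suggestions, adjust them as needed):
# show the most recent entries
tail -n 200 /opt/petasan/log/ceph-volume.log
# show only the last few error/stderr lines
grep -iE "error|stderr" /opt/petasan/log/ceph-volume.log | tail -n 20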
Last edited on May 25, 2022, 8:19 am by admin · #2
setantatx
7 Posts
May 25, 2022, 3:42 pm
To be honest, the engine type was picked by the default configuration during the installation process. Apologies for my ignorance; I should know what the difference between the two is, but I haven't got that far yet.
Please find my ceph-volume.log below.
[2022-05-23 15:11:13,271][ceph_volume.main][INFO ] Running command: ceph-volume --log-path /opt/petasan/log lvm prepare --filestore --data ps-e51bc678-0e20-462a-b537-493485d153ee-wc-osd.0/main --journal /dev/sda1
[2022-05-23 15:11:13,275][ceph_volume.process][INFO ] Running command: /usr/bin/lsblk -plno KNAME,NAME,TYPE
[2022-05-23 15:11:13,285][ceph_volume.process][INFO ] stdout /dev/sda /dev/sda disk
[2022-05-23 15:11:13,285][ceph_volume.process][INFO ] stdout /dev/sda1 /dev/sda1 part
[2022-05-23 15:11:13,285][ceph_volume.process][INFO ] stdout /dev/sda2 /dev/sda2 part
[2022-05-23 15:11:13,285][ceph_volume.process][INFO ] stdout /dev/sdb /dev/sdb disk
[2022-05-23 15:11:13,285][ceph_volume.process][INFO ] stdout /dev/sdb1 /dev/sdb1 part
[2022-05-23 15:11:13,285][ceph_volume.process][INFO ] stdout /dev/sdb2 /dev/sdb2 part
[2022-05-23 15:11:13,285][ceph_volume.process][INFO ] stdout /dev/sdb3 /dev/sdb3 part
[2022-05-23 15:11:13,285][ceph_volume.process][INFO ] stdout /dev/sdb4 /dev/sdb4 part
[2022-05-23 15:11:13,285][ceph_volume.process][INFO ] stdout /dev/sdb5 /dev/sdb5 part
[2022-05-23 15:11:13,285][ceph_volume.process][INFO ] stdout /dev/sdc /dev/sdc disk
[2022-05-23 15:11:13,286][ceph_volume.process][INFO ] stdout /dev/sdc1 /dev/sdc1 part
[2022-05-23 15:11:13,286][ceph_volume.process][INFO ] stdout /dev/sdc2 /dev/sdc2 part
[2022-05-23 15:11:13,286][ceph_volume.process][INFO ] stdout /dev/dm-0 /dev/mapper/ps- -e51bc678--0e20--462a--b537--493485d153ee--wc--osd.0-main lvm
[2022-05-23 15:11:13,286][ceph_volume.process][INFO ] stdout /dev/dm-1 /dev/mapper/ps- -e51bc678--0e20--462a--b537--493485d153ee--wc--osd.0-cache_cvol lvm
[2022-05-23 15:11:13,286][ceph_volume.process][INFO ] stdout /dev/dm-2 /dev/mapper/ps- -e51bc678--0e20--462a--b537--493485d153ee--wc--osd.0-main_wcorig lvm
[2022-05-23 15:11:13,294][ceph_volume.process][INFO ] Running command: /usr/sbin/lvs - -noheadings --readonly --separator=";" -a --units=b --nosuffix -S lv_name=main,vg_name= ps-e51bc678-0e20-462a-b537-493485d153ee-wc-osd.0 -o lv_tags,lv_path,lv_name,vg_name,lv_ uuid,lv_size
[2022-05-23 15:11:13,361][ceph_volume.process][INFO ] stdout ";"/dev/ps-e51bc678-0e20- 462a-b537-493485d153ee-wc-osd.0/main";"main";"ps-e51bc678-0e20-462a-b537-493485d153ee-w c-osd.0";"UDViz0-ihtf-t3rP-00lV-WsGt-l0w2-2HqoY0";"21976806588416
[2022-05-23 15:11:13,361][ceph_volume.process][INFO ] Running command: /usr/bin/ceph-bluestore-tool show-label --dev /dev/ps-e51bc678-0e20-462a-b537-493485d153ee-wc-osd.0/main
[2022-05-23 15:11:13,455][ceph_volume.process][INFO ] stderr unable to read label for /dev/ps-e51bc678-0e20-462a-b537-493485d153ee-wc-osd.0/main: (2) No such file or directory
[2022-05-23 15:11:13,456][ceph_volume.process][INFO ] Running command: /usr/bin/ceph-bluestore-tool show-label --dev /dev/ps-e51bc678-0e20-462a-b537-493485d153ee-wc-osd.0/main
[2022-05-23 15:11:13,485][ceph_volume.process][INFO ] stderr unable to read label for /dev/ps-e51bc678-0e20-462a-b537-493485d153ee-wc-osd.0/main: (2) No such file or directory
[2022-05-23 15:11:13,485][ceph_volume.process][INFO ] Running command: /usr/bin/udevad m info --query=property /dev/ps-e51bc678-0e20-462a-b537-493485d153ee-wc-osd.0/main
[2022-05-23 15:11:13,489][ceph_volume.process][INFO ] stdout DEVPATH=/devices/virtual/ block/dm-0
[2022-05-23 15:11:13,489][ceph_volume.process][INFO ] stdout DEVNAME=/dev/dm-0
[2022-05-23 15:11:13,489][ceph_volume.process][INFO ] stdout DEVTYPE=disk
[2022-05-23 15:11:13,489][ceph_volume.process][INFO ] stdout MAJOR=254
[2022-05-23 15:11:13,489][ceph_volume.process][INFO ] stdout MINOR=0
[2022-05-23 15:11:13,489][ceph_volume.process][INFO ] stdout SUBSYSTEM=block
[2022-05-23 15:11:13,490][ceph_volume.process][INFO ] stdout USEC_INITIALIZED=48176660 44
[2022-05-23 15:11:13,490][ceph_volume.process][INFO ] stdout DM_UDEV_DISABLE_LIBRARY_F ALLBACK_FLAG=1
[2022-05-23 15:11:13,490][ceph_volume.process][INFO ] stdout DM_UDEV_PRIMARY_SOURCE_FL AG=1
[2022-05-23 15:11:13,490][ceph_volume.process][INFO ] stdout DM_ACTIVATION=1
[2022-05-23 15:11:13,490][ceph_volume.process][INFO ] stdout DM_NAME=ps--e51bc678--0e2 0--462a--b537--493485d153ee--wc--osd.0-main
[2022-05-23 15:11:13,490][ceph_volume.process][INFO ] stdout DM_UUID=LVM-kEm7AJau7LtL3 yx2B5Ar3aWeuAyhuOxOUDViz0ihtft3rP00lVWsGtl0w22HqoY0
[2022-05-23 15:11:13,490][ceph_volume.process][INFO ] stdout DM_SUSPENDED=0
[2022-05-23 15:11:13,490][ceph_volume.process][INFO ] stdout DM_UDEV_RULES=1
[2022-05-23 15:11:13,490][ceph_volume.process][INFO ] stdout DM_UDEV_RULES_VSN=2
[2022-05-23 15:11:13,490][ceph_volume.process][INFO ] stdout DM_VG_NAME=ps-e51bc678-0e 20-462a-b537-493485d153ee-wc-osd.0
[2022-05-23 15:11:13,490][ceph_volume.process][INFO ] stdout DM_LV_NAME=main
[2022-05-23 15:11:13,490][ceph_volume.process][INFO ] stdout DM_TABLE_STATE=LIVE
[2022-05-23 15:11:13,490][ceph_volume.process][INFO ] stdout DM_STATE=ACTIVE
[2022-05-23 15:11:13,490][ceph_volume.process][INFO ] stdout DEVLINKS=/dev/disk/by-id/ dm-name-ps--e51bc678--0e20--462a--b537--493485d153ee--wc--osd.0-main /dev/disk/by-id/dm -uuid-LVM-kEm7AJau7LtL3yx2B5Ar3aWeuAyhuOxOUDViz0ihtft3rP00lVWsGtl0w22HqoY0 /dev/ps-e51b c678-0e20-462a-b537-493485d153ee-wc-osd.0/main /dev/mapper/ps--e51bc678--0e20--462a--b5 37--493485d153ee--wc--osd.0-main
[2022-05-23 15:11:13,491][ceph_volume.process][INFO ] stdout TAGS=:systemd:
[2022-05-23 15:11:13,491][ceph_volume.process][INFO ] Running command: /usr/sbin/lvs - -noheadings --readonly --separator=";" -a --units=b --nosuffix -S lv_name=main,vg_name= ps-e51bc678-0e20-462a-b537-493485d153ee-wc-osd.0 -o lv_tags,lv_path,lv_name,vg_name,lv_ uuid,lv_size
[2022-05-23 15:11:13,545][ceph_volume.process][INFO ] stdout ";"/dev/ps-e51bc678-0e20- 462a-b537-493485d153ee-wc-osd.0/main";"main";"ps-e51bc678-0e20-462a-b537-493485d153ee-w c-osd.0";"UDViz0-ihtf-t3rP-00lV-WsGt-l0w2-2HqoY0";"21976806588416
[2022-05-23 15:11:13,545][ceph_volume.api.lvm][WARNING] device is not part of ceph: </dev/ps-e51bc678-0e20-462a-b537-493485d153ee-wc-osd.0/main>
[2022-05-23 15:11:13,546][ceph_volume.process][INFO ] Running command: /usr/bin/ceph-a uthtool --gen-print-key
[2022-05-23 15:11:13,567][ceph_volume.process][INFO ] stdout AQABlotiXS23IRAAYIxkLyUad ewf5fY4RYVbgA==
[2022-05-23 15:11:13,567][ceph_volume.process][INFO ] Running command: /usr/bin/ceph - -cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.ke yring -i - osd new b486d663-8fa0-4af2-af12-07d9a917f452
[2022-05-23 15:11:14,108][ceph_volume.process][INFO ] stdout 0
[2022-05-23 15:11:14,108][ceph_volume.process][INFO ] stderr 2022-05-23T15:11:13.692+0100 7f0808ffa700 -1 auth: unable to find a keyring on /etc/ceph/ceph.client.bootstrap-osd.keyring,/etc/ceph/ceph.keyring,/etc/ceph/keyring,/etc/ceph/keyring.bin,: (2) No such file or directory
[2022-05-23 15:11:14,108][ceph_volume.process][INFO ] stderr 2022-05-23T15:11:13.692+0100 7f0808ffa700 -1 AuthRegistry(0x7f08040595f0) no keyring found at /etc/ceph/ceph.client.bootstrap-osd.keyring,/etc/ceph/ceph.keyring,/etc/ceph/keyring,/etc/ceph/keyring.bin,, disabling cephx
[2022-05-23 15:11:14,108][ceph_volume.process][INFO ] Running command: /usr/bin/lsblk --nodeps -P -o NAME,KNAME,MAJ:MIN,FSTYPE,MOUNTPOINT,LABEL,UUID,RO,RM,MODEL,SIZE,STATE,O WNER,GROUP,MODE,ALIGNMENT,PHY-SEC,LOG-SEC,ROTA,SCHED,TYPE,DISC-ALN,DISC-GRAN,DISC-MAX,D ISC-ZERO,PKNAME,PARTLABEL /dev/sda1
[2022-05-23 15:11:14,113][ceph_volume.process][INFO ] stdout NAME="sda1" KNAME="sda1" MAJ:MIN="8:1" FSTYPE="" MOUNTPOINT="" LABEL="" UUID="" RO="0" RM="0" MODEL="" SIZE="20G " STATE="" OWNER="root" GROUP="disk" MODE="brw-rw----" ALIGNMENT="0" PHY-SEC="4096" LOG -SEC="512" ROTA="1" SCHED="cfq" TYPE="part" DISC-ALN="0" DISC-GRAN="0B" DISC-MAX="0B" D ISC-ZERO="0" PKNAME="sda" PARTLABEL="ceph-journal"
[2022-05-23 15:11:14,113][ceph_volume.process][INFO ] Running command: /usr/sbin/blkid -s PARTUUID -o value /dev/sda1
[2022-05-23 15:11:14,119][ceph_volume.process][INFO ] stdout c081bde1-8e12-43f5-aeaa-5 291fbbabf8f
[2022-05-23 15:11:14,119][ceph_volume.process][INFO ] Running command: /usr/sbin/lvs - -noheadings --readonly --separator=";" -a --units=b --nosuffix -S lv_name=main,vg_name= ps-e51bc678-0e20-462a-b537-493485d153ee-wc-osd.0 -o lv_tags,lv_path,lv_name,vg_name,lv_ uuid,lv_size
[2022-05-23 15:11:14,177][ceph_volume.process][INFO ] stdout ";"/dev/ps-e51bc678-0e20- 462a-b537-493485d153ee-wc-osd.0/main";"main";"ps-e51bc678-0e20-462a-b537-493485d153ee-w c-osd.0";"UDViz0-ihtf-t3rP-00lV-WsGt-l0w2-2HqoY0";"21976806588416
[2022-05-23 15:11:14,177][ceph_volume.process][INFO ] Running command: /usr/sbin/lvcha nge --addtag ceph.osd_fsid=b486d663-8fa0-4af2-af12-07d9a917f452 --addtag ceph.osd_id=0 --addtag ceph.cluster_fsid=e51bc678-0e20-462a-b537-493485d153ee --addtag ceph.cluster_n ame=ceph --addtag ceph.crush_device_class=None --addtag ceph.osdspec_affinity= --addtag ceph.cephx_lockbox_secret= --addtag ceph.encrypted=0 --addtag ceph.type=data --addtag ceph.vdo=0 --addtag ceph.journal_uuid=c081bde1-8e12-43f5-aeaa-5291fbbabf8f --addtag cep h.journal_device=/dev/sda1 --addtag ceph.data_device=/dev/ps-e51bc678-0e20-462a-b537-49 3485d153ee-wc-osd.0/main --addtag ceph.data_uuid=UDViz0-ihtf-t3rP-00lV-WsGt-l0w2-2HqoY0 /dev/ps-e51bc678-0e20-462a-b537-493485d153ee-wc-osd.0/main
[2022-05-23 15:11:14,265][ceph_volume.process][INFO ] stdout Logical volume ps-e51bc67 8-0e20-462a-b537-493485d153ee-wc-osd.0/main changed.
[2022-05-23 15:11:14,266][ceph_volume.process][INFO ] Running command: /usr/bin/ceph-a uthtool --gen-print-key
[2022-05-23 15:11:14,286][ceph_volume.process][INFO ] stdout AQAClotinPUCERAA5p/tE3iZe oBJJzKnHFYOYQ==
[2022-05-23 15:11:14,288][ceph_volume.process][INFO ] Running command: /usr/sbin/mkfs -t xfs -f -i size=2048 /dev/ps-e51bc678-0e20-462a-b537-493485d153ee-wc-osd.0/main
[2022-05-23 15:11:20,843][ceph_volume.process][INFO ] stdout meta-data=/dev/ps-e51bc67 8-0e20-462a-b537-493485d153ee-wc-osd.0/main isize=2048 agcount=20, agsize=268435455 b lks
= sectsz=4096 attr=2, projid32bit=1
= crc=1 finobt=1, sparse=1, rmapbt=0
= reflink=1
data = bsize=4096 blocks=5365431296, imaxpct=5
= sunit=0 swidth=0 blks
naming =version 2 bsize=4096 ascii-ci=0, ftype=1
log =internal log bsize=4096 blocks=521728, version=2
= sectsz=4096 sunit=1 blks, lazy-count=1
realtime =none extsz=4096 blocks=0, rtextents=0
[2022-05-23 15:11:20,844][ceph_volume.process][INFO ] Running command: /usr/bin/mount -t xfs -o rw,noatime,inode64 /dev/ps-e51bc678-0e20-462a-b537-493485d153ee-wc-osd.0/main /var/lib/ceph/osd/ceph-0
[2022-05-23 15:11:20,878][ceph_volume.process][INFO ] stderr mount: /var/lib/ceph/osd/ceph-0: mount(2) system call failed: Operation not supported.
[2022-05-23 15:11:20,878][ceph_volume.devices.lvm.prepare][ERROR ] lvm prepare was unable to complete
Traceback (most recent call last):
File "/usr/lib/python3/dist-packages/ceph_volume/devices/lvm/prepare.py", line 252, i n safe_prepare
self.prepare()
File "/usr/lib/python3/dist-packages/ceph_volume/decorators.py", line 16, in is_root
return func(*a, **kw)
File "/usr/lib/python3/dist-packages/ceph_volume/devices/lvm/prepare.py", line 346, i n prepare
prepare_filestore(
File "/usr/lib/python3/dist-packages/ceph_volume/devices/lvm/prepare.py", line 65, in prepare_filestore
prepare_utils.mount_osd(device, osd_id, is_vdo=is_vdo)
File "/usr/lib/python3/dist-packages/ceph_volume/util/prepare.py", line 320, in mount _osd
process.run(command)
File "/usr/lib/python3/dist-packages/ceph_volume/process.py", line 153, in run
raise RuntimeError(msg)
RuntimeError: command returned non-zero exit status: 32
[2022-05-23 15:11:20,880][ceph_volume.devices.lvm.prepare][INFO ] will rollback OSD ID creation
[2022-05-23 15:11:20,880][ceph_volume.process][INFO ] Running command: /usr/bin/ceph - -cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.ke yring osd purge-new osd.0 --yes-i-really-mean-it
[2022-05-23 15:11:21,019][ceph_volume.process][INFO ] stderr 2022-05-23T15:11:21.004+0 100 7fc33e3ba700 -1 auth: unable to find a keyring on /etc/ceph/ceph.client.bootstrap-o sd.keyring,/etc/ceph/ceph.keyring,/etc/ceph/keyring,/etc/ceph/keyring.bin,: (2) No such file or directory
2022-05-23T15:11:21.004+0100 7fc33e3ba700 -1 AuthRegistry(0x7fc3380595f0) no keyring fo und at /etc/ceph/ceph.client.bootstrap-osd.keyring,/etc/ceph/ceph.keyring,/etc/ceph/key ring,/etc/ceph/keyring.bin,, disabling cephx
[2022-05-23 15:11:21,409][ceph_volume.process][INFO ] stderr purged osd.0
[2022-05-23 15:11:21,421][ceph_volume][ERROR ] exception caught by decorator
Traceback (most recent call last):
File "/usr/lib/python3/dist-packages/ceph_volume/decorators.py", line 59, in newfunc
return f(*a, **kw)
File "/usr/lib/python3/dist-packages/ceph_volume/main.py", line 152, in main
terminal.dispatch(self.mapper, subcommand_args)
File "/usr/lib/python3/dist-packages/ceph_volume/terminal.py", line 194, in dispatch
instance.main()
File "/usr/lib/python3/dist-packages/ceph_volume/devices/lvm/main.py", line 46, in ma in
terminal.dispatch(self.mapper, self.argv)
File "/usr/lib/python3/dist-packages/ceph_volume/terminal.py", line 194, in dispatch
instance.main()
File "/usr/lib/python3/dist-packages/ceph_volume/devices/lvm/prepare.py", line 441, i n main
self.safe_prepare()
File "/usr/lib/python3/dist-packages/ceph_volume/devices/lvm/prepare.py", line 252, i n safe_prepare
self.prepare()
File "/usr/lib/python3/dist-packages/ceph_volume/decorators.py", line 16, in is_root
return func(*a, **kw)
File "/usr/lib/python3/dist-packages/ceph_volume/devices/lvm/prepare.py", line 346, i n prepare
prepare_filestore(
File "/usr/lib/python3/dist-packages/ceph_volume/devices/lvm/prepare.py", line 65, in prepare_filestore
prepare_utils.mount_osd(device, osd_id, is_vdo=is_vdo)
File "/usr/lib/python3/dist-packages/ceph_volume/util/prepare.py", line 320, in mount _osd
process.run(command)
File "/usr/lib/python3/dist-packages/ceph_volume/process.py", line 153, in run
raise RuntimeError(msg)
RuntimeError: command returned non-zero exit status: 32
[2022-05-24 13:36:17,183][ceph_volume.main][INFO ] Running command: ceph-volume --log-path /opt/petasan/log lvm prepare --filestore --data /dev/sda2 --journal /dev/sda1
[2022-05-24 13:36:17,186][ceph_volume.process][INFO ] Running command: /usr/bin/lsblk -plno KNAME,NAME,TYPE
[2022-05-24 13:36:17,192][ceph_volume.process][INFO ] stdout /dev/sda /dev/sda disk
[2022-05-24 13:36:17,192][ceph_volume.process][INFO ] stdout /dev/sda1 /dev/sda1 part
[2022-05-24 13:36:17,192][ceph_volume.process][INFO ] stdout /dev/sda2 /dev/sda2 part
[2022-05-24 13:36:17,192][ceph_volume.process][INFO ] stdout /dev/sdb /dev/sdb disk
[2022-05-24 13:36:17,192][ceph_volume.process][INFO ] stdout /dev/sdb1 /dev/sdb1 part
[2022-05-24 13:36:17,192][ceph_volume.process][INFO ] stdout /dev/sdb2 /dev/sdb2 part
[2022-05-24 13:36:17,192][ceph_volume.process][INFO ] stdout /dev/sdb3 /dev/sdb3 part
[2022-05-24 13:36:17,192][ceph_volume.process][INFO ] stdout /dev/sdb4 /dev/sdb4 part
[2022-05-24 13:36:17,192][ceph_volume.process][INFO ] stdout /dev/sdb5 /dev/sdb5 part
[2022-05-24 13:36:17,192][ceph_volume.process][INFO ] stdout /dev/sdc /dev/sdc disk
[2022-05-24 13:36:17,193][ceph_volume.process][INFO ] stdout /dev/sdc1 /dev/sdc1 part
[2022-05-24 13:36:17,193][ceph_volume.process][INFO ] stdout /dev/sdc2 /dev/sdc2 part
[2022-05-24 13:36:17,201][ceph_volume.process][INFO ] Running command: /usr/sbin/lvs - -noheadings --readonly --separator=";" -a --units=b --nosuffix -S lv_path=/dev/sda2 -o lv_tags,lv_path,lv_name,vg_name,lv_uuid,lv_size
[2022-05-24 13:36:17,373][ceph_volume.process][INFO ] Running command: /usr/bin/lsblk --nodeps -P -o NAME,KNAME,MAJ:MIN,FSTYPE,MOUNTPOINT,LABEL,UUID,RO,RM,MODEL,SIZE,STATE,O WNER,GROUP,MODE,ALIGNMENT,PHY-SEC,LOG-SEC,ROTA,SCHED,TYPE,DISC-ALN,DISC-GRAN,DISC-MAX,D ISC-ZERO,PKNAME,PARTLABEL /dev/sda2
[2022-05-24 13:36:17,378][ceph_volume.process][INFO ] stdout NAME="sda2" KNAME="sda2" MAJ:MIN="8:2" FSTYPE="" MOUNTPOINT="" LABEL="" UUID="" RO="0" RM="0" MODEL="" SIZE="18. 2T" STATE="" OWNER="root" GROUP="disk" MODE="brw-rw----" ALIGNMENT="0" PHY-SEC="4096" L OG-SEC="512" ROTA="1" SCHED="cfq" TYPE="part" DISC-ALN="0" DISC-GRAN="0B" DISC-MAX="0B" DISC-ZERO="0" PKNAME="sda" PARTLABEL="ceph-data"
[2022-05-24 13:36:17,378][ceph_volume.process][INFO ] Running command: /usr/sbin/blkid -p /dev/sda2
[2022-05-24 13:36:17,384][ceph_volume.process][INFO ] stdout /dev/sda2: PART_ENTRY_SCH EME="gpt" PART_ENTRY_NAME="ceph-data" PART_ENTRY_UUID="076b9d21-1e26-4751-b602-2b108d59 e8df" PART_ENTRY_TYPE="0fc63daf-8483-4772-8e79-3d69d8477de4" PART_ENTRY_NUMBER="2" PART _ENTRY_OFFSET="41945088" PART_ENTRY_SIZE="39017510879" PART_ENTRY_DISK="8:0"
[2022-05-24 13:36:17,385][ceph_volume.process][INFO ] Running command: /usr/sbin/pvs - -noheadings --readonly --units=b --nosuffix --separator=";" -o vg_name,pv_count,lv_coun t,vg_attr,vg_extent_count,vg_free_count,vg_extent_size /dev/sda2
[2022-05-24 13:36:17,437][ceph_volume.process][INFO ] stderr Failed to find physical v olume "/dev/sda2".
[2022-05-24 13:36:17,438][ceph_volume.process][INFO ] Running command: /usr/bin/ceph-b luestore-tool show-label --dev /dev/sda2
[2022-05-24 13:36:17,531][ceph_volume.process][INFO ] stderr unable to read label for /dev/sda2: (2) No such file or directory
[2022-05-24 13:36:17,531][ceph_volume.process][INFO ] Running command: /usr/bin/ceph-b luestore-tool show-label --dev /dev/sda2
[2022-05-24 13:36:17,560][ceph_volume.process][INFO ] stderr unable to read label for /dev/sda2: (2) No such file or directory
[2022-05-24 13:36:17,561][ceph_volume.process][INFO ] Running command: /usr/bin/udevad m info --query=property /dev/sda2
[2022-05-24 13:36:17,565][ceph_volume.process][INFO ] stdout DEVPATH=/devices/pci0000: 00/0000:00:03.2/0000:06:00.0/host0/target0:2:0/0:2:0:0/block/sda/sda2
[2022-05-24 13:36:17,565][ceph_volume.process][INFO ] stdout DEVNAME=/dev/sda2
[2022-05-24 13:36:17,565][ceph_volume.process][INFO ] stdout DEVTYPE=partition
[2022-05-24 13:36:17,565][ceph_volume.process][INFO ] stdout PARTN=2
[2022-05-24 13:36:17,565][ceph_volume.process][INFO ] stdout PARTNAME=ceph-data
[2022-05-24 13:36:17,566][ceph_volume.process][INFO ] stdout MAJOR=8
[2022-05-24 13:36:17,566][ceph_volume.process][INFO ] stdout MINOR=2
[2022-05-24 13:36:17,566][ceph_volume.process][INFO ] stdout SUBSYSTEM=block
[2022-05-24 13:36:17,566][ceph_volume.process][INFO ] stdout USEC_INITIALIZED=78736261 179
[2022-05-24 13:36:17,566][ceph_volume.process][INFO ] stdout SCSI_TPGS=0
[2022-05-24 13:36:17,566][ceph_volume.process][INFO ] stdout SCSI_TYPE=disk
[2022-05-24 13:36:17,566][ceph_volume.process][INFO ] stdout SCSI_VENDOR=LSI
[2022-05-24 13:36:17,566][ceph_volume.process][INFO ] stdout SCSI_VENDOR_ENC=LSI\x20\x 20\x20\x20\x20
[2022-05-24 13:36:17,566][ceph_volume.process][INFO ] stdout SCSI_MODEL=MR9266-4i
[2022-05-24 13:36:17,566][ceph_volume.process][INFO ] stdout SCSI_MODEL_ENC=MR9266-4i\ x20\x20\x20\x20\x20\x20\x20
[2022-05-24 13:36:17,566][ceph_volume.process][INFO ] stdout SCSI_REVISION=3.27
[2022-05-24 13:36:17,566][ceph_volume.process][INFO ] stdout SCSI_IDENT_SERIAL=00e9876 7098a591e2a40d8ec06b00506
[2022-05-24 13:36:17,566][ceph_volume.process][INFO ] stdout SCSI_IDENT_LUN_NAA_REGEXT =600605b006ecd8402a1e598a096787e9
[2022-05-24 13:36:17,566][ceph_volume.process][INFO ] stdout ID_SCSI=1
[2022-05-24 13:36:17,567][ceph_volume.process][INFO ] stdout ID_VENDOR=LSI
[2022-05-24 13:36:17,567][ceph_volume.process][INFO ] stdout ID_VENDOR_ENC=LSI\x20\x20 \x20\x20\x20
[2022-05-24 13:36:17,567][ceph_volume.process][INFO ] stdout ID_MODEL=MR9266-4i
[2022-05-24 13:36:17,567][ceph_volume.process][INFO ] stdout ID_MODEL_ENC=MR9266-4i\x2 0\x20\x20\x20\x20\x20\x20
[2022-05-24 13:36:17,567][ceph_volume.process][INFO ] stdout ID_REVISION=3.27
[2022-05-24 13:36:17,567][ceph_volume.process][INFO ] stdout ID_TYPE=disk
[2022-05-24 13:36:17,567][ceph_volume.process][INFO ] stdout ID_WWN=0x600605b006ecd840 2a1e598a096787e9
[2022-05-24 13:36:17,567][ceph_volume.process][INFO ] stdout ID_WWN_WITH_EXTENSION=0x6 00605b006ecd8402a1e598a096787e9
[2022-05-24 13:36:17,567][ceph_volume.process][INFO ] stdout ID_BUS=scsi
[2022-05-24 13:36:17,567][ceph_volume.process][INFO ] stdout ID_SERIAL=3600605b006ecd8 402a1e598a096787e9
[2022-05-24 13:36:17,567][ceph_volume.process][INFO ] stdout ID_SERIAL_SHORT=600605b00 6ecd8402a1e598a096787e9
[2022-05-24 13:36:17,567][ceph_volume.process][INFO ] stdout DM_MULTIPATH_DEVICE_PATH= 0
[2022-05-24 13:36:17,567][ceph_volume.process][INFO ] stdout ID_SCSI_INQUIRY=1
[2022-05-24 13:36:17,567][ceph_volume.process][INFO ] stdout ID_PATH=pci-0000:06:00.0- scsi-0:2:0:0
[2022-05-24 13:36:17,568][ceph_volume.process][INFO ] stdout ID_PATH_TAG=pci-0000_06_0 0_0-scsi-0_2_0_0
[2022-05-24 13:36:17,568][ceph_volume.process][INFO ] stdout ID_PART_TABLE_UUID=fec0fa a0-0d54-4e19-a330-eaf777dd4911
[2022-05-24 13:36:17,568][ceph_volume.process][INFO ] stdout ID_PART_TABLE_TYPE=gpt
[2022-05-24 13:36:17,568][ceph_volume.process][INFO ] stdout ID_PART_ENTRY_SCHEME=gpt
[2022-05-24 13:36:17,568][ceph_volume.process][INFO ] stdout ID_PART_ENTRY_NAME=ceph-d ata
[2022-05-24 13:36:17,568][ceph_volume.process][INFO ] stdout ID_PART_ENTRY_UUID=076b9d 21-1e26-4751-b602-2b108d59e8df
[2022-05-24 13:36:17,568][ceph_volume.process][INFO ] stdout ID_PART_ENTRY_TYPE=0fc63d af-8483-4772-8e79-3d69d8477de4
[2022-05-24 13:36:17,568][ceph_volume.process][INFO ] stdout ID_PART_ENTRY_NUMBER=2
[2022-05-24 13:36:17,568][ceph_volume.process][INFO ] stdout ID_PART_ENTRY_OFFSET=4194 5088
[2022-05-24 13:36:17,568][ceph_volume.process][INFO ] stdout ID_PART_ENTRY_SIZE=390175 10879
[2022-05-24 13:36:17,568][ceph_volume.process][INFO ] stdout ID_PART_ENTRY_DISK=8:0
[2022-05-24 13:36:17,568][ceph_volume.process][INFO ] stdout DEVLINKS=/dev/disk/by-par tuuid/076b9d21-1e26-4751-b602-2b108d59e8df /dev/disk/by-path/pci-0000:06:00.0-scsi-0:2: 0:0-part2 /dev/disk/by-partlabel/ceph-data /dev/disk/by-id/wwn-0x600605b006ecd8402a1e59 8a096787e9-part2 /dev/disk/by-id/scsi-3600605b006ecd8402a1e598a096787e9-part2 /dev/disk /by-id/scsi-SLSI_MR9266-4i_00e98767098a591e2a40d8ec06b00506-part2
[2022-05-24 13:36:17,568][ceph_volume.process][INFO ] stdout TAGS=:systemd:
[2022-05-24 13:36:17,569][ceph_volume.api.lvm][WARNING] device is not part of ceph: Non e
[2022-05-24 13:36:17,570][ceph_volume.process][INFO ] Running command: /usr/bin/ceph-a uthtool --gen-print-key
[2022-05-24 13:36:17,592][ceph_volume.process][INFO ] stdout AQBB0Yxit6w1IxAAmB50woaLj CcUE32615Vd2A==
[2022-05-24 13:36:17,593][ceph_volume.process][INFO ] Running command: /usr/bin/ceph - -cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.ke yring -i - osd new 55f50f2f-d558-4fc4-be4c-4378bf094232
[2022-05-24 13:36:18,150][ceph_volume.process][INFO ] stdout 0
[2022-05-24 13:36:18,151][ceph_volume.process][INFO ] stderr 2022-05-24T13:36:17.719+0 100 7f8d7a3f2700 -1 auth: unable to find a keyring on /etc/ceph/ceph.client.bootstrap-o sd.keyring,/etc/ceph/ceph.keyring,/etc/ceph/keyring,/etc/ceph/keyring.bin,: (2) No such file or directory
[2022-05-24 13:36:18,151][ceph_volume.process][INFO ] stderr 2022-05-24T13:36:17.719+0 100 7f8d7a3f2700 -1 AuthRegistry(0x7f8d740595f0) no keyring found at /etc/ceph/ceph.cli ent.bootstrap-osd.keyring,/etc/ceph/ceph.keyring,/etc/ceph/keyring,/etc/ceph/keyring.bi n,, disabling cephx
[2022-05-24 13:36:18,151][ceph_volume.process][INFO ] Running command: /usr/bin/lsblk --nodeps -P -o NAME,KNAME,MAJ:MIN,FSTYPE,MOUNTPOINT,LABEL,UUID,RO,RM,MODEL,SIZE,STATE,O WNER,GROUP,MODE,ALIGNMENT,PHY-SEC,LOG-SEC,ROTA,SCHED,TYPE,DISC-ALN,DISC-GRAN,DISC-MAX,D ISC-ZERO,PKNAME,PARTLABEL /dev/sda1
[2022-05-24 13:36:18,156][ceph_volume.process][INFO ] stdout NAME="sda1" KNAME="sda1" MAJ:MIN="8:1" FSTYPE="" MOUNTPOINT="" LABEL="" UUID="" RO="0" RM="0" MODEL="" SIZE="20G " STATE="" OWNER="root" GROUP="disk" MODE="brw-rw----" ALIGNMENT="0" PHY-SEC="4096" LOG -SEC="512" ROTA="1" SCHED="cfq" TYPE="part" DISC-ALN="0" DISC-GRAN="0B" DISC-MAX="0B" D ISC-ZERO="0" PKNAME="sda" PARTLABEL="ceph-journal"
[2022-05-24 13:36:18,156][ceph_volume.process][INFO ] Running command: /usr/sbin/blkid -s PARTUUID -o value /dev/sda1
[2022-05-24 13:36:18,184][ceph_volume.process][INFO ] stdout 791fa4c5-5eb5-4eef-8ec4-8 14973059c10
[2022-05-24 13:36:18,184][ceph_volume.process][INFO ] Running command: /usr/bin/lsblk --nodeps -P -o NAME,KNAME,MAJ:MIN,FSTYPE,MOUNTPOINT,LABEL,UUID,RO,RM,MODEL,SIZE,STATE,O WNER,GROUP,MODE,ALIGNMENT,PHY-SEC,LOG-SEC,ROTA,SCHED,TYPE,DISC-ALN,DISC-GRAN,DISC-MAX,D ISC-ZERO,PKNAME,PARTLABEL /dev/sda2
[2022-05-24 13:36:18,188][ceph_volume.process][INFO ] stdout NAME="sda2" KNAME="sda2" MAJ:MIN="8:2" FSTYPE="" MOUNTPOINT="" LABEL="" UUID="" RO="0" RM="0" MODEL="" SIZE="18. 2T" STATE="" OWNER="root" GROUP="disk" MODE="brw-rw----" ALIGNMENT="0" PHY-SEC="4096" L OG-SEC="512" ROTA="1" SCHED="cfq" TYPE="part" DISC-ALN="0" DISC-GRAN="0B" DISC-MAX="0B" DISC-ZERO="0" PKNAME="sda" PARTLABEL="ceph-data"
[2022-05-24 13:36:18,189][ceph_volume.devices.lvm.prepare][DEBUG ] data device size: 0.00 B
[2022-05-24 13:36:18,189][ceph_volume.process][INFO ] Running command: /usr/sbin/pvs - -noheadings --readonly --units=b --nosuffix --separator=";" -o vg_name,pv_count,lv_coun t,vg_attr,vg_extent_count,vg_free_count,vg_extent_size /dev/sda2
[2022-05-24 13:36:18,257][ceph_volume.process][INFO ] stderr Failed to find physical v olume "/dev/sda2".
[2022-05-24 13:36:18,258][ceph_volume.process][INFO ] Running command: /usr/sbin/vgcre ate --force --yes ceph-1cbcc4b0-3ccc-431b-bff6-3c03b1dc9558 /dev/sda2
[2022-05-24 13:36:18,310][ceph_volume.process][INFO ] stdout Physical volume "/dev/sda 2" successfully created.
[2022-05-24 13:36:18,316][ceph_volume.process][INFO ] stdout Volume group "ceph-1cbcc4 b0-3ccc-431b-bff6-3c03b1dc9558" successfully created
[2022-05-24 13:36:18,361][ceph_volume.process][INFO ] Running command: /usr/sbin/vgs - -noheadings --readonly --units=b --nosuffix --separator=";" -S vg_name=ceph-1cbcc4b0-3c cc-431b-bff6-3c03b1dc9558 -o vg_name,pv_count,lv_count,vg_attr,vg_extent_count,vg_free_ count,vg_extent_size
[2022-05-24 13:36:18,429][ceph_volume.process][INFO ] stdout ceph-1cbcc4b0-3ccc-431b-b ff6-3c03b1dc9558";"1";"0";"wz--n-";"4762879";"4762879";"4194304
[2022-05-24 13:36:18,429][ceph_volume.api.lvm][DEBUG ] slots was passed: 1 -> 4762879
[2022-05-24 13:36:18,430][ceph_volume.process][INFO ] Running command: /usr/sbin/lvcre ate --yes -l 4762879 -n osd-data-55f50f2f-d558-4fc4-be4c-4378bf094232 ceph-1cbcc4b0-3cc c-431b-bff6-3c03b1dc9558
[2022-05-24 13:36:18,512][ceph_volume.process][INFO ] stdout Logical volume "osd-data- 55f50f2f-d558-4fc4-be4c-4378bf094232" created.
[2022-05-24 13:36:18,549][ceph_volume.process][INFO ] Running command: /usr/sbin/lvs - -noheadings --readonly --separator=";" -a --units=b --nosuffix -S lv_name=osd-data-55f5 0f2f-d558-4fc4-be4c-4378bf094232,vg_name=ceph-1cbcc4b0-3ccc-431b-bff6-3c03b1dc9558 -o l v_tags,lv_path,lv_name,vg_name,lv_uuid,lv_size
[2022-05-24 13:36:18,601][ceph_volume.process][INFO ] stdout ";"/dev/ceph-1cbcc4b0-3cc c-431b-bff6-3c03b1dc9558/osd-data-55f50f2f-d558-4fc4-be4c-4378bf094232";"osd-data-55f50 f2f-d558-4fc4-be4c-4378bf094232";"ceph-1cbcc4b0-3ccc-431b-bff6-3c03b1dc9558";"rKjfY4-4p ed-b1CW-X5xr-QY8C-RklX-Jdew6J";"19976962441216
[2022-05-24 13:36:18,601][ceph_volume.process][INFO ] Running command: /usr/sbin/lvcha nge --addtag ceph.type=data --addtag ceph.data_device=/dev/ceph-1cbcc4b0-3ccc-431b-bff6 -3c03b1dc9558/osd-data-55f50f2f-d558-4fc4-be4c-4378bf094232 /dev/ceph-1cbcc4b0-3ccc-431 b-bff6-3c03b1dc9558/osd-data-55f50f2f-d558-4fc4-be4c-4378bf094232
[2022-05-24 13:36:18,681][ceph_volume.process][INFO ] stdout Logical volume ceph-1cbcc 4b0-3ccc-431b-bff6-3c03b1dc9558/osd-data-55f50f2f-d558-4fc4-be4c-4378bf094232 changed.
[2022-05-24 13:36:18,681][ceph_volume.process][INFO ] Running command: /usr/sbin/lvcha nge --deltag ceph.type=data --deltag ceph.data_device=/dev/ceph-1cbcc4b0-3ccc-431b-bff6 -3c03b1dc9558/osd-data-55f50f2f-d558-4fc4-be4c-4378bf094232 /dev/ceph-1cbcc4b0-3ccc-431 b-bff6-3c03b1dc9558/osd-data-55f50f2f-d558-4fc4-be4c-4378bf094232
[2022-05-24 13:36:18,749][ceph_volume.process][INFO ] stdout Logical volume ceph-1cbcc 4b0-3ccc-431b-bff6-3c03b1dc9558/osd-data-55f50f2f-d558-4fc4-be4c-4378bf094232 changed.
[2022-05-24 13:36:18,749][ceph_volume.process][INFO ] Running command: /usr/sbin/lvcha nge --addtag ceph.osd_fsid=55f50f2f-d558-4fc4-be4c-4378bf094232 --addtag ceph.osd_id=0 --addtag ceph.cluster_fsid=e51bc678-0e20-462a-b537-493485d153ee --addtag ceph.cluster_n ame=ceph --addtag ceph.crush_device_class=None --addtag ceph.osdspec_affinity= --addtag ceph.cephx_lockbox_secret= --addtag ceph.encrypted=0 --addtag ceph.type=data --addtag ceph.vdo=0 --addtag ceph.journal_uuid=791fa4c5-5eb5-4eef-8ec4-814973059c10 --addtag cep h.journal_device=/dev/sda1 --addtag ceph.data_device=/dev/ceph-1cbcc4b0-3ccc-431b-bff6- 3c03b1dc9558/osd-data-55f50f2f-d558-4fc4-be4c-4378bf094232 --addtag ceph.data_uuid=rKjf Y4-4ped-b1CW-X5xr-QY8C-RklX-Jdew6J /dev/ceph-1cbcc4b0-3ccc-431b-bff6-3c03b1dc9558/osd-d ata-55f50f2f-d558-4fc4-be4c-4378bf094232
[2022-05-24 13:36:18,821][ceph_volume.process][INFO ] stdout Logical volume ceph-1cbcc 4b0-3ccc-431b-bff6-3c03b1dc9558/osd-data-55f50f2f-d558-4fc4-be4c-4378bf094232 changed.
[2022-05-24 13:36:18,822][ceph_volume.process][INFO ] Running command: /usr/bin/ceph-a uthtool --gen-print-key
[2022-05-24 13:36:18,843][ceph_volume.process][INFO ] stdout AQBC0YxixPEpMhAAkNWd0CNrz rih6+2bFB3h3g==
[2022-05-24 13:36:18,844][ceph_volume.process][INFO ] Running command: /usr/sbin/mkfs -t xfs -f -i size=2048 /dev/ceph-1cbcc4b0-3ccc-431b-bff6-3c03b1dc9558/osd-data-55f50f2f -d558-4fc4-be4c-4378bf094232
[2022-05-24 13:36:20,156][ceph_volume.process][INFO ] stdout meta-data=/dev/ceph-1cbcc 4b0-3ccc-431b-bff6-3c03b1dc9558/osd-data-55f50f2f-d558-4fc4-be4c-4378bf094232 isize=204 8 agcount=19, agsize=268435455 blks
= sectsz=4096 attr=2, projid32bit=1
= crc=1 finobt=1, sparse=1, rmapbt=0
= reflink=1
data = bsize=4096 blocks=4877188096, imaxpct=5
= sunit=0 swidth=0 blks
naming =version 2 bsize=4096 ascii-ci=0, ftype=1
log =internal log bsize=4096 blocks=521728, version=2
= sectsz=4096 sunit=1 blks, lazy-count=1
realtime =none extsz=4096 blocks=0, rtextents=0
[2022-05-24 13:36:20,157][ceph_volume.process][INFO ] Running command: /usr/bin/mount -t xfs -o rw,noatime,inode64 /dev/ceph-1cbcc4b0-3ccc-431b-bff6-3c03b1dc9558/osd-data-55f50f2f-d558-4fc4-be4c-4378bf094232 /var/lib/ceph/osd/ceph-0
[2022-05-24 13:36:20,454][ceph_volume.process][INFO ] stderr mount: /var/lib/ceph/osd/ceph-0: mount(2) system call failed: Operation not supported.
[2022-05-24 13:36:20,455][ceph_volume.devices.lvm.prepare][ERROR ] lvm prepare was unable to complete
Traceback (most recent call last):
File "/usr/lib/python3/dist-packages/ceph_volume/devices/lvm/prepare.py", line 252, i n safe_prepare
self.prepare()
File "/usr/lib/python3/dist-packages/ceph_volume/decorators.py", line 16, in is_root
return func(*a, **kw)
File "/usr/lib/python3/dist-packages/ceph_volume/devices/lvm/prepare.py", line 346, i n prepare
prepare_filestore(
File "/usr/lib/python3/dist-packages/ceph_volume/devices/lvm/prepare.py", line 65, in prepare_filestore
prepare_utils.mount_osd(device, osd_id, is_vdo=is_vdo)
File "/usr/lib/python3/dist-packages/ceph_volume/util/prepare.py", line 320, in mount _osd
process.run(command)
File "/usr/lib/python3/dist-packages/ceph_volume/process.py", line 153, in run
raise RuntimeError(msg)
RuntimeError: command returned non-zero exit status: 32
[2022-05-24 13:36:20,456][ceph_volume.devices.lvm.prepare][INFO ] will rollback OSD ID creation
[2022-05-24 13:36:20,457][ceph_volume.process][INFO ] Running command: /usr/bin/ceph - -cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.ke yring osd purge-new osd.0 --yes-i-really-mean-it
[2022-05-24 13:36:20,596][ceph_volume.process][INFO ] stderr 2022-05-24T13:36:20.583+0 100 7f05dbf7d700 -1 auth: unable to find a keyring on /etc/ceph/ceph.client.bootstrap-o sd.keyring,/etc/ceph/ceph.keyring,/etc/ceph/keyring,/etc/ceph/keyring.bin,: (2) No such file or directory
2022-05-24T13:36:20.583+0100 7f05dbf7d700 -1 AuthRegistry(0x7f05d40595f0) no keyring fo und at /etc/ceph/ceph.client.bootstrap-osd.keyring,/etc/ceph/ceph.keyring,/etc/ceph/key ring,/etc/ceph/keyring.bin,, disabling cephx
[2022-05-24 13:36:21,002][ceph_volume.process][INFO ] stderr purged osd.0
[2022-05-24 13:36:21,015][ceph_volume][ERROR ] exception caught by decorator
Traceback (most recent call last):
File "/usr/lib/python3/dist-packages/ceph_volume/decorators.py", line 59, in newfunc
return f(*a, **kw)
File "/usr/lib/python3/dist-packages/ceph_volume/main.py", line 152, in main
terminal.dispatch(self.mapper, subcommand_args)
File "/usr/lib/python3/dist-packages/ceph_volume/terminal.py", line 194, in dispatch
instance.main()
File "/usr/lib/python3/dist-packages/ceph_volume/devices/lvm/main.py", line 46, in ma in
terminal.dispatch(self.mapper, self.argv)
File "/usr/lib/python3/dist-packages/ceph_volume/terminal.py", line 194, in dispatch
instance.main()
File "/usr/lib/python3/dist-packages/ceph_volume/devices/lvm/prepare.py", line 441, i n main
self.safe_prepare()
File "/usr/lib/python3/dist-packages/ceph_volume/devices/lvm/prepare.py", line 252, i n safe_prepare
self.prepare()
File "/usr/lib/python3/dist-packages/ceph_volume/decorators.py", line 16, in is_root
return func(*a, **kw)
File "/usr/lib/python3/dist-packages/ceph_volume/devices/lvm/prepare.py", line 346, i n prepare
prepare_filestore(
File "/usr/lib/python3/dist-packages/ceph_volume/devices/lvm/prepare.py", line 65, in prepare_filestore
prepare_utils.mount_osd(device, osd_id, is_vdo=is_vdo)
File "/usr/lib/python3/dist-packages/ceph_volume/util/prepare.py", line 320, in mount _osd
process.run(command)
File "/usr/lib/python3/dist-packages/ceph_volume/process.py", line 153, in run
raise RuntimeError(msg)
RuntimeError: command returned non-zero exit status: 32
[2022-05-24 15:12:28,502][ceph_volume.main][INFO ] Running command: ceph-volume --log-path /opt/petasan/log lvm activate --filestore 0 55f50f2f-d558-4fc4-be4c-4378bf094232
[2022-05-24 15:12:28,505][ceph_volume.process][INFO ] Running command: /usr/sbin/lvs - -noheadings --readonly --separator=";" -a --units=b --nosuffix -S tags={ceph.osd_id=0,c eph.osd_fsid=55f50f2f-d558-4fc4-be4c-4378bf094232} -o lv_tags,lv_path,lv_name,vg_name,l v_uuid,lv_size
[2022-05-24 15:12:28,569][ceph_volume.process][INFO ] stdout ceph.cephx_lockbox_secret =,ceph.cluster_fsid=e51bc678-0e20-462a-b537-493485d153ee,ceph.cluster_name=ceph,ceph.cr ush_device_class=None,ceph.data_device=/dev/ceph-1cbcc4b0-3ccc-431b-bff6-3c03b1dc9558/o sd-data-55f50f2f-d558-4fc4-be4c-4378bf094232,ceph.data_uuid=rKjfY4-4ped-b1CW-X5xr-QY8C- RklX-Jdew6J,ceph.encrypted=0,ceph.journal_device=/dev/sda1,ceph.journal_uuid=791fa4c5-5 eb5-4eef-8ec4-814973059c10,ceph.osd_fsid=55f50f2f-d558-4fc4-be4c-4378bf094232,ceph.osd_ id=0,ceph.osdspec_affinity=,ceph.type=data,ceph.vdo=0";"/dev/ceph-1cbcc4b0-3ccc-431b-bf f6-3c03b1dc9558/osd-data-55f50f2f-d558-4fc4-be4c-4378bf094232";"osd-data-55f50f2f-d558- 4fc4-be4c-4378bf094232";"ceph-1cbcc4b0-3ccc-431b-bff6-3c03b1dc9558";"rKjfY4-4ped-b1CW-X 5xr-QY8C-RklX-Jdew6J";"19976962441216
[2022-05-24 15:12:28,571][ceph_volume.process][INFO ] Running command: /usr/sbin/blkid -t PARTUUID="791fa4c5-5eb5-4eef-8ec4-814973059c10" -o device
[2022-05-24 15:12:28,645][ceph_volume.process][INFO ] stdout /dev/sda1
[2022-05-24 15:12:28,651][ceph_volume.util.system][INFO ] /dev/ceph-1cbcc4b0-3ccc-431b-bff6-3c03b1dc9558/osd-data-55f50f2f-d558-4fc4-be4c-4378bf094232 was not found as mounted
[2022-05-24 15:12:28,652][ceph_volume.process][INFO ] Running command: /usr/bin/mount -t xfs -o rw,noatime,inode64 /dev/ceph-1cbcc4b0-3ccc-431b-bff6-3c03b1dc9558/osd-data-55f50f2f-d558-4fc4-be4c-4378bf094232 /var/lib/ceph/osd/ceph-0
[2022-05-24 15:12:28,698][ceph_volume.process][INFO ] stderr mount: /var/lib/ceph/osd/ceph-0: mount(2) system call failed: Operation not supported.
[2022-05-24 15:12:28,699][ceph_volume][ERROR ] exception caught by decorator
Traceback (most recent call last):
File "/usr/lib/python3/dist-packages/ceph_volume/decorators.py", line 59, in newfunc
return f(*a, **kw)
File "/usr/lib/python3/dist-packages/ceph_volume/main.py", line 152, in main
terminal.dispatch(self.mapper, subcommand_args)
File "/usr/lib/python3/dist-packages/ceph_volume/terminal.py", line 194, in dispatch
instance.main()
File "/usr/lib/python3/dist-packages/ceph_volume/devices/lvm/main.py", line 46, in ma in
terminal.dispatch(self.mapper, self.argv)
File "/usr/lib/python3/dist-packages/ceph_volume/terminal.py", line 194, in dispatch
instance.main()
File "/usr/lib/python3/dist-packages/ceph_volume/devices/lvm/activate.py", line 370, in main
self.activate(args)
File "/usr/lib/python3/dist-packages/ceph_volume/decorators.py", line 16, in is_root
return func(*a, **kw)
File "/usr/lib/python3/dist-packages/ceph_volume/devices/lvm/activate.py", line 296, in activate
activate_filestore(lvs, args.no_systemd)
File "/usr/lib/python3/dist-packages/ceph_volume/devices/lvm/activate.py", line 74, i n activate_filestore
prepare_utils.mount_osd(source, osd_id, is_vdo=is_vdo)
File "/usr/lib/python3/dist-packages/ceph_volume/util/prepare.py", line 320, in mount _osd
process.run(command)
File "/usr/lib/python3/dist-packages/ceph_volume/process.py", line 153, in run
raise RuntimeError(msg)
RuntimeError: command returned non-zero exit status: 32
[2022-05-24 17:26:14,559][ceph_volume.main][INFO ] Running command: ceph-volume --log-path /opt/petasan/log lvm prepare --filestore --data /dev/sda2 --journal /dev/sda1
[2022-05-24 17:26:14,567][ceph_volume.process][INFO ] Running command: /usr/bin/lsblk -plno KNAME,NAME,TYPE
[2022-05-24 17:26:14,572][ceph_volume.process][INFO ] stdout /dev/sda /dev/sda disk
[2022-05-24 17:26:14,572][ceph_volume.process][INFO ] stdout /dev/sda1 /dev/sda1 part
[2022-05-24 17:26:14,573][ceph_volume.process][INFO ] stdout /dev/sda2 /dev/sda2 part
[2022-05-24 17:26:14,573][ceph_volume.process][INFO ] stdout /dev/sdb /dev/sdb disk
[2022-05-24 17:26:14,573][ceph_volume.process][INFO ] stdout /dev/sdb1 /dev/sdb1 part
[2022-05-24 17:26:14,573][ceph_volume.process][INFO ] stdout /dev/sdb2 /dev/sdb2 part
[2022-05-24 17:26:14,573][ceph_volume.process][INFO ] stdout /dev/sdb3 /dev/sdb3 part
[2022-05-24 17:26:14,573][ceph_volume.process][INFO ] stdout /dev/sdb4 /dev/sdb4 part
[2022-05-24 17:26:14,573][ceph_volume.process][INFO ] stdout /dev/sdb5 /dev/sdb5 part
[2022-05-24 17:26:14,573][ceph_volume.process][INFO ] stdout /dev/sdc /dev/sdc disk
[2022-05-24 17:26:14,573][ceph_volume.process][INFO ] stdout /dev/dm-0 /dev/mapper/cep h--1cbcc4b0--3ccc--431b--bff6--3c03b1dc9558-osd--data--55f50f2f--d558--4fc4--be4c--4378 bf094232 lvm
[2022-05-24 17:26:14,581][ceph_volume.process][INFO ] Running command: /usr/sbin/lvs - -noheadings --readonly --separator=";" -a --units=b --nosuffix -S lv_path=/dev/sda2 -o lv_tags,lv_path,lv_name,vg_name,lv_uuid,lv_size
[2022-05-24 17:26:14,651][ceph_volume.process][INFO ] Running command: /usr/bin/lsblk --nodeps -P -o NAME,KNAME,MAJ:MIN,FSTYPE,MOUNTPOINT,LABEL,UUID,RO,RM,MODEL,SIZE,STATE,O WNER,GROUP,MODE,ALIGNMENT,PHY-SEC,LOG-SEC,ROTA,SCHED,TYPE,DISC-ALN,DISC-GRAN,DISC-MAX,D ISC-ZERO,PKNAME,PARTLABEL /dev/sda2
[2022-05-24 17:26:14,656][ceph_volume.process][INFO ] stdout NAME="sda2" KNAME="sda2" MAJ:MIN="8:2" FSTYPE="LVM2_member" MOUNTPOINT="" LABEL="" UUID="3jkQAm-10vw-K46l-JeFt-H GMa-QNtH-eGjTJU" RO="0" RM="0" MODEL="" SIZE="18.2T" STATE="" OWNER="root" GROUP="disk" MODE="brw-rw----" ALIGNMENT="0" PHY-SEC="4096" LOG-SEC="512" ROTA="1" SCHED="cfq" TYPE ="part" DISC-ALN="0" DISC-GRAN="0B" DISC-MAX="0B" DISC-ZERO="0" PKNAME="sda" PARTLABEL= "ceph-data"
[2022-05-24 17:26:14,656][ceph_volume.process][INFO ] Running command: /usr/sbin/blkid -p /dev/sda2
[2022-05-24 17:26:14,659][ceph_volume.process][INFO ] stdout /dev/sda2: UUID="3jkQAm-1 0vw-K46l-JeFt-HGMa-QNtH-eGjTJU" VERSION="LVM2 001" TYPE="LVM2_member" USAGE="raid" PART _ENTRY_SCHEME="gpt" PART_ENTRY_NAME="ceph-data" PART_ENTRY_UUID="20a881d8-44c1-4d68-8ab 0-e3e8a469baad" PART_ENTRY_TYPE="0fc63daf-8483-4772-8e79-3d69d8477de4" PART_ENTRY_NUMBE R="2" PART_ENTRY_OFFSET="41945088" PART_ENTRY_SIZE="39017510879" PART_ENTRY_DISK="8:0"
[2022-05-24 17:26:14,659][ceph_volume.process][INFO ] Running command: /usr/sbin/pvs - -noheadings --readonly --units=b --nosuffix --separator=";" -o vg_name,pv_count,lv_coun t,vg_attr,vg_extent_count,vg_free_count,vg_extent_size /dev/sda2
[2022-05-24 17:26:14,726][ceph_volume.process][INFO ] stdout ceph-1cbcc4b0-3ccc-431b-b ff6-3c03b1dc9558";"1";"1";"wz--n-";"4762879";"0";"4194304
[2022-05-24 17:26:14,727][ceph_volume.process][INFO ] Running command: /usr/sbin/pvs - -noheadings --readonly --separator=";" -a --units=b --nosuffix -o lv_tags,lv_path,lv_na me,vg_name,lv_uuid,lv_size /dev/sda2
[2022-05-24 17:26:14,782][ceph_volume.process][INFO ] stdout ceph.cephx_lockbox_secret =,ceph.cluster_fsid=e51bc678-0e20-462a-b537-493485d153ee,ceph.cluster_name=ceph,ceph.cr ush_device_class=None,ceph.data_device=/dev/ceph-1cbcc4b0-3ccc-431b-bff6-3c03b1dc9558/o sd-data-55f50f2f-d558-4fc4-be4c-4378bf094232,ceph.data_uuid=rKjfY4-4ped-b1CW-X5xr-QY8C- RklX-Jdew6J,ceph.encrypted=0,ceph.journal_device=/dev/sda1,ceph.journal_uuid=791fa4c5-5 eb5-4eef-8ec4-814973059c10,ceph.osd_fsid=55f50f2f-d558-4fc4-be4c-4378bf094232,ceph.osd_ id=0,ceph.osdspec_affinity=,ceph.type=data,ceph.vdo=0";"/dev/ceph-1cbcc4b0-3ccc-431b-bf f6-3c03b1dc9558/osd-data-55f50f2f-d558-4fc4-be4c-4378bf094232";"osd-data-55f50f2f-d558- 4fc4-be4c-4378bf094232";"ceph-1cbcc4b0-3ccc-431b-bff6-3c03b1dc9558";"rKjfY4-4ped-b1CW-X 5xr-QY8C-RklX-Jdew6J";"19976962441216
[2022-05-24 17:26:14,783][ceph_volume.process][INFO ] Running command: /usr/bin/ceph-b luestore-tool show-label --dev /dev/sda2
[2022-05-24 17:26:14,876][ceph_volume.process][INFO ] stderr unable to read label for /dev/sda2: (2) No such file or directory
[2022-05-24 17:26:14,876][ceph_volume.process][INFO ] Running command: /usr/bin/udevad m info --query=property /dev/sda2
[2022-05-24 17:26:14,880][ceph_volume.process][INFO ] stdout DEVPATH=/devices/pci0000: 00/0000:00:03.2/0000:06:00.0/host0/target0:2:0/0:2:0:0/block/sda/sda2
[2022-05-24 17:26:14,881][ceph_volume.process][INFO ] stdout DEVNAME=/dev/sda2
[2022-05-24 17:26:14,881][ceph_volume.process][INFO ] stdout DEVTYPE=partition
[2022-05-24 17:26:14,881][ceph_volume.process][INFO ] stdout PARTN=2
[2022-05-24 17:26:14,881][ceph_volume.process][INFO ] stdout PARTNAME=ceph-data
[2022-05-24 17:26:14,881][ceph_volume.process][INFO ] stdout MAJOR=8
[2022-05-24 17:26:14,881][ceph_volume.process][INFO ] stdout MINOR=2
[2022-05-24 17:26:14,881][ceph_volume.process][INFO ] stdout SUBSYSTEM=block
[2022-05-24 17:26:14,881][ceph_volume.process][INFO ] stdout USEC_INITIALIZED=74541882 59
[2022-05-24 17:26:14,881][ceph_volume.process][INFO ] stdout SCSI_TPGS=0
[2022-05-24 17:26:14,881][ceph_volume.process][INFO ] stdout SCSI_TYPE=disk
[2022-05-24 17:26:14,881][ceph_volume.process][INFO ] stdout SCSI_VENDOR=LSI
[2022-05-24 17:26:14,881][ceph_volume.process][INFO ] stdout SCSI_VENDOR_ENC=LSI\x20\x 20\x20\x20\x20
[2022-05-24 17:26:14,881][ceph_volume.process][INFO ] stdout SCSI_MODEL=MR9266-4i
[2022-05-24 17:26:14,882][ceph_volume.process][INFO ] stdout SCSI_MODEL_ENC=MR9266-4i\ x20\x20\x20\x20\x20\x20\x20
[2022-05-24 17:26:14,882][ceph_volume.process][INFO ] stdout SCSI_REVISION=3.27
[2022-05-24 17:26:14,882][ceph_volume.process][INFO ] stdout SCSI_IDENT_SERIAL=00e9876 7098a591e2a40d8ec06b00506
[2022-05-24 17:26:14,882][ceph_volume.process][INFO ] stdout SCSI_IDENT_LUN_NAA_REGEXT =600605b006ecd8402a1e598a096787e9
[2022-05-24 17:26:14,882][ceph_volume.process][INFO ] stdout ID_SCSI=1
[2022-05-24 17:26:14,882][ceph_volume.process][INFO ] stdout ID_VENDOR=LSI
[2022-05-24 17:26:14,882][ceph_volume.process][INFO ] stdout ID_VENDOR_ENC=LSI\x20\x20 \x20\x20\x20
[2022-05-24 17:26:14,882][ceph_volume.process][INFO ] stdout ID_MODEL=MR9266-4i
[2022-05-24 17:26:14,882][ceph_volume.process][INFO ] stdout ID_MODEL_ENC=MR9266-4i\x2 0\x20\x20\x20\x20\x20\x20
[2022-05-24 17:26:14,882][ceph_volume.process][INFO ] stdout ID_REVISION=3.27
[2022-05-24 17:26:14,882][ceph_volume.process][INFO ] stdout ID_TYPE=disk
[2022-05-24 17:26:14,882][ceph_volume.process][INFO ] stdout ID_WWN=0x600605b006ecd840 2a1e598a096787e9
[2022-05-24 17:26:14,882][ceph_volume.process][INFO ] stdout ID_WWN_WITH_EXTENSION=0x6 00605b006ecd8402a1e598a096787e9
[2022-05-24 17:26:14,882][ceph_volume.process][INFO ] stdout ID_BUS=scsi
[2022-05-24 17:26:14,883][ceph_volume.process][INFO ] stdout ID_SERIAL=3600605b006ecd8 402a1e598a096787e9
[2022-05-24 17:26:14,883][ceph_volume.process][INFO ] stdout ID_SERIAL_SHORT=600605b00 6ecd8402a1e598a096787e9
[2022-05-24 17:26:14,883][ceph_volume.process][INFO ] stdout DM_MULTIPATH_DEVICE_PATH= 0
[2022-05-24 17:26:14,883][ceph_volume.process][INFO ] stdout ID_SCSI_INQUIRY=1
[2022-05-24 17:26:14,883][ceph_volume.process][INFO ] stdout ID_PATH=pci-0000:06:00.0- scsi-0:2:0:0
[2022-05-24 17:26:14,883][ceph_volume.process][INFO ] stdout ID_PATH_TAG=pci-0000_06_0 0_0-scsi-0_2_0_0
[2022-05-24 17:26:14,883][ceph_volume.process][INFO ] stdout ID_PART_TABLE_UUID=c0e615 88-c7fb-476d-880d-325faa63557b
[2022-05-24 17:26:14,883][ceph_volume.process][INFO ] stdout ID_PART_TABLE_TYPE=gpt
[2022-05-24 17:26:14,883][ceph_volume.process][INFO ] stdout ID_FS_UUID=3jkQAm-10vw-K4 6l-JeFt-HGMa-QNtH-eGjTJU
[2022-05-24 17:26:14,883][ceph_volume.process][INFO ] stdout ID_FS_UUID_ENC=3jkQAm-10v w-K46l-JeFt-HGMa-QNtH-eGjTJU
[2022-05-24 17:26:14,883][ceph_volume.process][INFO ] stdout ID_FS_VERSION=LVM2 001
[2022-05-24 17:26:14,883][ceph_volume.process][INFO ] stdout ID_FS_TYPE=LVM2_member
[2022-05-24 17:26:14,883][ceph_volume.process][INFO ] stdout ID_FS_USAGE=raid
[2022-05-24 17:26:14,884][ceph_volume.process][INFO ] stdout ID_PART_ENTRY_SCHEME=gpt
[2022-05-24 17:26:14,884][ceph_volume.process][INFO ] stdout ID_PART_ENTRY_NAME=ceph-d ata
[2022-05-24 17:26:14,884][ceph_volume.process][INFO ] stdout ID_PART_ENTRY_UUID=20a881 d8-44c1-4d68-8ab0-e3e8a469baad
[2022-05-24 17:26:14,884][ceph_volume.process][INFO ] stdout ID_PART_ENTRY_TYPE=0fc63d af-8483-4772-8e79-3d69d8477de4
[2022-05-24 17:26:14,884][ceph_volume.process][INFO ] stdout ID_PART_ENTRY_NUMBER=2
[2022-05-24 17:26:14,884][ceph_volume.process][INFO ] stdout ID_PART_ENTRY_OFFSET=4194 5088
[2022-05-24 17:26:14,884][ceph_volume.process][INFO ] stdout ID_PART_ENTRY_SIZE=390175 10879
[2022-05-24 17:26:14,884][ceph_volume.process][INFO ] stdout ID_PART_ENTRY_DISK=8:0
[2022-05-24 17:26:14,884][ceph_volume.process][INFO ] stdout SYSTEMD_READY=1
[2022-05-24 17:26:14,884][ceph_volume.process][INFO ] stdout SYSTEMD_ALIAS=/dev/block/ 8:2
[2022-05-24 17:26:14,884][ceph_volume.process][INFO ] stdout SYSTEMD_WANTS=lvm2-pvscan @8:2.service
[2022-05-24 17:26:14,884][ceph_volume.process][INFO ] stdout DEVLINKS=/dev/disk/by-id/ scsi-3600605b006ecd8402a1e598a096787e9-part2 /dev/disk/by-partlabel/ceph-data /dev/disk /by-partuuid/20a881d8-44c1-4d68-8ab0-e3e8a469baad /dev/disk/by-id/lvm-pv-uuid-3jkQAm-10 vw-K46l-JeFt-HGMa-QNtH-eGjTJU /dev/disk/by-id/scsi-SLSI_MR9266-4i_00e98767098a591e2a40d 8ec06b00506-part2 /dev/disk/by-id/wwn-0x600605b006ecd8402a1e598a096787e9-part2 /dev/dis k/by-path/pci-0000:06:00.0-scsi-0:2:0:0-part2
[2022-05-24 17:26:14,884][ceph_volume.process][INFO ] stdout TAGS=:systemd:
[2022-05-24 17:26:14,885][ceph_volume.api.lvm][WARNING] device is not part of ceph: Non e
[2022-05-24 17:26:14,886][ceph_volume.process][INFO ] Running command: /usr/bin/ceph-a uthtool --gen-print-key
[2022-05-24 17:26:14,908][ceph_volume.process][INFO ] stdout AQAmB41i9JUNNhAAMGbvb2oD/ 87cxmVAwTUtOg==
[2022-05-24 17:26:14,909][ceph_volume.process][INFO ] Running command: /usr/bin/ceph - -cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.ke yring -i - osd new 85f574b4-282f-428b-9324-3a5feb28a4e3
[2022-05-24 17:26:15,464][ceph_volume.process][INFO ] stdout 0
[2022-05-24 17:26:15,464][ceph_volume.process][INFO ] stderr 2022-05-24T17:26:15.033+0 100 7f212424b700 -1 auth: unable to find a keyring on /etc/ceph/ceph.client.bootstrap-o sd.keyring,/etc/ceph/ceph.keyring,/etc/ceph/keyring,/etc/ceph/keyring.bin,: (2) No such file or directory
[2022-05-24 17:26:15,465][ceph_volume.process][INFO ] stderr 2022-05-24T17:26:15.033+0 100 7f212424b700 -1 AuthRegistry(0x7f211c0595f0) no keyring found at /etc/ceph/ceph.cli ent.bootstrap-osd.keyring,/etc/ceph/ceph.keyring,/etc/ceph/keyring,/etc/ceph/keyring.bi n,, disabling cephx
[2022-05-24 17:26:15,465][ceph_volume.process][INFO ] Running command: /usr/bin/lsblk --nodeps -P -o NAME,KNAME,MAJ:MIN,FSTYPE,MOUNTPOINT,LABEL,UUID,RO,RM,MODEL,SIZE,STATE,O WNER,GROUP,MODE,ALIGNMENT,PHY-SEC,LOG-SEC,ROTA,SCHED,TYPE,DISC-ALN,DISC-GRAN,DISC-MAX,D ISC-ZERO,PKNAME,PARTLABEL /dev/sda1
[2022-05-24 17:26:15,469][ceph_volume.process][INFO ] stdout NAME="sda1" KNAME="sda1" MAJ:MIN="8:1" FSTYPE="" MOUNTPOINT="" LABEL="" UUID="" RO="0" RM="0" MODEL="" SIZE="20G " STATE="" OWNER="root" GROUP="disk" MODE="brw-rw----" ALIGNMENT="0" PHY-SEC="4096" LOG -SEC="512" ROTA="1" SCHED="cfq" TYPE="part" DISC-ALN="0" DISC-GRAN="0B" DISC-MAX="0B" D ISC-ZERO="0" PKNAME="sda" PARTLABEL="ceph-journal"
[2022-05-24 17:26:15,470][ceph_volume.process][INFO ] Running command: /usr/sbin/blkid -s PARTUUID -o value /dev/sda1
[2022-05-24 17:26:15,503][ceph_volume.process][INFO ] stdout bb5848bf-2734-4648-bc97-9 097a2f67a78
[2022-05-24 17:26:15,504][ceph_volume.process][INFO ] Running command: /usr/bin/lsblk --nodeps -P -o NAME,KNAME,MAJ:MIN,FSTYPE,MOUNTPOINT,LABEL,UUID,RO,RM,MODEL,SIZE,STATE,O WNER,GROUP,MODE,ALIGNMENT,PHY-SEC,LOG-SEC,ROTA,SCHED,TYPE,DISC-ALN,DISC-GRAN,DISC-MAX,D ISC-ZERO,PKNAME,PARTLABEL /dev/sda2
[2022-05-24 17:26:15,508][ceph_volume.process][INFO ] stdout NAME="sda2" KNAME="sda2" MAJ:MIN="8:2" FSTYPE="LVM2_member" MOUNTPOINT="" LABEL="" UUID="3jkQAm-10vw-K46l-JeFt-H GMa-QNtH-eGjTJU" RO="0" RM="0" MODEL="" SIZE="18.2T" STATE="" OWNER="root" GROUP="disk" MODE="brw-rw----" ALIGNMENT="0" PHY-SEC="4096" LOG-SEC="512" ROTA="1" SCHED="cfq" TYPE ="part" DISC-ALN="0" DISC-GRAN="0B" DISC-MAX="0B" DISC-ZERO="0" PKNAME="sda" PARTLABEL= "ceph-data"
[2022-05-24 17:26:15,508][ceph_volume.devices.lvm.prepare][DEBUG ] data device size: 0.00 B
[2022-05-24 17:26:15,509][ceph_volume.process][INFO ] Running command: /usr/sbin/pvs - -noheadings --readonly --units=b --nosuffix --separator=";" -o vg_name,pv_count,lv_coun t,vg_attr,vg_extent_count,vg_free_count,vg_extent_size /dev/sda2
[2022-05-24 17:26:15,570][ceph_volume.process][INFO ] stdout ceph-1cbcc4b0-3ccc-431b-b ff6-3c03b1dc9558";"1";"1";"wz--n-";"4762879";"0";"4194304
[2022-05-24 17:26:15,571][ceph_volume.api.lvm][DEBUG ] slots was passed: 1 -> 4762879
[2022-05-24 17:26:15,571][ceph_volume.process][INFO ] Running command: /usr/sbin/lvcre ate --yes -l 4762879 -n osd-data-85f574b4-282f-428b-9324-3a5feb28a4e3 ceph-1cbcc4b0-3cc c-431b-bff6-3c03b1dc9558
[2022-05-24 17:26:15,591][ceph_volume.process][INFO ] stderr Volume group "ceph-1cbcc4b0-3ccc-431b-bff6-3c03b1dc9558" has insufficient free space (0 extents): 4762879 required.
[2022-05-24 17:26:15,634][ceph_volume.devices.lvm.prepare][ERROR ] lvm prepare was unable to complete
Traceback (most recent call last):
File "/usr/lib/python3/dist-packages/ceph_volume/devices/lvm/prepare.py", line 252, i n safe_prepare
self.prepare()
File "/usr/lib/python3/dist-packages/ceph_volume/decorators.py", line 16, in is_root
return func(*a, **kw)
File "/usr/lib/python3/dist-packages/ceph_volume/devices/lvm/prepare.py", line 334, i n prepare
data_lv = self.prepare_data_device('data', osd_fsid)
File "/usr/lib/python3/dist-packages/ceph_volume/devices/lvm/prepare.py", line 218, i n prepare_data_device
return api.create_lv(
File "/usr/lib/python3/dist-packages/ceph_volume/api/lvm.py", line 971, in create_lv
process.run(command)
File "/usr/lib/python3/dist-packages/ceph_volume/process.py", line 153, in run
raise RuntimeError(msg)
RuntimeError: command returned non-zero exit status: 5
[2022-05-24 17:26:15,636][ceph_volume.devices.lvm.prepare][INFO ] will rollback OSD ID creation
[2022-05-24 17:26:15,636][ceph_volume.process][INFO ] Running command: /usr/bin/ceph - -cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.ke yring osd purge-new osd.0 --yes-i-really-mean-it
[2022-05-24 17:26:15,777][ceph_volume.process][INFO ] stderr 2022-05-24T17:26:15.761+0 100 7f5b2d823700 -1 auth: unable to find a keyring on /etc/ceph/ceph.client.bootstrap-o sd.keyring,/etc/ceph/ceph.keyring,/etc/ceph/keyring,/etc/ceph/keyring.bin,: (2) No such file or directory
2022-05-24T17:26:15.761+0100 7f5b2d823700 -1 AuthRegistry(0x7f5b280595f0) no keyring fo und at /etc/ceph/ceph.client.bootstrap-osd.keyring,/etc/ceph/ceph.keyring,/etc/ceph/key ring,/etc/ceph/keyring.bin,, disabling cephx
[2022-05-24 17:26:16,187][ceph_volume.process][INFO ] stderr purged osd.0
[2022-05-24 17:26:16,201][ceph_volume][ERROR ] exception caught by decorator
Traceback (most recent call last):
File "/usr/lib/python3/dist-packages/ceph_volume/decorators.py", line 59, in newfunc
return f(*a, **kw)
File "/usr/lib/python3/dist-packages/ceph_volume/main.py", line 152, in main
terminal.dispatch(self.mapper, subcommand_args)
File "/usr/lib/python3/dist-packages/ceph_volume/terminal.py", line 194, in dispatch
instance.main()
File "/usr/lib/python3/dist-packages/ceph_volume/devices/lvm/main.py", line 46, in ma in
terminal.dispatch(self.mapper, self.argv)
File "/usr/lib/python3/dist-packages/ceph_volume/terminal.py", line 194, in dispatch
instance.main()
File "/usr/lib/python3/dist-packages/ceph_volume/devices/lvm/prepare.py", line 441, i n main
self.safe_prepare()
File "/usr/lib/python3/dist-packages/ceph_volume/devices/lvm/prepare.py", line 252, i n safe_prepare
self.prepare()
File "/usr/lib/python3/dist-packages/ceph_volume/decorators.py", line 16, in is_root
return func(*a, **kw)
File "/usr/lib/python3/dist-packages/ceph_volume/devices/lvm/prepare.py", line 334, i n prepare
data_lv = self.prepare_data_device('data', osd_fsid)
File "/usr/lib/python3/dist-packages/ceph_volume/devices/lvm/prepare.py", line 218, i n prepare_data_device
return api.create_lv(
File "/usr/lib/python3/dist-packages/ceph_volume/api/lvm.py", line 971, in create_lv
process.run(command)
File "/usr/lib/python3/dist-packages/ceph_volume/process.py", line 153, in run
raise RuntimeError(msg)
RuntimeError: command returned non-zero exit status: 5
root@ps-node1:~# tail /opt/petasan/log/ceph-volume.log
return func(*a, **kw)
File "/usr/lib/python3/dist-packages/ceph_volume/devices/lvm/prepare.py", line 334, in prepare
data_lv = self.prepare_data_device('data', osd_fsid)
File "/usr/lib/python3/dist-packages/ceph_volume/devices/lvm/prepare.py", line 218, in prepare_data_device
return api.create_lv(
File "/usr/lib/python3/dist-packages/ceph_volume/api/lvm.py", line 971, in create_lv
process.run(command)
File "/usr/lib/python3/dist-packages/ceph_volume/process.py", line 153, in run
raise RuntimeError(msg)
RuntimeError: command returned non-zero exit status: 5
To be honest, the type of engine was picked by the default configuration during the installation process. Apologies for my ignorance; I should know what the difference between those two is, but I haven't got that far.
Please find my ceph-volume.log below.
[2022-05-23 15:11:13,271][ceph_volume.main][INFO ] Running command: ceph-volume --log- path /opt/petasan/log lvm prepare --filestore --data ps-e51bc678-0e20-462a-b537-493485d 153ee-wc-osd.0/main --journal /dev/sda1
[2022-05-23 15:11:13,275][ceph_volume.process][INFO ] Running command: /usr/bin/lsblk -plno KNAME,NAME,TYPE
[2022-05-23 15:11:13,285][ceph_volume.process][INFO ] stdout /dev/sda /dev/sda disk
[2022-05-23 15:11:13,285][ceph_volume.process][INFO ] stdout /dev/sda1 /dev/sda1 part
[2022-05-23 15:11:13,285][ceph_volume.process][INFO ] stdout /dev/sda2 /dev/sda2 part
[2022-05-23 15:11:13,285][ceph_volume.process][INFO ] stdout /dev/sdb /dev/sdb disk
[2022-05-23 15:11:13,285][ceph_volume.process][INFO ] stdout /dev/sdb1 /dev/sdb1 part
[2022-05-23 15:11:13,285][ceph_volume.process][INFO ] stdout /dev/sdb2 /dev/sdb2 part
[2022-05-23 15:11:13,285][ceph_volume.process][INFO ] stdout /dev/sdb3 /dev/sdb3 part
[2022-05-23 15:11:13,285][ceph_volume.process][INFO ] stdout /dev/sdb4 /dev/sdb4 part
[2022-05-23 15:11:13,285][ceph_volume.process][INFO ] stdout /dev/sdb5 /dev/sdb5 part
[2022-05-23 15:11:13,285][ceph_volume.process][INFO ] stdout /dev/sdc /dev/sdc disk
[2022-05-23 15:11:13,286][ceph_volume.process][INFO ] stdout /dev/sdc1 /dev/sdc1 part
[2022-05-23 15:11:13,286][ceph_volume.process][INFO ] stdout /dev/sdc2 /dev/sdc2 part
[2022-05-23 15:11:13,286][ceph_volume.process][INFO ] stdout /dev/dm-0 /dev/mapper/ps- -e51bc678--0e20--462a--b537--493485d153ee--wc--osd.0-main lvm
[2022-05-23 15:11:13,286][ceph_volume.process][INFO ] stdout /dev/dm-1 /dev/mapper/ps- -e51bc678--0e20--462a--b537--493485d153ee--wc--osd.0-cache_cvol lvm
[2022-05-23 15:11:13,286][ceph_volume.process][INFO ] stdout /dev/dm-2 /dev/mapper/ps- -e51bc678--0e20--462a--b537--493485d153ee--wc--osd.0-main_wcorig lvm
[2022-05-23 15:11:13,294][ceph_volume.process][INFO ] Running command: /usr/sbin/lvs - -noheadings --readonly --separator=";" -a --units=b --nosuffix -S lv_name=main,vg_name= ps-e51bc678-0e20-462a-b537-493485d153ee-wc-osd.0 -o lv_tags,lv_path,lv_name,vg_name,lv_ uuid,lv_size
[2022-05-23 15:11:13,361][ceph_volume.process][INFO ] stdout ";"/dev/ps-e51bc678-0e20- 462a-b537-493485d153ee-wc-osd.0/main";"main";"ps-e51bc678-0e20-462a-b537-493485d153ee-w c-osd.0";"UDViz0-ihtf-t3rP-00lV-WsGt-l0w2-2HqoY0";"21976806588416
[2022-05-23 15:11:13,361][ceph_volume.process][INFO ] Running command: /usr/bin/ceph-b luestore-tool show-label --dev /dev/ps-e51bc678-0e20-462a-b537-493485d153ee-wc-osd.0/ma in
[2022-05-23 15:11:13,455][ceph_volume.process][INFO ] stderr unable to read label for /dev/ps-e51bc678-0e20-462a-b537-493485d153ee-wc-osd.0/main: (2) No such file or directo ry
[2022-05-23 15:11:13,456][ceph_volume.process][INFO ] Running command: /usr/bin/ceph-b luestore-tool show-label --dev /dev/ps-e51bc678-0e20-462a-b537-493485d153ee-wc-osd.0/ma in
[2022-05-23 15:11:13,485][ceph_volume.process][INFO ] stderr unable to read label for /dev/ps-e51bc678-0e20-462a-b537-493485d153ee-wc-osd.0/main: (2) No such file or directo ry
[2022-05-23 15:11:13,485][ceph_volume.process][INFO ] Running command: /usr/bin/udevad m info --query=property /dev/ps-e51bc678-0e20-462a-b537-493485d153ee-wc-osd.0/main
[2022-05-23 15:11:13,489][ceph_volume.process][INFO ] stdout DEVPATH=/devices/virtual/ block/dm-0
[2022-05-23 15:11:13,489][ceph_volume.process][INFO ] stdout DEVNAME=/dev/dm-0
[2022-05-23 15:11:13,489][ceph_volume.process][INFO ] stdout DEVTYPE=disk
[2022-05-23 15:11:13,489][ceph_volume.process][INFO ] stdout MAJOR=254
[2022-05-23 15:11:13,489][ceph_volume.process][INFO ] stdout MINOR=0
[2022-05-23 15:11:13,489][ceph_volume.process][INFO ] stdout SUBSYSTEM=block
[2022-05-23 15:11:13,490][ceph_volume.process][INFO ] stdout USEC_INITIALIZED=48176660 44
[2022-05-23 15:11:13,490][ceph_volume.process][INFO ] stdout DM_UDEV_DISABLE_LIBRARY_F ALLBACK_FLAG=1
[2022-05-23 15:11:13,490][ceph_volume.process][INFO ] stdout DM_UDEV_PRIMARY_SOURCE_FL AG=1
[2022-05-23 15:11:13,490][ceph_volume.process][INFO ] stdout DM_ACTIVATION=1
[2022-05-23 15:11:13,490][ceph_volume.process][INFO ] stdout DM_NAME=ps--e51bc678--0e2 0--462a--b537--493485d153ee--wc--osd.0-main
[2022-05-23 15:11:13,490][ceph_volume.process][INFO ] stdout DM_UUID=LVM-kEm7AJau7LtL3 yx2B5Ar3aWeuAyhuOxOUDViz0ihtft3rP00lVWsGtl0w22HqoY0
[2022-05-23 15:11:13,490][ceph_volume.process][INFO ] stdout DM_SUSPENDED=0
[2022-05-23 15:11:13,490][ceph_volume.process][INFO ] stdout DM_UDEV_RULES=1
[2022-05-23 15:11:13,490][ceph_volume.process][INFO ] stdout DM_UDEV_RULES_VSN=2
[2022-05-23 15:11:13,490][ceph_volume.process][INFO ] stdout DM_VG_NAME=ps-e51bc678-0e 20-462a-b537-493485d153ee-wc-osd.0
[2022-05-23 15:11:13,490][ceph_volume.process][INFO ] stdout DM_LV_NAME=main
[2022-05-23 15:11:13,490][ceph_volume.process][INFO ] stdout DM_TABLE_STATE=LIVE
[2022-05-23 15:11:13,490][ceph_volume.process][INFO ] stdout DM_STATE=ACTIVE
[2022-05-23 15:11:13,490][ceph_volume.process][INFO ] stdout DEVLINKS=/dev/disk/by-id/ dm-name-ps--e51bc678--0e20--462a--b537--493485d153ee--wc--osd.0-main /dev/disk/by-id/dm -uuid-LVM-kEm7AJau7LtL3yx2B5Ar3aWeuAyhuOxOUDViz0ihtft3rP00lVWsGtl0w22HqoY0 /dev/ps-e51b c678-0e20-462a-b537-493485d153ee-wc-osd.0/main /dev/mapper/ps--e51bc678--0e20--462a--b5 37--493485d153ee--wc--osd.0-main
[2022-05-23 15:11:13,491][ceph_volume.process][INFO ] stdout TAGS=:systemd:
[2022-05-23 15:11:13,491][ceph_volume.process][INFO ] Running command: /usr/sbin/lvs - -noheadings --readonly --separator=";" -a --units=b --nosuffix -S lv_name=main,vg_name= ps-e51bc678-0e20-462a-b537-493485d153ee-wc-osd.0 -o lv_tags,lv_path,lv_name,vg_name,lv_ uuid,lv_size
[2022-05-23 15:11:13,545][ceph_volume.process][INFO ] stdout ";"/dev/ps-e51bc678-0e20- 462a-b537-493485d153ee-wc-osd.0/main";"main";"ps-e51bc678-0e20-462a-b537-493485d153ee-w c-osd.0";"UDViz0-ihtf-t3rP-00lV-WsGt-l0w2-2HqoY0";"21976806588416
[2022-05-23 15:11:13,545][ceph_volume.api.lvm][WARNING] device is not part of ceph: </d ev/ps-e51bc678-0e20-462a-b537-493485d153ee-wc-osd.0/main>
[2022-05-23 15:11:13,546][ceph_volume.process][INFO ] Running command: /usr/bin/ceph-a uthtool --gen-print-key
[2022-05-23 15:11:13,567][ceph_volume.process][INFO ] stdout AQABlotiXS23IRAAYIxkLyUad ewf5fY4RYVbgA==
[2022-05-23 15:11:13,567][ceph_volume.process][INFO ] Running command: /usr/bin/ceph - -cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.ke yring -i - osd new b486d663-8fa0-4af2-af12-07d9a917f452
[2022-05-23 15:11:14,108][ceph_volume.process][INFO ] stdout 0
[2022-05-23 15:11:14,108][ceph_volume.process][INFO ] stderr 2022-05-23T15:11:13.692+0 100 7f0808ffa700 -1 auth: unable to find a keyring on /etc/ceph/ceph.client.bootstrap-o sd.keyring,/etc/ceph/ceph.keyring,/etc/ceph/keyring,/etc/ceph/keyring.bin,: (2) No such file or directory
[2022-05-23 15:11:14,108][ceph_volume.process][INFO ] stderr 2022-05-23T15:11:13.692+0 100 7f0808ffa700 -1 AuthRegistry(0x7f08040595f0) no keyring found at /etc/ceph/ceph.cli ent.bootstrap-osd.keyring,/etc/ceph/ceph.keyring,/etc/ceph/keyring,/etc/ceph/keyring.bi n,, disabling cephx
[2022-05-23 15:11:14,108][ceph_volume.process][INFO ] Running command: /usr/bin/lsblk --nodeps -P -o NAME,KNAME,MAJ:MIN,FSTYPE,MOUNTPOINT,LABEL,UUID,RO,RM,MODEL,SIZE,STATE,O WNER,GROUP,MODE,ALIGNMENT,PHY-SEC,LOG-SEC,ROTA,SCHED,TYPE,DISC-ALN,DISC-GRAN,DISC-MAX,D ISC-ZERO,PKNAME,PARTLABEL /dev/sda1
[2022-05-23 15:11:14,113][ceph_volume.process][INFO ] stdout NAME="sda1" KNAME="sda1" MAJ:MIN="8:1" FSTYPE="" MOUNTPOINT="" LABEL="" UUID="" RO="0" RM="0" MODEL="" SIZE="20G " STATE="" OWNER="root" GROUP="disk" MODE="brw-rw----" ALIGNMENT="0" PHY-SEC="4096" LOG -SEC="512" ROTA="1" SCHED="cfq" TYPE="part" DISC-ALN="0" DISC-GRAN="0B" DISC-MAX="0B" D ISC-ZERO="0" PKNAME="sda" PARTLABEL="ceph-journal"
[2022-05-23 15:11:14,113][ceph_volume.process][INFO ] Running command: /usr/sbin/blkid -s PARTUUID -o value /dev/sda1
[2022-05-23 15:11:14,119][ceph_volume.process][INFO ] stdout c081bde1-8e12-43f5-aeaa-5 291fbbabf8f
[2022-05-23 15:11:14,119][ceph_volume.process][INFO ] Running command: /usr/sbin/lvs - -noheadings --readonly --separator=";" -a --units=b --nosuffix -S lv_name=main,vg_name= ps-e51bc678-0e20-462a-b537-493485d153ee-wc-osd.0 -o lv_tags,lv_path,lv_name,vg_name,lv_ uuid,lv_size
[2022-05-23 15:11:14,177][ceph_volume.process][INFO ] stdout ";"/dev/ps-e51bc678-0e20- 462a-b537-493485d153ee-wc-osd.0/main";"main";"ps-e51bc678-0e20-462a-b537-493485d153ee-w c-osd.0";"UDViz0-ihtf-t3rP-00lV-WsGt-l0w2-2HqoY0";"21976806588416
[2022-05-23 15:11:14,177][ceph_volume.process][INFO ] Running command: /usr/sbin/lvcha nge --addtag ceph.osd_fsid=b486d663-8fa0-4af2-af12-07d9a917f452 --addtag ceph.osd_id=0 --addtag ceph.cluster_fsid=e51bc678-0e20-462a-b537-493485d153ee --addtag ceph.cluster_n ame=ceph --addtag ceph.crush_device_class=None --addtag ceph.osdspec_affinity= --addtag ceph.cephx_lockbox_secret= --addtag ceph.encrypted=0 --addtag ceph.type=data --addtag ceph.vdo=0 --addtag ceph.journal_uuid=c081bde1-8e12-43f5-aeaa-5291fbbabf8f --addtag cep h.journal_device=/dev/sda1 --addtag ceph.data_device=/dev/ps-e51bc678-0e20-462a-b537-49 3485d153ee-wc-osd.0/main --addtag ceph.data_uuid=UDViz0-ihtf-t3rP-00lV-WsGt-l0w2-2HqoY0 /dev/ps-e51bc678-0e20-462a-b537-493485d153ee-wc-osd.0/main
[2022-05-23 15:11:14,265][ceph_volume.process][INFO ] stdout Logical volume ps-e51bc67 8-0e20-462a-b537-493485d153ee-wc-osd.0/main changed.
[2022-05-23 15:11:14,266][ceph_volume.process][INFO ] Running command: /usr/bin/ceph-a uthtool --gen-print-key
[2022-05-23 15:11:14,286][ceph_volume.process][INFO ] stdout AQAClotinPUCERAA5p/tE3iZe oBJJzKnHFYOYQ==
[2022-05-23 15:11:14,288][ceph_volume.process][INFO ] Running command: /usr/sbin/mkfs -t xfs -f -i size=2048 /dev/ps-e51bc678-0e20-462a-b537-493485d153ee-wc-osd.0/main
[2022-05-23 15:11:20,843][ceph_volume.process][INFO ] stdout meta-data=/dev/ps-e51bc67 8-0e20-462a-b537-493485d153ee-wc-osd.0/main isize=2048 agcount=20, agsize=268435455 b lks
= sectsz=4096 attr=2, projid32bit=1
= crc=1 finobt=1, sparse=1, rmapbt=0
= reflink=1
data = bsize=4096 blocks=5365431296, imaxpct=5
= sunit=0 swidth=0 blks
naming =version 2 bsize=4096 ascii-ci=0, ftype=1
log =internal log bsize=4096 blocks=521728, version=2
= sectsz=4096 sunit=1 blks, lazy-count=1
realtime =none extsz=4096 blocks=0, rtextents=0
[2022-05-23 15:11:20,844][ceph_volume.process][INFO ] Running command: /usr/bin/mount -t xfs -o rw,noatime,inode64 /dev/ps-e51bc678-0e20-462a-b537-493485d153ee-wc-osd.0/main /var/lib/ceph/osd/ceph-0
[2022-05-23 15:11:20,878][ceph_volume.process][INFO ] stderr mount: /var/lib/ceph/osd/ceph-0: mount(2) system call failed: Operation not supported.
[2022-05-23 15:11:20,878][ceph_volume.devices.lvm.prepare][ERROR ] lvm prepare was unable to complete
Traceback (most recent call last):
File "/usr/lib/python3/dist-packages/ceph_volume/devices/lvm/prepare.py", line 252, i n safe_prepare
self.prepare()
File "/usr/lib/python3/dist-packages/ceph_volume/decorators.py", line 16, in is_root
return func(*a, **kw)
File "/usr/lib/python3/dist-packages/ceph_volume/devices/lvm/prepare.py", line 346, i n prepare
prepare_filestore(
File "/usr/lib/python3/dist-packages/ceph_volume/devices/lvm/prepare.py", line 65, in prepare_filestore
prepare_utils.mount_osd(device, osd_id, is_vdo=is_vdo)
File "/usr/lib/python3/dist-packages/ceph_volume/util/prepare.py", line 320, in mount _osd
process.run(command)
File "/usr/lib/python3/dist-packages/ceph_volume/process.py", line 153, in run
raise RuntimeError(msg)
RuntimeError: command returned non-zero exit status: 32
[2022-05-23 15:11:20,880][ceph_volume.devices.lvm.prepare][INFO ] will rollback OSD ID creation
[2022-05-23 15:11:20,880][ceph_volume.process][INFO ] Running command: /usr/bin/ceph - -cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.ke yring osd purge-new osd.0 --yes-i-really-mean-it
[2022-05-23 15:11:21,019][ceph_volume.process][INFO ] stderr 2022-05-23T15:11:21.004+0 100 7fc33e3ba700 -1 auth: unable to find a keyring on /etc/ceph/ceph.client.bootstrap-o sd.keyring,/etc/ceph/ceph.keyring,/etc/ceph/keyring,/etc/ceph/keyring.bin,: (2) No such file or directory
2022-05-23T15:11:21.004+0100 7fc33e3ba700 -1 AuthRegistry(0x7fc3380595f0) no keyring fo und at /etc/ceph/ceph.client.bootstrap-osd.keyring,/etc/ceph/ceph.keyring,/etc/ceph/key ring,/etc/ceph/keyring.bin,, disabling cephx
[2022-05-23 15:11:21,409][ceph_volume.process][INFO ] stderr purged osd.0
[2022-05-23 15:11:21,421][ceph_volume][ERROR ] exception caught by decorator
Traceback (most recent call last):
File "/usr/lib/python3/dist-packages/ceph_volume/decorators.py", line 59, in newfunc
return f(*a, **kw)
File "/usr/lib/python3/dist-packages/ceph_volume/main.py", line 152, in main
terminal.dispatch(self.mapper, subcommand_args)
File "/usr/lib/python3/dist-packages/ceph_volume/terminal.py", line 194, in dispatch
instance.main()
File "/usr/lib/python3/dist-packages/ceph_volume/devices/lvm/main.py", line 46, in ma in
terminal.dispatch(self.mapper, self.argv)
File "/usr/lib/python3/dist-packages/ceph_volume/terminal.py", line 194, in dispatch
instance.main()
File "/usr/lib/python3/dist-packages/ceph_volume/devices/lvm/prepare.py", line 441, i n main
self.safe_prepare()
File "/usr/lib/python3/dist-packages/ceph_volume/devices/lvm/prepare.py", line 252, i n safe_prepare
self.prepare()
File "/usr/lib/python3/dist-packages/ceph_volume/decorators.py", line 16, in is_root
return func(*a, **kw)
File "/usr/lib/python3/dist-packages/ceph_volume/devices/lvm/prepare.py", line 346, i n prepare
prepare_filestore(
File "/usr/lib/python3/dist-packages/ceph_volume/devices/lvm/prepare.py", line 65, in prepare_filestore
prepare_utils.mount_osd(device, osd_id, is_vdo=is_vdo)
File "/usr/lib/python3/dist-packages/ceph_volume/util/prepare.py", line 320, in mount _osd
process.run(command)
File "/usr/lib/python3/dist-packages/ceph_volume/process.py", line 153, in run
raise RuntimeError(msg)
RuntimeError: command returned non-zero exit status: 32
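The failure above is at the final mount step, not in the LVM preparation itself. A minimal sketch for reproducing it by hand and capturing the kernel's reason, assuming the logical volume path from the log still exists on the node, would be:

# re-run the exact mount command ceph-volume attempted (path copied from the log above)
mount -t xfs -o rw,noatime,inode64 /dev/ps-e51bc678-0e20-462a-b537-493485d153ee-wc-osd.0/main /var/lib/ceph/osd/ceph-0

# the kernel normally logs why mount(2) returned "Operation not supported"
dmesg | tail -n 20

# compare the logical and physical sector sizes the RAID virtual disk reports
blockdev --getss --getpbsz /dev/sda

Only the mount line is taken verbatim from the log; the follow-up checks are suggestions, not output from this system.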
[2022-05-24 13:36:17,183][ceph_volume.main][INFO ] Running command: ceph-volume --log- path /opt/petasan/log lvm prepare --filestore --data /dev/sda2 --journal /dev/sda1
[2022-05-24 13:36:17,186][ceph_volume.process][INFO ] Running command: /usr/bin/lsblk -plno KNAME,NAME,TYPE
[2022-05-24 13:36:17,192][ceph_volume.process][INFO ] stdout /dev/sda /dev/sda disk
[2022-05-24 13:36:17,192][ceph_volume.process][INFO ] stdout /dev/sda1 /dev/sda1 part
[2022-05-24 13:36:17,192][ceph_volume.process][INFO ] stdout /dev/sda2 /dev/sda2 part
[2022-05-24 13:36:17,192][ceph_volume.process][INFO ] stdout /dev/sdb /dev/sdb disk
[2022-05-24 13:36:17,192][ceph_volume.process][INFO ] stdout /dev/sdb1 /dev/sdb1 part
[2022-05-24 13:36:17,192][ceph_volume.process][INFO ] stdout /dev/sdb2 /dev/sdb2 part
[2022-05-24 13:36:17,192][ceph_volume.process][INFO ] stdout /dev/sdb3 /dev/sdb3 part
[2022-05-24 13:36:17,192][ceph_volume.process][INFO ] stdout /dev/sdb4 /dev/sdb4 part
[2022-05-24 13:36:17,192][ceph_volume.process][INFO ] stdout /dev/sdb5 /dev/sdb5 part
[2022-05-24 13:36:17,192][ceph_volume.process][INFO ] stdout /dev/sdc /dev/sdc disk
[2022-05-24 13:36:17,193][ceph_volume.process][INFO ] stdout /dev/sdc1 /dev/sdc1 part
[2022-05-24 13:36:17,193][ceph_volume.process][INFO ] stdout /dev/sdc2 /dev/sdc2 part
[2022-05-24 13:36:17,201][ceph_volume.process][INFO ] Running command: /usr/sbin/lvs - -noheadings --readonly --separator=";" -a --units=b --nosuffix -S lv_path=/dev/sda2 -o lv_tags,lv_path,lv_name,vg_name,lv_uuid,lv_size
[2022-05-24 13:36:17,373][ceph_volume.process][INFO ] Running command: /usr/bin/lsblk --nodeps -P -o NAME,KNAME,MAJ:MIN,FSTYPE,MOUNTPOINT,LABEL,UUID,RO,RM,MODEL,SIZE,STATE,O WNER,GROUP,MODE,ALIGNMENT,PHY-SEC,LOG-SEC,ROTA,SCHED,TYPE,DISC-ALN,DISC-GRAN,DISC-MAX,D ISC-ZERO,PKNAME,PARTLABEL /dev/sda2
[2022-05-24 13:36:17,378][ceph_volume.process][INFO ] stdout NAME="sda2" KNAME="sda2" MAJ:MIN="8:2" FSTYPE="" MOUNTPOINT="" LABEL="" UUID="" RO="0" RM="0" MODEL="" SIZE="18. 2T" STATE="" OWNER="root" GROUP="disk" MODE="brw-rw----" ALIGNMENT="0" PHY-SEC="4096" L OG-SEC="512" ROTA="1" SCHED="cfq" TYPE="part" DISC-ALN="0" DISC-GRAN="0B" DISC-MAX="0B" DISC-ZERO="0" PKNAME="sda" PARTLABEL="ceph-data"
[2022-05-24 13:36:17,378][ceph_volume.process][INFO ] Running command: /usr/sbin/blkid -p /dev/sda2
[2022-05-24 13:36:17,384][ceph_volume.process][INFO ] stdout /dev/sda2: PART_ENTRY_SCH EME="gpt" PART_ENTRY_NAME="ceph-data" PART_ENTRY_UUID="076b9d21-1e26-4751-b602-2b108d59 e8df" PART_ENTRY_TYPE="0fc63daf-8483-4772-8e79-3d69d8477de4" PART_ENTRY_NUMBER="2" PART _ENTRY_OFFSET="41945088" PART_ENTRY_SIZE="39017510879" PART_ENTRY_DISK="8:0"
[2022-05-24 13:36:17,385][ceph_volume.process][INFO ] Running command: /usr/sbin/pvs - -noheadings --readonly --units=b --nosuffix --separator=";" -o vg_name,pv_count,lv_coun t,vg_attr,vg_extent_count,vg_free_count,vg_extent_size /dev/sda2
[2022-05-24 13:36:17,437][ceph_volume.process][INFO ] stderr Failed to find physical v olume "/dev/sda2".
[2022-05-24 13:36:17,438][ceph_volume.process][INFO ] Running command: /usr/bin/ceph-b luestore-tool show-label --dev /dev/sda2
[2022-05-24 13:36:17,531][ceph_volume.process][INFO ] stderr unable to read label for /dev/sda2: (2) No such file or directory
[2022-05-24 13:36:17,531][ceph_volume.process][INFO ] Running command: /usr/bin/ceph-b luestore-tool show-label --dev /dev/sda2
[2022-05-24 13:36:17,560][ceph_volume.process][INFO ] stderr unable to read label for /dev/sda2: (2) No such file or directory
[2022-05-24 13:36:17,561][ceph_volume.process][INFO ] Running command: /usr/bin/udevad m info --query=property /dev/sda2
[2022-05-24 13:36:17,565][ceph_volume.process][INFO ] stdout DEVPATH=/devices/pci0000: 00/0000:00:03.2/0000:06:00.0/host0/target0:2:0/0:2:0:0/block/sda/sda2
[2022-05-24 13:36:17,565][ceph_volume.process][INFO ] stdout DEVNAME=/dev/sda2
[2022-05-24 13:36:17,565][ceph_volume.process][INFO ] stdout DEVTYPE=partition
[2022-05-24 13:36:17,565][ceph_volume.process][INFO ] stdout PARTN=2
[2022-05-24 13:36:17,565][ceph_volume.process][INFO ] stdout PARTNAME=ceph-data
[2022-05-24 13:36:17,566][ceph_volume.process][INFO ] stdout MAJOR=8
[2022-05-24 13:36:17,566][ceph_volume.process][INFO ] stdout MINOR=2
[2022-05-24 13:36:17,566][ceph_volume.process][INFO ] stdout SUBSYSTEM=block
[2022-05-24 13:36:17,566][ceph_volume.process][INFO ] stdout USEC_INITIALIZED=78736261 179
[2022-05-24 13:36:17,566][ceph_volume.process][INFO ] stdout SCSI_TPGS=0
[2022-05-24 13:36:17,566][ceph_volume.process][INFO ] stdout SCSI_TYPE=disk
[2022-05-24 13:36:17,566][ceph_volume.process][INFO ] stdout SCSI_VENDOR=LSI
[2022-05-24 13:36:17,566][ceph_volume.process][INFO ] stdout SCSI_VENDOR_ENC=LSI\x20\x 20\x20\x20\x20
[2022-05-24 13:36:17,566][ceph_volume.process][INFO ] stdout SCSI_MODEL=MR9266-4i
[2022-05-24 13:36:17,566][ceph_volume.process][INFO ] stdout SCSI_MODEL_ENC=MR9266-4i\ x20\x20\x20\x20\x20\x20\x20
[2022-05-24 13:36:17,566][ceph_volume.process][INFO ] stdout SCSI_REVISION=3.27
[2022-05-24 13:36:17,566][ceph_volume.process][INFO ] stdout SCSI_IDENT_SERIAL=00e9876 7098a591e2a40d8ec06b00506
[2022-05-24 13:36:17,566][ceph_volume.process][INFO ] stdout SCSI_IDENT_LUN_NAA_REGEXT =600605b006ecd8402a1e598a096787e9
[2022-05-24 13:36:17,566][ceph_volume.process][INFO ] stdout ID_SCSI=1
[2022-05-24 13:36:17,567][ceph_volume.process][INFO ] stdout ID_VENDOR=LSI
[2022-05-24 13:36:17,567][ceph_volume.process][INFO ] stdout ID_VENDOR_ENC=LSI\x20\x20 \x20\x20\x20
[2022-05-24 13:36:17,567][ceph_volume.process][INFO ] stdout ID_MODEL=MR9266-4i
[2022-05-24 13:36:17,567][ceph_volume.process][INFO ] stdout ID_MODEL_ENC=MR9266-4i\x2 0\x20\x20\x20\x20\x20\x20
[2022-05-24 13:36:17,567][ceph_volume.process][INFO ] stdout ID_REVISION=3.27
[2022-05-24 13:36:17,567][ceph_volume.process][INFO ] stdout ID_TYPE=disk
[2022-05-24 13:36:17,567][ceph_volume.process][INFO ] stdout ID_WWN=0x600605b006ecd840 2a1e598a096787e9
[2022-05-24 13:36:17,567][ceph_volume.process][INFO ] stdout ID_WWN_WITH_EXTENSION=0x6 00605b006ecd8402a1e598a096787e9
[2022-05-24 13:36:17,567][ceph_volume.process][INFO ] stdout ID_BUS=scsi
[2022-05-24 13:36:17,567][ceph_volume.process][INFO ] stdout ID_SERIAL=3600605b006ecd8 402a1e598a096787e9
[2022-05-24 13:36:17,567][ceph_volume.process][INFO ] stdout ID_SERIAL_SHORT=600605b00 6ecd8402a1e598a096787e9
[2022-05-24 13:36:17,567][ceph_volume.process][INFO ] stdout DM_MULTIPATH_DEVICE_PATH= 0
[2022-05-24 13:36:17,567][ceph_volume.process][INFO ] stdout ID_SCSI_INQUIRY=1
[2022-05-24 13:36:17,567][ceph_volume.process][INFO ] stdout ID_PATH=pci-0000:06:00.0- scsi-0:2:0:0
[2022-05-24 13:36:17,568][ceph_volume.process][INFO ] stdout ID_PATH_TAG=pci-0000_06_0 0_0-scsi-0_2_0_0
[2022-05-24 13:36:17,568][ceph_volume.process][INFO ] stdout ID_PART_TABLE_UUID=fec0fa a0-0d54-4e19-a330-eaf777dd4911
[2022-05-24 13:36:17,568][ceph_volume.process][INFO ] stdout ID_PART_TABLE_TYPE=gpt
[2022-05-24 13:36:17,568][ceph_volume.process][INFO ] stdout ID_PART_ENTRY_SCHEME=gpt
[2022-05-24 13:36:17,568][ceph_volume.process][INFO ] stdout ID_PART_ENTRY_NAME=ceph-d ata
[2022-05-24 13:36:17,568][ceph_volume.process][INFO ] stdout ID_PART_ENTRY_UUID=076b9d 21-1e26-4751-b602-2b108d59e8df
[2022-05-24 13:36:17,568][ceph_volume.process][INFO ] stdout ID_PART_ENTRY_TYPE=0fc63d af-8483-4772-8e79-3d69d8477de4
[2022-05-24 13:36:17,568][ceph_volume.process][INFO ] stdout ID_PART_ENTRY_NUMBER=2
[2022-05-24 13:36:17,568][ceph_volume.process][INFO ] stdout ID_PART_ENTRY_OFFSET=4194 5088
[2022-05-24 13:36:17,568][ceph_volume.process][INFO ] stdout ID_PART_ENTRY_SIZE=390175 10879
[2022-05-24 13:36:17,568][ceph_volume.process][INFO ] stdout ID_PART_ENTRY_DISK=8:0
[2022-05-24 13:36:17,568][ceph_volume.process][INFO ] stdout DEVLINKS=/dev/disk/by-par tuuid/076b9d21-1e26-4751-b602-2b108d59e8df /dev/disk/by-path/pci-0000:06:00.0-scsi-0:2: 0:0-part2 /dev/disk/by-partlabel/ceph-data /dev/disk/by-id/wwn-0x600605b006ecd8402a1e59 8a096787e9-part2 /dev/disk/by-id/scsi-3600605b006ecd8402a1e598a096787e9-part2 /dev/disk /by-id/scsi-SLSI_MR9266-4i_00e98767098a591e2a40d8ec06b00506-part2
[2022-05-24 13:36:17,568][ceph_volume.process][INFO ] stdout TAGS=:systemd:
[2022-05-24 13:36:17,569][ceph_volume.api.lvm][WARNING] device is not part of ceph: Non e
[2022-05-24 13:36:17,570][ceph_volume.process][INFO ] Running command: /usr/bin/ceph-a uthtool --gen-print-key
[2022-05-24 13:36:17,592][ceph_volume.process][INFO ] stdout AQBB0Yxit6w1IxAAmB50woaLj CcUE32615Vd2A==
[2022-05-24 13:36:17,593][ceph_volume.process][INFO ] Running command: /usr/bin/ceph - -cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.ke yring -i - osd new 55f50f2f-d558-4fc4-be4c-4378bf094232
[2022-05-24 13:36:18,150][ceph_volume.process][INFO ] stdout 0
[2022-05-24 13:36:18,151][ceph_volume.process][INFO ] stderr 2022-05-24T13:36:17.719+0 100 7f8d7a3f2700 -1 auth: unable to find a keyring on /etc/ceph/ceph.client.bootstrap-o sd.keyring,/etc/ceph/ceph.keyring,/etc/ceph/keyring,/etc/ceph/keyring.bin,: (2) No such file or directory
[2022-05-24 13:36:18,151][ceph_volume.process][INFO ] stderr 2022-05-24T13:36:17.719+0 100 7f8d7a3f2700 -1 AuthRegistry(0x7f8d740595f0) no keyring found at /etc/ceph/ceph.cli ent.bootstrap-osd.keyring,/etc/ceph/ceph.keyring,/etc/ceph/keyring,/etc/ceph/keyring.bi n,, disabling cephx
[2022-05-24 13:36:18,151][ceph_volume.process][INFO ] Running command: /usr/bin/lsblk --nodeps -P -o NAME,KNAME,MAJ:MIN,FSTYPE,MOUNTPOINT,LABEL,UUID,RO,RM,MODEL,SIZE,STATE,O WNER,GROUP,MODE,ALIGNMENT,PHY-SEC,LOG-SEC,ROTA,SCHED,TYPE,DISC-ALN,DISC-GRAN,DISC-MAX,D ISC-ZERO,PKNAME,PARTLABEL /dev/sda1
[2022-05-24 13:36:18,156][ceph_volume.process][INFO ] stdout NAME="sda1" KNAME="sda1" MAJ:MIN="8:1" FSTYPE="" MOUNTPOINT="" LABEL="" UUID="" RO="0" RM="0" MODEL="" SIZE="20G " STATE="" OWNER="root" GROUP="disk" MODE="brw-rw----" ALIGNMENT="0" PHY-SEC="4096" LOG -SEC="512" ROTA="1" SCHED="cfq" TYPE="part" DISC-ALN="0" DISC-GRAN="0B" DISC-MAX="0B" D ISC-ZERO="0" PKNAME="sda" PARTLABEL="ceph-journal"
[2022-05-24 13:36:18,156][ceph_volume.process][INFO ] Running command: /usr/sbin/blkid -s PARTUUID -o value /dev/sda1
[2022-05-24 13:36:18,184][ceph_volume.process][INFO ] stdout 791fa4c5-5eb5-4eef-8ec4-8 14973059c10
[2022-05-24 13:36:18,184][ceph_volume.process][INFO ] Running command: /usr/bin/lsblk --nodeps -P -o NAME,KNAME,MAJ:MIN,FSTYPE,MOUNTPOINT,LABEL,UUID,RO,RM,MODEL,SIZE,STATE,O WNER,GROUP,MODE,ALIGNMENT,PHY-SEC,LOG-SEC,ROTA,SCHED,TYPE,DISC-ALN,DISC-GRAN,DISC-MAX,D ISC-ZERO,PKNAME,PARTLABEL /dev/sda2
[2022-05-24 13:36:18,188][ceph_volume.process][INFO ] stdout NAME="sda2" KNAME="sda2" MAJ:MIN="8:2" FSTYPE="" MOUNTPOINT="" LABEL="" UUID="" RO="0" RM="0" MODEL="" SIZE="18. 2T" STATE="" OWNER="root" GROUP="disk" MODE="brw-rw----" ALIGNMENT="0" PHY-SEC="4096" L OG-SEC="512" ROTA="1" SCHED="cfq" TYPE="part" DISC-ALN="0" DISC-GRAN="0B" DISC-MAX="0B" DISC-ZERO="0" PKNAME="sda" PARTLABEL="ceph-data"
[2022-05-24 13:36:18,189][ceph_volume.devices.lvm.prepare][DEBUG ] data device size: 0. 00 B
[2022-05-24 13:36:18,189][ceph_volume.process][INFO ] Running command: /usr/sbin/pvs - -noheadings --readonly --units=b --nosuffix --separator=";" -o vg_name,pv_count,lv_coun t,vg_attr,vg_extent_count,vg_free_count,vg_extent_size /dev/sda2
[2022-05-24 13:36:18,257][ceph_volume.process][INFO ] stderr Failed to find physical v olume "/dev/sda2".
[2022-05-24 13:36:18,258][ceph_volume.process][INFO ] Running command: /usr/sbin/vgcre ate --force --yes ceph-1cbcc4b0-3ccc-431b-bff6-3c03b1dc9558 /dev/sda2
[2022-05-24 13:36:18,310][ceph_volume.process][INFO ] stdout Physical volume "/dev/sda 2" successfully created.
[2022-05-24 13:36:18,316][ceph_volume.process][INFO ] stdout Volume group "ceph-1cbcc4 b0-3ccc-431b-bff6-3c03b1dc9558" successfully created
[2022-05-24 13:36:18,361][ceph_volume.process][INFO ] Running command: /usr/sbin/vgs - -noheadings --readonly --units=b --nosuffix --separator=";" -S vg_name=ceph-1cbcc4b0-3c cc-431b-bff6-3c03b1dc9558 -o vg_name,pv_count,lv_count,vg_attr,vg_extent_count,vg_free_ count,vg_extent_size
[2022-05-24 13:36:18,429][ceph_volume.process][INFO ] stdout ceph-1cbcc4b0-3ccc-431b-b ff6-3c03b1dc9558";"1";"0";"wz--n-";"4762879";"4762879";"4194304
[2022-05-24 13:36:18,429][ceph_volume.api.lvm][DEBUG ] slots was passed: 1 -> 4762879
[2022-05-24 13:36:18,430][ceph_volume.process][INFO ] Running command: /usr/sbin/lvcre ate --yes -l 4762879 -n osd-data-55f50f2f-d558-4fc4-be4c-4378bf094232 ceph-1cbcc4b0-3cc c-431b-bff6-3c03b1dc9558
[2022-05-24 13:36:18,512][ceph_volume.process][INFO ] stdout Logical volume "osd-data- 55f50f2f-d558-4fc4-be4c-4378bf094232" created.
[2022-05-24 13:36:18,549][ceph_volume.process][INFO ] Running command: /usr/sbin/lvs - -noheadings --readonly --separator=";" -a --units=b --nosuffix -S lv_name=osd-data-55f5 0f2f-d558-4fc4-be4c-4378bf094232,vg_name=ceph-1cbcc4b0-3ccc-431b-bff6-3c03b1dc9558 -o l v_tags,lv_path,lv_name,vg_name,lv_uuid,lv_size
[2022-05-24 13:36:18,601][ceph_volume.process][INFO ] stdout ";"/dev/ceph-1cbcc4b0-3cc c-431b-bff6-3c03b1dc9558/osd-data-55f50f2f-d558-4fc4-be4c-4378bf094232";"osd-data-55f50 f2f-d558-4fc4-be4c-4378bf094232";"ceph-1cbcc4b0-3ccc-431b-bff6-3c03b1dc9558";"rKjfY4-4p ed-b1CW-X5xr-QY8C-RklX-Jdew6J";"19976962441216
[2022-05-24 13:36:18,601][ceph_volume.process][INFO ] Running command: /usr/sbin/lvcha nge --addtag ceph.type=data --addtag ceph.data_device=/dev/ceph-1cbcc4b0-3ccc-431b-bff6 -3c03b1dc9558/osd-data-55f50f2f-d558-4fc4-be4c-4378bf094232 /dev/ceph-1cbcc4b0-3ccc-431 b-bff6-3c03b1dc9558/osd-data-55f50f2f-d558-4fc4-be4c-4378bf094232
[2022-05-24 13:36:18,681][ceph_volume.process][INFO ] stdout Logical volume ceph-1cbcc 4b0-3ccc-431b-bff6-3c03b1dc9558/osd-data-55f50f2f-d558-4fc4-be4c-4378bf094232 changed.
[2022-05-24 13:36:18,681][ceph_volume.process][INFO ] Running command: /usr/sbin/lvcha nge --deltag ceph.type=data --deltag ceph.data_device=/dev/ceph-1cbcc4b0-3ccc-431b-bff6 -3c03b1dc9558/osd-data-55f50f2f-d558-4fc4-be4c-4378bf094232 /dev/ceph-1cbcc4b0-3ccc-431 b-bff6-3c03b1dc9558/osd-data-55f50f2f-d558-4fc4-be4c-4378bf094232
[2022-05-24 13:36:18,749][ceph_volume.process][INFO ] stdout Logical volume ceph-1cbcc 4b0-3ccc-431b-bff6-3c03b1dc9558/osd-data-55f50f2f-d558-4fc4-be4c-4378bf094232 changed.
[2022-05-24 13:36:18,749][ceph_volume.process][INFO ] Running command: /usr/sbin/lvcha nge --addtag ceph.osd_fsid=55f50f2f-d558-4fc4-be4c-4378bf094232 --addtag ceph.osd_id=0 --addtag ceph.cluster_fsid=e51bc678-0e20-462a-b537-493485d153ee --addtag ceph.cluster_n ame=ceph --addtag ceph.crush_device_class=None --addtag ceph.osdspec_affinity= --addtag ceph.cephx_lockbox_secret= --addtag ceph.encrypted=0 --addtag ceph.type=data --addtag ceph.vdo=0 --addtag ceph.journal_uuid=791fa4c5-5eb5-4eef-8ec4-814973059c10 --addtag cep h.journal_device=/dev/sda1 --addtag ceph.data_device=/dev/ceph-1cbcc4b0-3ccc-431b-bff6- 3c03b1dc9558/osd-data-55f50f2f-d558-4fc4-be4c-4378bf094232 --addtag ceph.data_uuid=rKjf Y4-4ped-b1CW-X5xr-QY8C-RklX-Jdew6J /dev/ceph-1cbcc4b0-3ccc-431b-bff6-3c03b1dc9558/osd-d ata-55f50f2f-d558-4fc4-be4c-4378bf094232
[2022-05-24 13:36:18,821][ceph_volume.process][INFO ] stdout Logical volume ceph-1cbcc 4b0-3ccc-431b-bff6-3c03b1dc9558/osd-data-55f50f2f-d558-4fc4-be4c-4378bf094232 changed.
[2022-05-24 13:36:18,822][ceph_volume.process][INFO ] Running command: /usr/bin/ceph-a uthtool --gen-print-key
[2022-05-24 13:36:18,843][ceph_volume.process][INFO ] stdout AQBC0YxixPEpMhAAkNWd0CNrz rih6+2bFB3h3g==
[2022-05-24 13:36:18,844][ceph_volume.process][INFO ] Running command: /usr/sbin/mkfs -t xfs -f -i size=2048 /dev/ceph-1cbcc4b0-3ccc-431b-bff6-3c03b1dc9558/osd-data-55f50f2f -d558-4fc4-be4c-4378bf094232
[2022-05-24 13:36:20,156][ceph_volume.process][INFO ] stdout meta-data=/dev/ceph-1cbcc 4b0-3ccc-431b-bff6-3c03b1dc9558/osd-data-55f50f2f-d558-4fc4-be4c-4378bf094232 isize=204 8 agcount=19, agsize=268435455 blks
= sectsz=4096 attr=2, projid32bit=1
= crc=1 finobt=1, sparse=1, rmapbt=0
= reflink=1
data = bsize=4096 blocks=4877188096, imaxpct=5
= sunit=0 swidth=0 blks
naming =version 2 bsize=4096 ascii-ci=0, ftype=1
log =internal log bsize=4096 blocks=521728, version=2
= sectsz=4096 sunit=1 blks, lazy-count=1
realtime =none extsz=4096 blocks=0, rtextents=0
[2022-05-24 13:36:20,157][ceph_volume.process][INFO ] Running command: /usr/bin/mount -t xfs -o rw,noatime,inode64 /dev/ceph-1cbcc4b0-3ccc-431b-bff6-3c03b1dc9558/osd-data-55 f50f2f-d558-4fc4-be4c-4378bf094232 /var/lib/ceph/osd/ceph-0
[2022-05-24 13:36:20,454][ceph_volume.process][INFO ] stderr mount: /var/lib/ceph/osd/ceph-0: mount(2) system call failed: Operation not supported.
[2022-05-24 13:36:20,455][ceph_volume.devices.lvm.prepare][ERROR ] lvm prepare was unable to complete
Traceback (most recent call last):
File "/usr/lib/python3/dist-packages/ceph_volume/devices/lvm/prepare.py", line 252, i n safe_prepare
self.prepare()
File "/usr/lib/python3/dist-packages/ceph_volume/decorators.py", line 16, in is_root
return func(*a, **kw)
File "/usr/lib/python3/dist-packages/ceph_volume/devices/lvm/prepare.py", line 346, i n prepare
prepare_filestore(
File "/usr/lib/python3/dist-packages/ceph_volume/devices/lvm/prepare.py", line 65, in prepare_filestore
prepare_utils.mount_osd(device, osd_id, is_vdo=is_vdo)
File "/usr/lib/python3/dist-packages/ceph_volume/util/prepare.py", line 320, in mount _osd
process.run(command)
File "/usr/lib/python3/dist-packages/ceph_volume/process.py", line 153, in run
raise RuntimeError(msg)
RuntimeError: command returned non-zero exit status: 32
[2022-05-24 13:36:20,456][ceph_volume.devices.lvm.prepare][INFO ] will rollback OSD ID creation
[2022-05-24 13:36:20,457][ceph_volume.process][INFO ] Running command: /usr/bin/ceph - -cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.ke yring osd purge-new osd.0 --yes-i-really-mean-it
[2022-05-24 13:36:20,596][ceph_volume.process][INFO ] stderr 2022-05-24T13:36:20.583+0 100 7f05dbf7d700 -1 auth: unable to find a keyring on /etc/ceph/ceph.client.bootstrap-o sd.keyring,/etc/ceph/ceph.keyring,/etc/ceph/keyring,/etc/ceph/keyring.bin,: (2) No such file or directory
2022-05-24T13:36:20.583+0100 7f05dbf7d700 -1 AuthRegistry(0x7f05d40595f0) no keyring fo und at /etc/ceph/ceph.client.bootstrap-osd.keyring,/etc/ceph/ceph.keyring,/etc/ceph/key ring,/etc/ceph/keyring.bin,, disabling cephx
[2022-05-24 13:36:21,002][ceph_volume.process][INFO ] stderr purged osd.0
[2022-05-24 13:36:21,015][ceph_volume][ERROR ] exception caught by decorator
Traceback (most recent call last):
File "/usr/lib/python3/dist-packages/ceph_volume/decorators.py", line 59, in newfunc
return f(*a, **kw)
File "/usr/lib/python3/dist-packages/ceph_volume/main.py", line 152, in main
terminal.dispatch(self.mapper, subcommand_args)
File "/usr/lib/python3/dist-packages/ceph_volume/terminal.py", line 194, in dispatch
instance.main()
File "/usr/lib/python3/dist-packages/ceph_volume/devices/lvm/main.py", line 46, in ma in
terminal.dispatch(self.mapper, self.argv)
File "/usr/lib/python3/dist-packages/ceph_volume/terminal.py", line 194, in dispatch
instance.main()
File "/usr/lib/python3/dist-packages/ceph_volume/devices/lvm/prepare.py", line 441, i n main
self.safe_prepare()
File "/usr/lib/python3/dist-packages/ceph_volume/devices/lvm/prepare.py", line 252, i n safe_prepare
self.prepare()
File "/usr/lib/python3/dist-packages/ceph_volume/decorators.py", line 16, in is_root
return func(*a, **kw)
File "/usr/lib/python3/dist-packages/ceph_volume/devices/lvm/prepare.py", line 346, i n prepare
prepare_filestore(
File "/usr/lib/python3/dist-packages/ceph_volume/devices/lvm/prepare.py", line 65, in prepare_filestore
prepare_utils.mount_osd(device, osd_id, is_vdo=is_vdo)
File "/usr/lib/python3/dist-packages/ceph_volume/util/prepare.py", line 320, in mount _osd
process.run(command)
File "/usr/lib/python3/dist-packages/ceph_volume/process.py", line 153, in run
raise RuntimeError(msg)
RuntimeError: command returned non-zero exit status: 32
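In this attempt the volume group, the logical volume and the XFS filesystem are all created successfully; only the final mount fails again with the same error. A quick read-only check that the filesystem really was written to the new LV, using the device path from the log above, could be:

# prints TYPE="xfs" and a UUID if mkfs.xfs completed as the log suggests
blkid /dev/ceph-1cbcc4b0-3ccc-431b-bff6-3c03b1dc9558/osd-data-55f50f2f-d558-4fc4-be4c-4378bf094232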
[2022-05-24 15:12:28,502][ceph_volume.main][INFO ] Running command: ceph-volume --log- path /opt/petasan/log lvm activate --filestore 0 55f50f2f-d558-4fc4-be4c-4378bf094232
[2022-05-24 15:12:28,505][ceph_volume.process][INFO ] Running command: /usr/sbin/lvs - -noheadings --readonly --separator=";" -a --units=b --nosuffix -S tags={ceph.osd_id=0,c eph.osd_fsid=55f50f2f-d558-4fc4-be4c-4378bf094232} -o lv_tags,lv_path,lv_name,vg_name,l v_uuid,lv_size
[2022-05-24 15:12:28,569][ceph_volume.process][INFO ] stdout ceph.cephx_lockbox_secret =,ceph.cluster_fsid=e51bc678-0e20-462a-b537-493485d153ee,ceph.cluster_name=ceph,ceph.cr ush_device_class=None,ceph.data_device=/dev/ceph-1cbcc4b0-3ccc-431b-bff6-3c03b1dc9558/o sd-data-55f50f2f-d558-4fc4-be4c-4378bf094232,ceph.data_uuid=rKjfY4-4ped-b1CW-X5xr-QY8C- RklX-Jdew6J,ceph.encrypted=0,ceph.journal_device=/dev/sda1,ceph.journal_uuid=791fa4c5-5 eb5-4eef-8ec4-814973059c10,ceph.osd_fsid=55f50f2f-d558-4fc4-be4c-4378bf094232,ceph.osd_ id=0,ceph.osdspec_affinity=,ceph.type=data,ceph.vdo=0";"/dev/ceph-1cbcc4b0-3ccc-431b-bf f6-3c03b1dc9558/osd-data-55f50f2f-d558-4fc4-be4c-4378bf094232";"osd-data-55f50f2f-d558- 4fc4-be4c-4378bf094232";"ceph-1cbcc4b0-3ccc-431b-bff6-3c03b1dc9558";"rKjfY4-4ped-b1CW-X 5xr-QY8C-RklX-Jdew6J";"19976962441216
[2022-05-24 15:12:28,571][ceph_volume.process][INFO ] Running command: /usr/sbin/blkid -t PARTUUID="791fa4c5-5eb5-4eef-8ec4-814973059c10" -o device
[2022-05-24 15:12:28,645][ceph_volume.process][INFO ] stdout /dev/sda1
[2022-05-24 15:12:28,651][ceph_volume.util.system][INFO ] /dev/ceph-1cbcc4b0-3ccc-431b -bff6-3c03b1dc9558/osd-data-55f50f2f-d558-4fc4-be4c-4378bf094232 was not found as mount ed
[2022-05-24 15:12:28,652][ceph_volume.process][INFO ] Running command: /usr/bin/mount -t xfs -o rw,noatime,inode64 /dev/ceph-1cbcc4b0-3ccc-431b-bff6-3c03b1dc9558/osd-data-55 f50f2f-d558-4fc4-be4c-4378bf094232 /var/lib/ceph/osd/ceph-0
[2022-05-24 15:12:28,698][ceph_volume.process][INFO ] stderr mount: /var/lib/ceph/osd/ceph-0: mount(2) system call failed: Operation not supported.
[2022-05-24 15:12:28,699][ceph_volume][ERROR ] exception caught by decorator
Traceback (most recent call last):
File "/usr/lib/python3/dist-packages/ceph_volume/decorators.py", line 59, in newfunc
return f(*a, **kw)
File "/usr/lib/python3/dist-packages/ceph_volume/main.py", line 152, in main
terminal.dispatch(self.mapper, subcommand_args)
File "/usr/lib/python3/dist-packages/ceph_volume/terminal.py", line 194, in dispatch
instance.main()
File "/usr/lib/python3/dist-packages/ceph_volume/devices/lvm/main.py", line 46, in ma in
terminal.dispatch(self.mapper, self.argv)
File "/usr/lib/python3/dist-packages/ceph_volume/terminal.py", line 194, in dispatch
instance.main()
File "/usr/lib/python3/dist-packages/ceph_volume/devices/lvm/activate.py", line 370, in main
self.activate(args)
File "/usr/lib/python3/dist-packages/ceph_volume/decorators.py", line 16, in is_root
return func(*a, **kw)
File "/usr/lib/python3/dist-packages/ceph_volume/devices/lvm/activate.py", line 296, in activate
activate_filestore(lvs, args.no_systemd)
File "/usr/lib/python3/dist-packages/ceph_volume/devices/lvm/activate.py", line 74, i n activate_filestore
prepare_utils.mount_osd(source, osd_id, is_vdo=is_vdo)
File "/usr/lib/python3/dist-packages/ceph_volume/util/prepare.py", line 320, in mount _osd
process.run(command)
File "/usr/lib/python3/dist-packages/ceph_volume/process.py", line 153, in run
raise RuntimeError(msg)
RuntimeError: command returned non-zero exit status: 32
[2022-05-24 17:26:14,559][ceph_volume.main][INFO ] Running command: ceph-volume --log- path /opt/petasan/log lvm prepare --filestore --data /dev/sda2 --journal /dev/sda1
[2022-05-24 17:26:14,567][ceph_volume.process][INFO ] Running command: /usr/bin/lsblk -plno KNAME,NAME,TYPE
[2022-05-24 17:26:14,572][ceph_volume.process][INFO ] stdout /dev/sda /dev/sda disk
[2022-05-24 17:26:14,572][ceph_volume.process][INFO ] stdout /dev/sda1 /dev/sda1 part
[2022-05-24 17:26:14,573][ceph_volume.process][INFO ] stdout /dev/sda2 /dev/sda2 part
[2022-05-24 17:26:14,573][ceph_volume.process][INFO ] stdout /dev/sdb /dev/sdb disk
[2022-05-24 17:26:14,573][ceph_volume.process][INFO ] stdout /dev/sdb1 /dev/sdb1 part
[2022-05-24 17:26:14,573][ceph_volume.process][INFO ] stdout /dev/sdb2 /dev/sdb2 part
[2022-05-24 17:26:14,573][ceph_volume.process][INFO ] stdout /dev/sdb3 /dev/sdb3 part
[2022-05-24 17:26:14,573][ceph_volume.process][INFO ] stdout /dev/sdb4 /dev/sdb4 part
[2022-05-24 17:26:14,573][ceph_volume.process][INFO ] stdout /dev/sdb5 /dev/sdb5 part
[2022-05-24 17:26:14,573][ceph_volume.process][INFO ] stdout /dev/sdc /dev/sdc disk
[2022-05-24 17:26:14,573][ceph_volume.process][INFO ] stdout /dev/dm-0 /dev/mapper/cep h--1cbcc4b0--3ccc--431b--bff6--3c03b1dc9558-osd--data--55f50f2f--d558--4fc4--be4c--4378 bf094232 lvm
[2022-05-24 17:26:14,581][ceph_volume.process][INFO ] Running command: /usr/sbin/lvs - -noheadings --readonly --separator=";" -a --units=b --nosuffix -S lv_path=/dev/sda2 -o lv_tags,lv_path,lv_name,vg_name,lv_uuid,lv_size
[2022-05-24 17:26:14,651][ceph_volume.process][INFO ] Running command: /usr/bin/lsblk --nodeps -P -o NAME,KNAME,MAJ:MIN,FSTYPE,MOUNTPOINT,LABEL,UUID,RO,RM,MODEL,SIZE,STATE,O WNER,GROUP,MODE,ALIGNMENT,PHY-SEC,LOG-SEC,ROTA,SCHED,TYPE,DISC-ALN,DISC-GRAN,DISC-MAX,D ISC-ZERO,PKNAME,PARTLABEL /dev/sda2
[2022-05-24 17:26:14,656][ceph_volume.process][INFO ] stdout NAME="sda2" KNAME="sda2" MAJ:MIN="8:2" FSTYPE="LVM2_member" MOUNTPOINT="" LABEL="" UUID="3jkQAm-10vw-K46l-JeFt-H GMa-QNtH-eGjTJU" RO="0" RM="0" MODEL="" SIZE="18.2T" STATE="" OWNER="root" GROUP="disk" MODE="brw-rw----" ALIGNMENT="0" PHY-SEC="4096" LOG-SEC="512" ROTA="1" SCHED="cfq" TYPE ="part" DISC-ALN="0" DISC-GRAN="0B" DISC-MAX="0B" DISC-ZERO="0" PKNAME="sda" PARTLABEL= "ceph-data"
[2022-05-24 17:26:14,656][ceph_volume.process][INFO ] Running command: /usr/sbin/blkid -p /dev/sda2
[2022-05-24 17:26:14,659][ceph_volume.process][INFO ] stdout /dev/sda2: UUID="3jkQAm-1 0vw-K46l-JeFt-HGMa-QNtH-eGjTJU" VERSION="LVM2 001" TYPE="LVM2_member" USAGE="raid" PART _ENTRY_SCHEME="gpt" PART_ENTRY_NAME="ceph-data" PART_ENTRY_UUID="20a881d8-44c1-4d68-8ab 0-e3e8a469baad" PART_ENTRY_TYPE="0fc63daf-8483-4772-8e79-3d69d8477de4" PART_ENTRY_NUMBE R="2" PART_ENTRY_OFFSET="41945088" PART_ENTRY_SIZE="39017510879" PART_ENTRY_DISK="8:0"
[2022-05-24 17:26:14,659][ceph_volume.process][INFO ] Running command: /usr/sbin/pvs - -noheadings --readonly --units=b --nosuffix --separator=";" -o vg_name,pv_count,lv_coun t,vg_attr,vg_extent_count,vg_free_count,vg_extent_size /dev/sda2
[2022-05-24 17:26:14,726][ceph_volume.process][INFO ] stdout ceph-1cbcc4b0-3ccc-431b-b ff6-3c03b1dc9558";"1";"1";"wz--n-";"4762879";"0";"4194304
[2022-05-24 17:26:14,727][ceph_volume.process][INFO ] Running command: /usr/sbin/pvs - -noheadings --readonly --separator=";" -a --units=b --nosuffix -o lv_tags,lv_path,lv_na me,vg_name,lv_uuid,lv_size /dev/sda2
[2022-05-24 17:26:14,782][ceph_volume.process][INFO ] stdout ceph.cephx_lockbox_secret =,ceph.cluster_fsid=e51bc678-0e20-462a-b537-493485d153ee,ceph.cluster_name=ceph,ceph.cr ush_device_class=None,ceph.data_device=/dev/ceph-1cbcc4b0-3ccc-431b-bff6-3c03b1dc9558/o sd-data-55f50f2f-d558-4fc4-be4c-4378bf094232,ceph.data_uuid=rKjfY4-4ped-b1CW-X5xr-QY8C- RklX-Jdew6J,ceph.encrypted=0,ceph.journal_device=/dev/sda1,ceph.journal_uuid=791fa4c5-5 eb5-4eef-8ec4-814973059c10,ceph.osd_fsid=55f50f2f-d558-4fc4-be4c-4378bf094232,ceph.osd_ id=0,ceph.osdspec_affinity=,ceph.type=data,ceph.vdo=0";"/dev/ceph-1cbcc4b0-3ccc-431b-bf f6-3c03b1dc9558/osd-data-55f50f2f-d558-4fc4-be4c-4378bf094232";"osd-data-55f50f2f-d558- 4fc4-be4c-4378bf094232";"ceph-1cbcc4b0-3ccc-431b-bff6-3c03b1dc9558";"rKjfY4-4ped-b1CW-X 5xr-QY8C-RklX-Jdew6J";"19976962441216
[2022-05-24 17:26:14,783][ceph_volume.process][INFO ] Running command: /usr/bin/ceph-b luestore-tool show-label --dev /dev/sda2
[2022-05-24 17:26:14,876][ceph_volume.process][INFO ] stderr unable to read label for /dev/sda2: (2) No such file or directory
[2022-05-24 17:26:14,876][ceph_volume.process][INFO ] Running command: /usr/bin/udevad m info --query=property /dev/sda2
[2022-05-24 17:26:14,880][ceph_volume.process][INFO ] stdout DEVPATH=/devices/pci0000: 00/0000:00:03.2/0000:06:00.0/host0/target0:2:0/0:2:0:0/block/sda/sda2
[2022-05-24 17:26:14,881][ceph_volume.process][INFO ] stdout DEVNAME=/dev/sda2
[2022-05-24 17:26:14,881][ceph_volume.process][INFO ] stdout DEVTYPE=partition
[2022-05-24 17:26:14,881][ceph_volume.process][INFO ] stdout PARTN=2
[2022-05-24 17:26:14,881][ceph_volume.process][INFO ] stdout PARTNAME=ceph-data
[2022-05-24 17:26:14,881][ceph_volume.process][INFO ] stdout MAJOR=8
[2022-05-24 17:26:14,881][ceph_volume.process][INFO ] stdout MINOR=2
[2022-05-24 17:26:14,881][ceph_volume.process][INFO ] stdout SUBSYSTEM=block
[2022-05-24 17:26:14,881][ceph_volume.process][INFO ] stdout USEC_INITIALIZED=74541882 59
[2022-05-24 17:26:14,881][ceph_volume.process][INFO ] stdout SCSI_TPGS=0
[2022-05-24 17:26:14,881][ceph_volume.process][INFO ] stdout SCSI_TYPE=disk
[2022-05-24 17:26:14,881][ceph_volume.process][INFO ] stdout SCSI_VENDOR=LSI
[2022-05-24 17:26:14,881][ceph_volume.process][INFO ] stdout SCSI_VENDOR_ENC=LSI\x20\x 20\x20\x20\x20
[2022-05-24 17:26:14,881][ceph_volume.process][INFO ] stdout SCSI_MODEL=MR9266-4i
[2022-05-24 17:26:14,882][ceph_volume.process][INFO ] stdout SCSI_MODEL_ENC=MR9266-4i\ x20\x20\x20\x20\x20\x20\x20
[2022-05-24 17:26:14,882][ceph_volume.process][INFO ] stdout SCSI_REVISION=3.27
[2022-05-24 17:26:14,882][ceph_volume.process][INFO ] stdout SCSI_IDENT_SERIAL=00e9876 7098a591e2a40d8ec06b00506
[2022-05-24 17:26:14,882][ceph_volume.process][INFO ] stdout SCSI_IDENT_LUN_NAA_REGEXT =600605b006ecd8402a1e598a096787e9
[2022-05-24 17:26:14,882][ceph_volume.process][INFO ] stdout ID_SCSI=1
[2022-05-24 17:26:14,882][ceph_volume.process][INFO ] stdout ID_VENDOR=LSI
[2022-05-24 17:26:14,882][ceph_volume.process][INFO ] stdout ID_VENDOR_ENC=LSI\x20\x20 \x20\x20\x20
[2022-05-24 17:26:14,882][ceph_volume.process][INFO ] stdout ID_MODEL=MR9266-4i
[2022-05-24 17:26:14,882][ceph_volume.process][INFO ] stdout ID_MODEL_ENC=MR9266-4i\x2 0\x20\x20\x20\x20\x20\x20
[2022-05-24 17:26:14,882][ceph_volume.process][INFO ] stdout ID_REVISION=3.27
[2022-05-24 17:26:14,882][ceph_volume.process][INFO ] stdout ID_TYPE=disk
[2022-05-24 17:26:14,882][ceph_volume.process][INFO ] stdout ID_WWN=0x600605b006ecd840 2a1e598a096787e9
[2022-05-24 17:26:14,882][ceph_volume.process][INFO ] stdout ID_WWN_WITH_EXTENSION=0x6 00605b006ecd8402a1e598a096787e9
[2022-05-24 17:26:14,882][ceph_volume.process][INFO ] stdout ID_BUS=scsi
[2022-05-24 17:26:14,883][ceph_volume.process][INFO ] stdout ID_SERIAL=3600605b006ecd8 402a1e598a096787e9
[2022-05-24 17:26:14,883][ceph_volume.process][INFO ] stdout ID_SERIAL_SHORT=600605b00 6ecd8402a1e598a096787e9
[2022-05-24 17:26:14,883][ceph_volume.process][INFO ] stdout DM_MULTIPATH_DEVICE_PATH= 0
[2022-05-24 17:26:14,883][ceph_volume.process][INFO ] stdout ID_SCSI_INQUIRY=1
[2022-05-24 17:26:14,883][ceph_volume.process][INFO ] stdout ID_PATH=pci-0000:06:00.0- scsi-0:2:0:0
[2022-05-24 17:26:14,883][ceph_volume.process][INFO ] stdout ID_PATH_TAG=pci-0000_06_0 0_0-scsi-0_2_0_0
[2022-05-24 17:26:14,883][ceph_volume.process][INFO ] stdout ID_PART_TABLE_UUID=c0e615 88-c7fb-476d-880d-325faa63557b
[2022-05-24 17:26:14,883][ceph_volume.process][INFO ] stdout ID_PART_TABLE_TYPE=gpt
[2022-05-24 17:26:14,883][ceph_volume.process][INFO ] stdout ID_FS_UUID=3jkQAm-10vw-K4 6l-JeFt-HGMa-QNtH-eGjTJU
[2022-05-24 17:26:14,883][ceph_volume.process][INFO ] stdout ID_FS_UUID_ENC=3jkQAm-10v w-K46l-JeFt-HGMa-QNtH-eGjTJU
[2022-05-24 17:26:14,883][ceph_volume.process][INFO ] stdout ID_FS_VERSION=LVM2 001
[2022-05-24 17:26:14,883][ceph_volume.process][INFO ] stdout ID_FS_TYPE=LVM2_member
[2022-05-24 17:26:14,883][ceph_volume.process][INFO ] stdout ID_FS_USAGE=raid
[2022-05-24 17:26:14,884][ceph_volume.process][INFO ] stdout ID_PART_ENTRY_SCHEME=gpt
[2022-05-24 17:26:14,884][ceph_volume.process][INFO ] stdout ID_PART_ENTRY_NAME=ceph-d ata
[2022-05-24 17:26:14,884][ceph_volume.process][INFO ] stdout ID_PART_ENTRY_UUID=20a881 d8-44c1-4d68-8ab0-e3e8a469baad
[2022-05-24 17:26:14,884][ceph_volume.process][INFO ] stdout ID_PART_ENTRY_TYPE=0fc63d af-8483-4772-8e79-3d69d8477de4
[2022-05-24 17:26:14,884][ceph_volume.process][INFO ] stdout ID_PART_ENTRY_NUMBER=2
[2022-05-24 17:26:14,884][ceph_volume.process][INFO ] stdout ID_PART_ENTRY_OFFSET=4194 5088
[2022-05-24 17:26:14,884][ceph_volume.process][INFO ] stdout ID_PART_ENTRY_SIZE=390175 10879
[2022-05-24 17:26:14,884][ceph_volume.process][INFO ] stdout ID_PART_ENTRY_DISK=8:0
[2022-05-24 17:26:14,884][ceph_volume.process][INFO ] stdout SYSTEMD_READY=1
[2022-05-24 17:26:14,884][ceph_volume.process][INFO ] stdout SYSTEMD_ALIAS=/dev/block/ 8:2
[2022-05-24 17:26:14,884][ceph_volume.process][INFO ] stdout SYSTEMD_WANTS=lvm2-pvscan @8:2.service
[2022-05-24 17:26:14,884][ceph_volume.process][INFO ] stdout DEVLINKS=/dev/disk/by-id/ scsi-3600605b006ecd8402a1e598a096787e9-part2 /dev/disk/by-partlabel/ceph-data /dev/disk /by-partuuid/20a881d8-44c1-4d68-8ab0-e3e8a469baad /dev/disk/by-id/lvm-pv-uuid-3jkQAm-10 vw-K46l-JeFt-HGMa-QNtH-eGjTJU /dev/disk/by-id/scsi-SLSI_MR9266-4i_00e98767098a591e2a40d 8ec06b00506-part2 /dev/disk/by-id/wwn-0x600605b006ecd8402a1e598a096787e9-part2 /dev/dis k/by-path/pci-0000:06:00.0-scsi-0:2:0:0-part2
[2022-05-24 17:26:14,884][ceph_volume.process][INFO ] stdout TAGS=:systemd:
[2022-05-24 17:26:14,885][ceph_volume.api.lvm][WARNING] device is not part of ceph: Non e
[2022-05-24 17:26:14,886][ceph_volume.process][INFO ] Running command: /usr/bin/ceph-a uthtool --gen-print-key
[2022-05-24 17:26:14,908][ceph_volume.process][INFO ] stdout AQAmB41i9JUNNhAAMGbvb2oD/ 87cxmVAwTUtOg==
[2022-05-24 17:26:14,909][ceph_volume.process][INFO ] Running command: /usr/bin/ceph - -cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.ke yring -i - osd new 85f574b4-282f-428b-9324-3a5feb28a4e3
[2022-05-24 17:26:15,464][ceph_volume.process][INFO ] stdout 0
[2022-05-24 17:26:15,464][ceph_volume.process][INFO ] stderr 2022-05-24T17:26:15.033+0 100 7f212424b700 -1 auth: unable to find a keyring on /etc/ceph/ceph.client.bootstrap-o sd.keyring,/etc/ceph/ceph.keyring,/etc/ceph/keyring,/etc/ceph/keyring.bin,: (2) No such file or directory
[2022-05-24 17:26:15,465][ceph_volume.process][INFO ] stderr 2022-05-24T17:26:15.033+0 100 7f212424b700 -1 AuthRegistry(0x7f211c0595f0) no keyring found at /etc/ceph/ceph.cli ent.bootstrap-osd.keyring,/etc/ceph/ceph.keyring,/etc/ceph/keyring,/etc/ceph/keyring.bi n,, disabling cephx
[2022-05-24 17:26:15,465][ceph_volume.process][INFO ] Running command: /usr/bin/lsblk --nodeps -P -o NAME,KNAME,MAJ:MIN,FSTYPE,MOUNTPOINT,LABEL,UUID,RO,RM,MODEL,SIZE,STATE,O WNER,GROUP,MODE,ALIGNMENT,PHY-SEC,LOG-SEC,ROTA,SCHED,TYPE,DISC-ALN,DISC-GRAN,DISC-MAX,D ISC-ZERO,PKNAME,PARTLABEL /dev/sda1
[2022-05-24 17:26:15,469][ceph_volume.process][INFO ] stdout NAME="sda1" KNAME="sda1" MAJ:MIN="8:1" FSTYPE="" MOUNTPOINT="" LABEL="" UUID="" RO="0" RM="0" MODEL="" SIZE="20G " STATE="" OWNER="root" GROUP="disk" MODE="brw-rw----" ALIGNMENT="0" PHY-SEC="4096" LOG -SEC="512" ROTA="1" SCHED="cfq" TYPE="part" DISC-ALN="0" DISC-GRAN="0B" DISC-MAX="0B" D ISC-ZERO="0" PKNAME="sda" PARTLABEL="ceph-journal"
[2022-05-24 17:26:15,470][ceph_volume.process][INFO ] Running command: /usr/sbin/blkid -s PARTUUID -o value /dev/sda1
[2022-05-24 17:26:15,503][ceph_volume.process][INFO ] stdout bb5848bf-2734-4648-bc97-9 097a2f67a78
[2022-05-24 17:26:15,504][ceph_volume.process][INFO ] Running command: /usr/bin/lsblk --nodeps -P -o NAME,KNAME,MAJ:MIN,FSTYPE,MOUNTPOINT,LABEL,UUID,RO,RM,MODEL,SIZE,STATE,O WNER,GROUP,MODE,ALIGNMENT,PHY-SEC,LOG-SEC,ROTA,SCHED,TYPE,DISC-ALN,DISC-GRAN,DISC-MAX,D ISC-ZERO,PKNAME,PARTLABEL /dev/sda2
[2022-05-24 17:26:15,508][ceph_volume.process][INFO ] stdout NAME="sda2" KNAME="sda2" MAJ:MIN="8:2" FSTYPE="LVM2_member" MOUNTPOINT="" LABEL="" UUID="3jkQAm-10vw-K46l-JeFt-H GMa-QNtH-eGjTJU" RO="0" RM="0" MODEL="" SIZE="18.2T" STATE="" OWNER="root" GROUP="disk" MODE="brw-rw----" ALIGNMENT="0" PHY-SEC="4096" LOG-SEC="512" ROTA="1" SCHED="cfq" TYPE ="part" DISC-ALN="0" DISC-GRAN="0B" DISC-MAX="0B" DISC-ZERO="0" PKNAME="sda" PARTLABEL= "ceph-data"
[2022-05-24 17:26:15,508][ceph_volume.devices.lvm.prepare][DEBUG ] data device size: 0. 00 B
[2022-05-24 17:26:15,509][ceph_volume.process][INFO ] Running command: /usr/sbin/pvs - -noheadings --readonly --units=b --nosuffix --separator=";" -o vg_name,pv_count,lv_coun t,vg_attr,vg_extent_count,vg_free_count,vg_extent_size /dev/sda2
[2022-05-24 17:26:15,570][ceph_volume.process][INFO ] stdout ceph-1cbcc4b0-3ccc-431b-b ff6-3c03b1dc9558";"1";"1";"wz--n-";"4762879";"0";"4194304
[2022-05-24 17:26:15,571][ceph_volume.api.lvm][DEBUG ] slots was passed: 1 -> 4762879
[2022-05-24 17:26:15,571][ceph_volume.process][INFO ] Running command: /usr/sbin/lvcre ate --yes -l 4762879 -n osd-data-85f574b4-282f-428b-9324-3a5feb28a4e3 ceph-1cbcc4b0-3cc c-431b-bff6-3c03b1dc9558
[2022-05-24 17:26:15,591][ceph_volume.process][INFO ] stderr Volume group "ceph-1cbcc4b0-3ccc-431b-bff6-3c03b1dc9558" has insufficient free space (0 extents): 4762879 required.
[2022-05-24 17:26:15,634][ceph_volume.devices.lvm.prepare][ERROR ] lvm prepare was unable to complete
Traceback (most recent call last):
File "/usr/lib/python3/dist-packages/ceph_volume/devices/lvm/prepare.py", line 252, i n safe_prepare
self.prepare()
File "/usr/lib/python3/dist-packages/ceph_volume/decorators.py", line 16, in is_root
return func(*a, **kw)
File "/usr/lib/python3/dist-packages/ceph_volume/devices/lvm/prepare.py", line 334, i n prepare
data_lv = self.prepare_data_device('data', osd_fsid)
File "/usr/lib/python3/dist-packages/ceph_volume/devices/lvm/prepare.py", line 218, i n prepare_data_device
return api.create_lv(
File "/usr/lib/python3/dist-packages/ceph_volume/api/lvm.py", line 971, in create_lv
process.run(command)
File "/usr/lib/python3/dist-packages/ceph_volume/process.py", line 153, in run
raise RuntimeError(msg)
RuntimeError: command returned non-zero exit status: 5
[2022-05-24 17:26:15,636][ceph_volume.devices.lvm.prepare][INFO ] will rollback OSD ID creation
[2022-05-24 17:26:15,636][ceph_volume.process][INFO ] Running command: /usr/bin/ceph - -cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.ke yring osd purge-new osd.0 --yes-i-really-mean-it
[2022-05-24 17:26:15,777][ceph_volume.process][INFO ] stderr 2022-05-24T17:26:15.761+0 100 7f5b2d823700 -1 auth: unable to find a keyring on /etc/ceph/ceph.client.bootstrap-o sd.keyring,/etc/ceph/ceph.keyring,/etc/ceph/keyring,/etc/ceph/keyring.bin,: (2) No such file or directory
2022-05-24T17:26:15.761+0100 7f5b2d823700 -1 AuthRegistry(0x7f5b280595f0) no keyring fo und at /etc/ceph/ceph.client.bootstrap-osd.keyring,/etc/ceph/ceph.keyring,/etc/ceph/key ring,/etc/ceph/keyring.bin,, disabling cephx
[2022-05-24 17:26:16,187][ceph_volume.process][INFO ] stderr purged osd.0
[2022-05-24 17:26:16,201][ceph_volume][ERROR ] exception caught by decorator
Traceback (most recent call last):
File "/usr/lib/python3/dist-packages/ceph_volume/decorators.py", line 59, in newfunc
return f(*a, **kw)
File "/usr/lib/python3/dist-packages/ceph_volume/main.py", line 152, in main
terminal.dispatch(self.mapper, subcommand_args)
File "/usr/lib/python3/dist-packages/ceph_volume/terminal.py", line 194, in dispatch
instance.main()
File "/usr/lib/python3/dist-packages/ceph_volume/devices/lvm/main.py", line 46, in ma in
terminal.dispatch(self.mapper, self.argv)
File "/usr/lib/python3/dist-packages/ceph_volume/terminal.py", line 194, in dispatch
instance.main()
File "/usr/lib/python3/dist-packages/ceph_volume/devices/lvm/prepare.py", line 441, i n main
self.safe_prepare()
File "/usr/lib/python3/dist-packages/ceph_volume/devices/lvm/prepare.py", line 252, i n safe_prepare
self.prepare()
File "/usr/lib/python3/dist-packages/ceph_volume/decorators.py", line 16, in is_root
return func(*a, **kw)
File "/usr/lib/python3/dist-packages/ceph_volume/devices/lvm/prepare.py", line 334, i n prepare
data_lv = self.prepare_data_device('data', osd_fsid)
File "/usr/lib/python3/dist-packages/ceph_volume/devices/lvm/prepare.py", line 218, i n prepare_data_device
return api.create_lv(
File "/usr/lib/python3/dist-packages/ceph_volume/api/lvm.py", line 971, in create_lv
process.run(command)
File "/usr/lib/python3/dist-packages/ceph_volume/process.py", line 153, in run
raise RuntimeError(msg)
RuntimeError: command returned non-zero exit status: 5
root@ps-node1:~# tail /opt/petasan/log/ceph-volume.log
return func(*a, **kw)
File "/usr/lib/python3/dist-packages/ceph_volume/devices/lvm/prepare.py", line 334, in prepare
data_lv = self.prepare_data_device('data', osd_fsid)
File "/usr/lib/python3/dist-packages/ceph_volume/devices/lvm/prepare.py", line 218, in prepare_data_device
return api.create_lv(
File "/usr/lib/python3/dist-packages/ceph_volume/api/lvm.py", line 971, in create_lv
process.run(command)
File "/usr/lib/python3/dist-packages/ceph_volume/process.py", line 153, in run
raise RuntimeError(msg)
RuntimeError: command returned non-zero exit status: 5
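Reading the log above: the lsblk output reports /dev/sda2 as FSTYPE="LVM2_member", and pvs finds an existing volume group ceph-1cbcc4b0-3ccc-431b-bff6-3c03b1dc9558 with 0 free extents, so the lvcreate for the new OSD data LV has nowhere to allocate. A quick way to confirm what is left over from the earlier attempt, as a minimal sketch assuming only the standard LVM tools on the node, would be:
# list any leftover ceph-* volume groups and their logical volumes
vgs
lvs -o lv_name,vg_name,lv_size
# show which partitions the physical volumes sit on and how much is free
pvs -o pv_name,vg_name,pv_size,pv_free
If a stale ceph-* volume group from a previous failed run still shows up on /dev/sda2, it has to be removed before a retry can succeed.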
admin
2,930 Posts
May 25, 2022, 4:48 pm
Can you re-install from the ISO, making sure the default engine is set to "Bluestore" and not "Filestore",
then retry. If you still have errors, post both logs.
setantatx
7 Posts
May 26, 2022, 9:32 am
Thanks, I'll try to install again with the filestore engine this time and I'll come back to you then.
Last edited on May 26, 2022, 1:03 pm by setantatx · #5
setantatx
7 Posts
May 30, 2022, 3:31 pm
Hi
I have just tried to install your recent release, but unfortunately it's the same thing: it keeps failing on creating the OSDs.
Error List
Disk sda prepare failure on node ps-node3.
Disk sda prepare failure on node ps-node1.
Disk sda prepare failure on node ps-node2.
Is there anything else that I could try?
Thanks
admin
2,930 Posts
May 30, 2022, 6:44 pm
How large are the disks you want to add as OSDs?
Were the disks used before on FreeBSD/ZFS?
Can you try to manually clean the disks with (replace sdXX with the drive):
wipefs --all /dev/sdXX
dd if=/dev/zero of=/dev/sdXX bs=1M count=20 oflag=direct,dsync
Do you have other disks you can try? Do all disks fail to add, or only some specific disks?
Are you adding the OSDs with cache/journal? If so, can you test without them?
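One thing wipefs and dd do not undo is an LVM volume group that is still registered on the partition; the earlier log shows exactly such a leftover ceph-* VG with no free extents. A fuller cleanup sketch (not part of the reply above, just the standard LVM teardown, assuming ceph-XXXX stands for whatever name vgs reports and sdXX2 is the partition that held the PV, like sda2 in the log):
# deactivate and remove the stale volume group, then its physical volume
vgchange -an ceph-XXXX
vgremove -f ceph-XXXX
pvremove -ff /dev/sdXX2
# then wipe signatures and the start of the disk as above
wipefs --all /dev/sdXX
dd if=/dev/zero of=/dev/sdXX bs=1M count=20 oflag=direct,dsync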
Last edited on May 30, 2022, 6:47 pm by admin · #7
setantatx
7 Posts
May 31, 2022, 9:28 am
All the drives were re-striped a few times; all 3 nodes have 12 drives each, set up as RAID 5 by an LSI MR9266-4i controller. I did try removing the partitions and filesystems a few times. I also have an SSD there which I was hoping to use as my cache drive, and adding that seems to be working fine. Currently I'm trying to add the OSDs without cache and journal, since I think I've seen a post saying that adding with those two may fail.
Could you explain why, when I try to add an OSD drive under the node configuration, at the end of the process the drive status is changed to mounted, but under Ubuntu it actually isn't?
Also, is the manual process of creating an OSD explained anywhere?
thanks
admin
2,930 Posts
May 31, 2022, 10:51 am
My feeling is it relates to the RAID 5 setup; the ceph-volume.log has an error: "..insufficient free space (0 extents): 4762879 required". Can you recheck the LSI setup? Also, it is better not to use RAID with Ceph and to give all the disks to the system individually to create separate OSDs; it is better for performance, and HA is built in. As an exception to that rule, if using HDDs it is sometimes better to configure RAID 0 and create single-disk volumes (each RAID 0 is a single disk) so you can use the controller writeback cache (if battery backed). But at this stage I recommend you disable RAID altogether and set up the drives in JBOD mode, at least until you solve the issue.
Another thing you can do is deploy the cluster without selecting to add OSDs and without any pools. Once built, you can use the Management UI and add disks individually; it may make it easier to look at the logs as you add a single disk.
Adding cache and journal surely works, we do a lot of testing ourselves. In some cases, if you do not have enough RAM, your cache could fail (we put up a notice that you need 2% of the cache size in RAM). But again, if you have disk issues it is better to use plain OSDs at this stage so it is easier to identify the problem.
Can't say why the failed OSD attempt shows the disks as mounted; probably it was left that way in the middle of the failed OSD creation attempt.
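As a worked example of the RAM notice, 2% of a 200 GB cache partition is roughly 4 GB of RAM to set aside. When adding a single disk from the Management UI, one way to watch both logs at once is a sketch like the following (the PetaSAN.log file name next to ceph-volume.log under /opt/petasan/log is an assumption here; substitute whatever log file the node actually has):
# follow the PetaSAN and ceph-volume logs while adding one OSD from the UI
tail -f /opt/petasan/log/PetaSAN.log /opt/petasan/log/ceph-volume.log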
Last edited on May 31, 2022, 10:54 am by admin · #9
setantatx
7 Posts
May 31, 2022, 3:22 pm
OK, so I did create 12 virtual RAID 0 drives; unfortunately it's still failing to create the OSDs. This was a new installation, and I selected rbd instead of cephfs during the initial configuration. It's interesting that the journal partition, on this same controller, can be created without any problem.
31/05/2022 12:59:23 INFO Start cleaning : sdb
31/05/2022 12:59:23 INFO Executing : wipefs --all /dev/sdb
31/05/2022 12:59:23 INFO Executing : dd if=/dev/zero of=/dev/sdb bs=1M count=20 oflag=direct,dsync >/dev/null 2>&1
31/05/2022 12:59:23 INFO Executing : parted -s /dev/sdb mklabel gpt
31/05/2022 12:59:24 INFO Executing : partprobe /dev/sdb
31/05/2022 12:59:27 INFO User didn't select a journal for disk sdb, so the journal will be on the same disk.
31/05/2022 12:59:27 INFO Creating journal partition num 1 size 20480MB on /dev/sdb
31/05/2022 12:59:28 INFO Calling partprobe on sdb device
31/05/2022 12:59:28 INFO Executing partprobe /dev/sdb
31/05/2022 12:59:28 INFO Calling udevadm on sdb device
31/05/2022 12:59:28 INFO Executing udevadm settle --timeout 30
31/05/2022 12:59:31 INFO User didn't select a cache for disk sdb.
31/05/2022 12:59:31 INFO Start prepare filestore OSD : sdb
31/05/2022 12:59:31 INFO Creating data partition num 2 size 1907200MB on /dev/sdb
31/05/2022 12:59:32 INFO Calling partprobe on sdb device
31/05/2022 12:59:32 INFO Executing partprobe /dev/sdb
31/05/2022 12:59:33 INFO Calling udevadm on sdb device
31/05/2022 12:59:33 INFO Executing udevadm settle --timeout 30
31/05/2022 12:59:36 INFO Starting : ceph-volume --log-path /opt/petasan/log lvm prepare --filestore --data /dev/sdb2 --journal /dev/sdb1
31/05/2022 12:59:41 ERROR Error executing : ceph-volume --log-path /opt/petasan/log lvm prepare --filestore --data /dev/sdb2 --journal /dev/sdb1
[2022-05-31 12:59:38,206][ceph_volume.process][INFO ] Running command: /usr/bin/ceph-authtool --gen-print-key
[2022-05-31 12:59:38,227][ceph_volume.process][INFO ] stdout AQAqA5ZizBd3DRAAJgfBcJdVFDP+Q5/RrSA3ww==
[2022-05-31 12:59:38,228][ceph_volume.process][INFO ] Running command: /usr/sbin/mkfs -t xfs -f -i size=2048 /dev/ceph-6354cda5-8eb9-45c7-b5fd-832cf25db520/osd-data-ff06d179-238c-4522-8732-02dd77906499
[2022-05-31 12:59:40,624][ceph_volume.process][INFO ] stdout meta-data=/dev/ceph-6354cda5-8eb9-45c7-b5fd-832cf25db520/osd-data-ff06d179-238c-4522-8732-02dd77906499 isize=2048 agcount=4, agsize=120749824 blks
= sectsz=4096 attr=2, projid32bit=1
= crc=1 finobt=1, sparse=1, rmapbt=0
= reflink=1
data = bsize=4096 blocks=482999296, imaxpct=5
= sunit=0 swidth=0 blks
naming =version 2 bsize=4096 ascii-ci=0, ftype=1
log =internal log bsize=4096 blocks=235839, version=2
= sectsz=4096 sunit=1 blks, lazy-count=1
realtime =none extsz=4096 blocks=0, rtextents=0
[2022-05-31 12:59:40,624][ceph_volume.process][INFO ] Running command: /usr/bin/mount -t xfs -o rw,noatime,inode64 /dev/ceph-6354cda5-8eb9-45c7-b5fd-832cf25db520/osd-data-ff06d179-238c-4522-8732-02dd77906499 /var/lib/ceph/osd/ceph-0
[2022-05-31 12:59:40,628][ceph_volume.process][INFO ] stderr mount: /var/lib/ceph/osd/ceph-0: mount(2) system call failed: Operation not supported.
[2022-05-31 12:59:40,629][ceph_volume.devices.lvm.prepare][ERROR ] lvm prepare was unable to complete
Traceback (most recent call last):
File "/usr/lib/python3/dist-packages/ceph_volume/devices/lvm/prepare.py", line 252, in safe_prepare
self.prepare()
File "/usr/lib/python3/dist-packages/ceph_volume/decorators.py", line 16, in is_root
return func(*a, **kw)
File "/usr/lib/python3/dist-packages/ceph_volume/devices/lvm/prepare.py", line 346, in prepare
prepare_filestore(
File "/usr/lib/python3/dist-packages/ceph_volume/devices/lvm/prepare.py", line 65, in prepare_filestore
prepare_utils.mount_osd(device, osd_id, is_vdo=is_vdo)
File "/usr/lib/python3/dist-packages/ceph_volume/util/prepare.py", line 320, in mount_osd
process.run(command)
File "/usr/lib/python3/dist-packages/ceph_volume/process.py", line 153, in run
raise RuntimeError(msg)
RuntimeError: command returned non-zero exit status: 32
[2022-05-31 12:59:40,630][ceph_volume.devices.lvm.prepare][INFO ] will rollback OSD ID creation
[2022-05-31 12:59:40,630][ceph_volume.process][INFO ] Running command: /usr/bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring osd purge-new osd.0 --yes-i-really-mean-it
[2022-05-31 12:59:40,770][ceph_volume.process][INFO ] stderr 2022-05-31T12:59:40.759+0100 7fec8754a700 -1 auth: unable to find a keyring on /etc/ceph/ceph.client.bootstrap-osd.keyring,/etc/ceph/ceph.keyring,/etc/ceph/keyring,/etc/ceph/keyring.bin,: (2) No such file or directory
2022-05-31T12:59:40.759+0100 7fec8754a700 -1 AuthRegistry(0x7fec800595f0) no keyring found at /etc/ceph/ceph.client.bootstrap-osd.keyring,/etc/ceph/ceph.keyring,/etc/ceph/keyring,/etc/ceph/keyring.bin,, disabling cephx
[2022-05-31 12:59:41,208][ceph_volume.process][INFO ] stderr purged osd.0
[2022-05-31 12:59:41,221][ceph_volume][ERROR ] exception caught by decorator
Traceback (most recent call last):
File "/usr/lib/python3/dist-packages/ceph_volume/decorators.py", line 59, in newfunc
return f(*a, **kw)
File "/usr/lib/python3/dist-packages/ceph_volume/main.py", line 152, in main
terminal.dispatch(self.mapper, subcommand_args)
File "/usr/lib/python3/dist-packages/ceph_volume/terminal.py", line 194, in dispatch
instance.main()
File "/usr/lib/python3/dist-packages/ceph_volume/devices/lvm/main.py", line 46, in main
terminal.dispatch(self.mapper, self.argv)
File "/usr/lib/python3/dist-packages/ceph_volume/terminal.py", line 194, in dispatch
instance.main()
File "/usr/lib/python3/dist-packages/ceph_volume/devices/lvm/prepare.py", line 441, in main
self.safe_prepare()
File "/usr/lib/python3/dist-packages/ceph_volume/devices/lvm/prepare.py", line 252, in safe_prepare
self.prepare()
File "/usr/lib/python3/dist-packages/ceph_volume/decorators.py", line 16, in is_root
return func(*a, **kw)
File "/usr/lib/python3/dist-packages/ceph_volume/devices/lvm/prepare.py", line 346, in prepare
prepare_filestore(
File "/usr/lib/python3/dist-packages/ceph_volume/devices/lvm/prepare.py", line 65, in prepare_filestore
prepare_utils.mount_osd(device, osd_id, is_vdo=is_vdo)
File "/usr/lib/python3/dist-packages/ceph_volume/util/prepare.py", line 320, in mount_osd
process.run(command)
File "/usr/lib/python3/dist-packages/ceph_volume/process.py", line 153, in run
raise RuntimeError(msg)
RuntimeError: command returned non-zero exit status: 32
I also don't think it is controller related, since an OSD install attempt on an SSD drive connected directly to the board via the Intel controller (this is a Supermicro board) also gives an error.
[2022-05-31 13:15:55,980][ceph_volume.main][INFO ] Running command: ceph-volume --log-path /opt/petasan/log lvm prepare --filestore --data /dev/sdn2 --journal /dev/sdn1
[2022-05-31 13:15:55,983][ceph_volume.process][INFO ] Running command: /usr/bin/lsblk -plno KNAME,NAME,TYPE
[2022-05-31 13:15:55,988][ceph_volume.process][INFO ] stdout /dev/sda /dev/sda disk
[2022-05-31 13:15:55,988][ceph_volume.process][INFO ] stdout /dev/sda1 /dev/sda1 part
[2022-05-31 13:15:55,988][ceph_volume.process][INFO ] stdout /dev/sdb /dev/sdb disk
[2022-05-31 13:15:55,988][ceph_volume.process][INFO ] stdout /dev/sdb1 /dev/sdb1 part
[2022-05-31 13:15:55,988][ceph_volume.process][INFO ] stdout /dev/sdb2 /dev/sdb2 part
[2022-05-31 13:15:55,989][ceph_volume.process][INFO ] stdout /dev/sdc /dev/sdc disk
[2022-05-31 13:15:55,989][ceph_volume.process][INFO ] stdout /dev/sdd /dev/sdd disk
[2022-05-31 13:15:55,989][ceph_volume.process][INFO ] stdout /dev/sde /dev/sde disk
[2022-05-31 13:15:55,989][ceph_volume.process][INFO ] stdout /dev/sdf /dev/sdf disk
[2022-05-31 13:15:55,989][ceph_volume.process][INFO ] stdout /dev/sdg /dev/sdg disk
[2022-05-31 13:15:55,989][ceph_volume.process][INFO ] stdout /dev/sdh /dev/sdh disk
[2022-05-31 13:15:55,989][ceph_volume.process][INFO ] stdout /dev/sdi /dev/sdi disk
[2022-05-31 13:15:55,989][ceph_volume.process][INFO ] stdout /dev/sdj /dev/sdj disk
[2022-05-31 13:15:55,989][ceph_volume.process][INFO ] stdout /dev/sdk /dev/sdk disk
[2022-05-31 13:15:55,989][ceph_volume.process][INFO ] stdout /dev/sdl /dev/sdl disk
[2022-05-31 13:15:55,989][ceph_volume.process][INFO ] stdout /dev/sdm /dev/sdm disk
[2022-05-31 13:15:55,989][ceph_volume.process][INFO ] stdout /dev/sdm1 /dev/sdm1 part
[2022-05-31 13:15:55,989][ceph_volume.process][INFO ] stdout /dev/sdm2 /dev/sdm2 part
[2022-05-31 13:15:55,989][ceph_volume.process][INFO ] stdout /dev/sdm3 /dev/sdm3 part
[2022-05-31 13:15:55,990][ceph_volume.process][INFO ] stdout /dev/sdm4 /dev/sdm4 part
[2022-05-31 13:15:55,990][ceph_volume.process][INFO ] stdout /dev/sdm5 /dev/sdm5 part
[2022-05-31 13:15:55,990][ceph_volume.process][INFO ] stdout /dev/sdn /dev/sdn disk
[2022-05-31 13:15:55,990][ceph_volume.process][INFO ] stdout /dev/sdn1 /dev/sdn1 part
[2022-05-31 13:15:55,990][ceph_volume.process][INFO ] stdout /dev/sdn2 /dev/sdn2 part
[2022-05-31 13:15:55,990][ceph_volume.process][INFO ] stdout /dev/dm-0 /dev/mapper/ceph--6354cda5--8eb9--45c7--b5fd--832cf25db520-osd--data--ff06d179--238c--4522--8732--02dd77906499 lvm
[2022-05-31 13:15:56,015][ceph_volume.process][INFO ] Running command: /usr/sbin/lvs --noheadings --readonly --separator=";" -a --units=b --nosuffix -S lv_path=/dev/sdn2 -o lv_tags,lv_path,lv_name,vg_name,lv_uuid,lv_size
[2022-05-31 13:15:56,205][ceph_volume.process][INFO ] Running command: /usr/bin/lsblk --nodeps -P -o NAME,KNAME,MAJ:MIN,FSTYPE,MOUNTPOINT,LABEL,UUID,RO,RM,MODEL,SIZE,STATE,OWNER,GROUP,MODE,ALIGNMENT,PHY-SEC,LOG-SEC,ROTA,SCHED,TYPE,DISC-ALN,DISC-GRAN,DISC-MAX,DISC-ZERO,PKNAME,PARTLABEL /dev/sdn2
[2022-05-31 13:15:56,210][ceph_volume.process][INFO ] stdout NAME="sdn2" KNAME="sdn2" MAJ:MIN="8:210" FSTYPE="" MOUNTPOINT="" LABEL="" UUID="" RO="0" RM="0" MODEL="" SIZE="91.8G" STATE="" OWNER="root" GROUP="disk" MODE="brw-rw----" ALIGNMENT="0" PHY-SEC="512" LOG-SEC="512" ROTA="0" SCHED="noop" TYPE="part" DISC-ALN="0" DISC-GRAN="512B" DISC-MAX="2G" DISC-ZERO="0" PKNAME="sdn" PARTLABEL="ceph-data"
[2022-05-31 13:15:56,211][ceph_volume.process][INFO ] Running command: /usr/sbin/blkid -p /dev/sdn2
[2022-05-31 13:15:56,224][ceph_volume.process][INFO ] stdout /dev/sdn2: PART_ENTRY_SCHEME="gpt" PART_ENTRY_NAME="ceph-data" PART_ENTRY_UUID="02ef1a82-f707-46ad-a1bd-0543b18b6194" PART_ENTRY_TYPE="0fc63daf-8483-4772-8e79-3d69d8477de4" PART_ENTRY_NUMBER="2" PART_ENTRY_OFFSET="41945088" PART_ENTRY_SIZE="192496527" PART_ENTRY_DISK="8:208"
[2022-05-31 13:15:56,225][ceph_volume.process][INFO ] Running command: /usr/sbin/pvs --noheadings --readonly --units=b --nosuffix --separator=";" -o vg_name,pv_count,lv_count,vg_attr,vg_extent_count,vg_free_count,vg_extent_size /dev/sdn2
[2022-05-31 13:15:56,293][ceph_volume.process][INFO ] stderr Failed to find physical volume "/dev/sdn2".
[2022-05-31 13:15:56,293][ceph_volume.process][INFO ] Running command: /usr/bin/ceph-bluestore-tool show-label --dev /dev/sdn2
[2022-05-31 13:15:56,322][ceph_volume.process][INFO ] stderr unable to read label for /dev/sdn2: (2) No such file or directory
[2022-05-31 13:15:56,323][ceph_volume.process][INFO ] Running command: /usr/bin/ceph-bluestore-tool show-label --dev /dev/sdn2
[2022-05-31 13:15:56,352][ceph_volume.process][INFO ] stderr unable to read label for /dev/sdn2: (2) No such file or directory
[2022-05-31 13:15:56,353][ceph_volume.process][INFO ] Running command: /usr/bin/udevadm info --query=property /dev/sdn2
[2022-05-31 13:15:56,357][ceph_volume.process][INFO ] stdout DEVPATH=/devices/pci0000:00/0000:00:1f.2/ata2/host3/target3:0:0/3:0:0:0/block/sdn/sdn2
[2022-05-31 13:15:56,357][ceph_volume.process][INFO ] stdout DEVNAME=/dev/sdn2
[2022-05-31 13:15:56,357][ceph_volume.process][INFO ] stdout DEVTYPE=partition
[2022-05-31 13:15:56,357][ceph_volume.process][INFO ] stdout PARTN=2
[2022-05-31 13:15:56,357][ceph_volume.process][INFO ] stdout PARTNAME=ceph-data
[2022-05-31 13:15:56,357][ceph_volume.process][INFO ] stdout MAJOR=8
[2022-05-31 13:15:56,357][ceph_volume.process][INFO ] stdout MINOR=210
[2022-05-31 13:15:56,358][ceph_volume.process][INFO ] stdout SUBSYSTEM=block
[2022-05-31 13:15:56,358][ceph_volume.process][INFO ] stdout USEC_INITIALIZED=5666124091
[2022-05-31 13:15:56,358][ceph_volume.process][INFO ] stdout SCSI_TPGS=0
[2022-05-31 13:15:56,358][ceph_volume.process][INFO ] stdout SCSI_TYPE=disk
[2022-05-31 13:15:56,358][ceph_volume.process][INFO ] stdout SCSI_VENDOR=ATA
[2022-05-31 13:15:56,358][ceph_volume.process][INFO ] stdout SCSI_VENDOR_ENC=ATA\x20\x20\x20\x20\x20
[2022-05-31 13:15:56,358][ceph_volume.process][INFO ] stdout SCSI_MODEL=PNY_CS900_120GB
[2022-05-31 13:15:56,358][ceph_volume.process][INFO ] stdout SCSI_MODEL_ENC=PNY\x20CS900\x20120GB\x20
[2022-05-31 13:15:56,358][ceph_volume.process][INFO ] stdout SCSI_REVISION=0613
[2022-05-31 13:15:56,358][ceph_volume.process][INFO ] stdout SCSI_IDENT_SERIAL=PNY4419191101150445B
[2022-05-31 13:15:56,358][ceph_volume.process][INFO ] stdout SCSI_IDENT_LUN_VENDOR=PNY4419191101150445B
[2022-05-31 13:15:56,358][ceph_volume.process][INFO ] stdout SCSI_IDENT_LUN_T10=ATA_PNY_CS900_120GB_SSD_PNY4419191101150445B
[2022-05-31 13:15:56,358][ceph_volume.process][INFO ] stdout SCSI_IDENT_LUN_ATA=PNY_CS900_120GB_SSD_PNY4419191101150445B
[2022-05-31 13:15:56,358][ceph_volume.process][INFO ] stdout SCSI_IDENT_LUN_NAA_REG=5f8db4c44190445b
[2022-05-31 13:15:56,359][ceph_volume.process][INFO ] stdout ID_SCSI=1
[2022-05-31 13:15:56,359][ceph_volume.process][INFO ] stdout ID_VENDOR=ATA
[2022-05-31 13:15:56,359][ceph_volume.process][INFO ] stdout ID_VENDOR_ENC=ATA\x20\x20\x20\x20\x20
[2022-05-31 13:15:56,359][ceph_volume.process][INFO ] stdout ID_MODEL=PNY_CS900_120GB
[2022-05-31 13:15:56,359][ceph_volume.process][INFO ] stdout ID_MODEL_ENC=PNY\x20CS900\x20120GB\x20
[2022-05-31 13:15:56,359][ceph_volume.process][INFO ] stdout ID_REVISION=0613
[2022-05-31 13:15:56,359][ceph_volume.process][INFO ] stdout ID_TYPE=disk
[2022-05-31 13:15:56,359][ceph_volume.process][INFO ] stdout ID_WWN=0x5f8db4c44190445b
[2022-05-31 13:15:56,359][ceph_volume.process][INFO ] stdout ID_WWN_WITH_EXTENSION=0x5f8db4c44190445b
[2022-05-31 13:15:56,359][ceph_volume.process][INFO ] stdout ID_BUS=ata
[2022-05-31 13:15:56,359][ceph_volume.process][INFO ] stdout ID_ATA=1
[2022-05-31 13:15:56,359][ceph_volume.process][INFO ] stdout ID_SERIAL=PNY_CS900_120GB_SSD_PNY4419191101150445B
[2022-05-31 13:15:56,359][ceph_volume.process][INFO ] stdout ID_SERIAL_SHORT=PNY4419191101150445B
[2022-05-31 13:15:56,359][ceph_volume.process][INFO ] stdout DM_MULTIPATH_DEVICE_PATH=0
[2022-05-31 13:15:56,360][ceph_volume.process][INFO ] stdout ID_SCSI_INQUIRY=1
[2022-05-31 13:15:56,360][ceph_volume.process][INFO ] stdout ID_PATH=pci-0000:00:1f.2-ata-2
[2022-05-31 13:15:56,360][ceph_volume.process][INFO ] stdout ID_PATH_TAG=pci-0000_00_1f_2-ata-2
[2022-05-31 13:15:56,360][ceph_volume.process][INFO ] stdout ID_PART_TABLE_UUID=9a98def8-05a4-4ea9-b06c-7b134afcb5fb
[2022-05-31 13:15:56,360][ceph_volume.process][INFO ] stdout ID_PART_TABLE_TYPE=gpt
[2022-05-31 13:15:56,360][ceph_volume.process][INFO ] stdout ID_PART_ENTRY_SCHEME=gpt
[2022-05-31 13:15:56,360][ceph_volume.process][INFO ] stdout ID_PART_ENTRY_NAME=ceph-data
[2022-05-31 13:15:56,360][ceph_volume.process][INFO ] stdout ID_PART_ENTRY_UUID=02ef1a82-f707-46ad-a1bd-0543b18b6194
[2022-05-31 13:15:56,360][ceph_volume.process][INFO ] stdout ID_PART_ENTRY_TYPE=0fc63daf-8483-4772-8e79-3d69d8477de4
[2022-05-31 13:15:56,360][ceph_volume.process][INFO ] stdout ID_PART_ENTRY_NUMBER=2
[2022-05-31 13:15:56,360][ceph_volume.process][INFO ] stdout ID_PART_ENTRY_OFFSET=41945088
[2022-05-31 13:15:56,360][ceph_volume.process][INFO ] stdout ID_PART_ENTRY_SIZE=192496527
[2022-05-31 13:15:56,360][ceph_volume.process][INFO ] stdout ID_PART_ENTRY_DISK=8:208
[2022-05-31 13:15:56,360][ceph_volume.process][INFO ] stdout DEVLINKS=/dev/disk/by-id/scsi-1ATA_PNY_CS900_120GB_SSD_PNY4419191101150445B-part2 /dev/disk/by-id/scsi-35f8db4c44190445b-part2 /dev/disk/by-path/pci-0000:00:1f.2-ata-2-part2 /dev/disk/by-partuuid/02ef1a82-f707-46ad-a1bd-0543b18b6194 /dev/disk/by-id/wwn-0x5f8db4c44190445b-part2 /dev/disk/by-id/scsi-SATA_PNY_CS900_120GB_PNY4419191101150445B-part2 /dev/disk/by-partlabel/ceph-data /dev/disk/by-id/ata-PNY_CS900_120GB_SSD_PNY4419191101150445B-part2 /dev/disk/by-id/scsi-0ATA_PNY_CS900_120GB_PNY4419191101150445B-part2
[2022-05-31 13:15:56,361][ceph_volume.process][INFO ] stdout TAGS=:systemd:
[2022-05-31 13:15:56,361][ceph_volume.api.lvm][WARNING] device is not part of ceph: None
[2022-05-31 13:15:56,362][ceph_volume.process][INFO ] Running command: /usr/bin/ceph-authtool --gen-print-key
[2022-05-31 13:15:56,382][ceph_volume.process][INFO ] stdout AQD8BpZiXzm6FhAAddGnc78irvtxEvN2iJbxOg==
[2022-05-31 13:15:56,383][ceph_volume.process][INFO ] Running command: /usr/bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring -i - osd new 99936872-8847-42d0-a644-7c6568f7d662
[2022-05-31 13:15:56,973][ceph_volume.process][INFO ] stdout 0
[2022-05-31 13:15:56,973][ceph_volume.process][INFO ] stderr 2022-05-31T13:15:56.511+0100 7f644d8a1700 -1 auth: unable to find a keyring on /etc/ceph/ceph.client.bootstrap-osd.keyring,/etc/ceph/ceph.keyring,/etc/ceph/keyring,/etc/ceph/keyring.bin,: (2) No such file or directory
[2022-05-31 13:15:56,973][ceph_volume.process][INFO ] stderr 2022-05-31T13:15:56.511+0100 7f644d8a1700 -1 AuthRegistry(0x7f64480595f0) no keyring found at /etc/ceph/ceph.client.bootstrap-osd.keyring,/etc/ceph/ceph.keyring,/etc/ceph/keyring,/etc/ceph/keyring.bin,, disabling cephx
[2022-05-31 13:15:56,973][ceph_volume.process][INFO ] Running command: /usr/bin/lsblk --nodeps -P -o NAME,KNAME,MAJ:MIN,FSTYPE,MOUNTPOINT,LABEL,UUID,RO,RM,MODEL,SIZE,STATE,OWNER,GROUP,MODE,ALIGNMENT,PHY-SEC,LOG-SEC,ROTA,SCHED,TYPE,DISC-ALN,DISC-GRAN,DISC-MAX,DISC-ZERO,PKNAME,PARTLABEL /dev/sdn1
[2022-05-31 13:15:56,978][ceph_volume.process][INFO ] stdout NAME="sdn1" KNAME="sdn1" MAJ:MIN="8:209" FSTYPE="" MOUNTPOINT="" LABEL="" UUID="" RO="0" RM="0" MODEL="" SIZE="20G" STATE="" OWNER="root" GROUP="disk" MODE="brw-rw----" ALIGNMENT="0" PHY-SEC="512" LOG-SEC="512" ROTA="0" SCHED="noop" TYPE="part" DISC-ALN="0" DISC-GRAN="512B" DISC-MAX="2G" DISC-ZERO="0" PKNAME="sdn" PARTLABEL="ceph-journal"
[2022-05-31 13:15:56,979][ceph_volume.process][INFO ] Running command: /usr/sbin/blkid -s PARTUUID -o value /dev/sdn1
[2022-05-31 13:15:56,988][ceph_volume.process][INFO ] stdout 5e0b68da-7c1f-4271-9d25-00d66167a53d
[2022-05-31 13:15:56,989][ceph_volume.process][INFO ] Running command: /usr/bin/lsblk --nodeps -P -o NAME,KNAME,MAJ:MIN,FSTYPE,MOUNTPOINT,LABEL,UUID,RO,RM,MODEL,SIZE,STATE,OWNER,GROUP,MODE,ALIGNMENT,PHY-SEC,LOG-SEC,ROTA,SCHED,TYPE,DISC-ALN,DISC-GRAN,DISC-MAX,DISC-ZERO,PKNAME,PARTLABEL /dev/sdn2
[2022-05-31 13:15:56,994][ceph_volume.process][INFO ] stdout NAME="sdn2" KNAME="sdn2" MAJ:MIN="8:210" FSTYPE="" MOUNTPOINT="" LABEL="" UUID="" RO="0" RM="0" MODEL="" SIZE="91.8G" STATE="" OWNER="root" GROUP="disk" MODE="brw-rw----" ALIGNMENT="0" PHY-SEC="512" LOG-SEC="512" ROTA="0" SCHED="noop" TYPE="part" DISC-ALN="0" DISC-GRAN="512B" DISC-MAX="2G" DISC-ZERO="0" PKNAME="sdn" PARTLABEL="ceph-data"
[2022-05-31 13:15:56,995][ceph_volume.devices.lvm.prepare][DEBUG ] data device size: 0.00 B
[2022-05-31 13:15:56,995][ceph_volume.process][INFO ] Running command: /usr/sbin/pvs --noheadings --readonly --units=b --nosuffix --separator=";" -o vg_name,pv_count,lv_count,vg_attr,vg_extent_count,vg_free_count,vg_extent_size /dev/sdn2
[2022-05-31 13:15:57,057][ceph_volume.process][INFO ] stderr Failed to find physical volume "/dev/sdn2".
[2022-05-31 13:15:57,058][ceph_volume.process][INFO ] Running command: /usr/sbin/vgcreate --force --yes ceph-577a30cd-ea0b-41a1-8aed-276d030b1e92 /dev/sdn2
[2022-05-31 13:15:57,108][ceph_volume.process][INFO ] stdout Physical volume "/dev/sdn2" successfully created.
[2022-05-31 13:15:57,114][ceph_volume.process][INFO ] stdout Volume group "ceph-577a30cd-ea0b-41a1-8aed-276d030b1e92" successfully created
[2022-05-31 13:15:57,169][ceph_volume.process][INFO ] Running command: /usr/sbin/vgs --noheadings --readonly --units=b --nosuffix --separator=";" -S vg_name=ceph-577a30cd-ea0b-41a1-8aed-276d030b1e92 -o vg_name,pv_count,lv_count,vg_attr,vg_extent_count,vg_free_count,vg_extent_size
[2022-05-31 13:15:57,245][ceph_volume.process][INFO ] stdout ceph-577a30cd-ea0b-41a1-8aed-276d030b1e92";"1";"0";"wz--n-";"23497";"23497";"4194304
[2022-05-31 13:15:57,246][ceph_volume.api.lvm][DEBUG ] slots was passed: 1 -> 23497
[2022-05-31 13:15:57,246][ceph_volume.process][INFO ] Running command: /usr/sbin/lvcreate --yes -l 23497 -n osd-data-99936872-8847-42d0-a644-7c6568f7d662 ceph-577a30cd-ea0b-41a1-8aed-276d030b1e92
[2022-05-31 13:15:57,309][ceph_volume.process][INFO ] stdout Logical volume "osd-data-99936872-8847-42d0-a644-7c6568f7d662" created.
[2022-05-31 13:15:57,357][ceph_volume.process][INFO ] Running command: /usr/sbin/lvs --noheadings --readonly --separator=";" -a --units=b --nosuffix -S lv_name=osd-data-99936872-8847-42d0-a644-7c6568f7d662,vg_name=ceph-577a30cd-ea0b-41a1-8aed-276d030b1e92 -o lv_tags,lv_path,lv_name,vg_name,lv_uuid,lv_size
[2022-05-31 13:15:57,429][ceph_volume.process][INFO ] stdout ";"/dev/ceph-577a30cd-ea0b-41a1-8aed-276d030b1e92/osd-data-99936872-8847-42d0-a644-7c6568f7d662";"osd-data-99936872-8847-42d0-a644-7c6568f7d662";"ceph-577a30cd-ea0b-41a1-8aed-276d030b1e92";"SLnRA9-j2Wf-ZXj0-KF9H-9Lnz-99pV-vRujH9";"98553561088
[2022-05-31 13:15:57,429][ceph_volume.process][INFO ] Running command: /usr/sbin/lvchange --addtag ceph.type=data --addtag ceph.data_device=/dev/ceph-577a30cd-ea0b-41a1-8aed-276d030b1e92/osd-data-99936872-8847-42d0-a644-7c6568f7d662 /dev/ceph-577a30cd-ea0b-41a1-8aed-276d030b1e92/osd-data-99936872-8847-42d0-a644-7c6568f7d662
[2022-05-31 13:15:57,517][ceph_volume.process][INFO ] stdout Logical volume ceph-577a30cd-ea0b-41a1-8aed-276d030b1e92/osd-data-99936872-8847-42d0-a644-7c6568f7d662 changed.
[2022-05-31 13:15:57,518][ceph_volume.process][INFO ] Running command: /usr/sbin/lvchange --deltag ceph.type=data --deltag ceph.data_device=/dev/ceph-577a30cd-ea0b-41a1-8aed-276d030b1e92/osd-data-99936872-8847-42d0-a644-7c6568f7d662 /dev/ceph-577a30cd-ea0b-41a1-8aed-276d030b1e92/osd-data-99936872-8847-42d0-a644-7c6568f7d662
[2022-05-31 13:15:57,597][ceph_volume.process][INFO ] stdout Logical volume ceph-577a30cd-ea0b-41a1-8aed-276d030b1e92/osd-data-99936872-8847-42d0-a644-7c6568f7d662 changed.
[2022-05-31 13:15:57,598][ceph_volume.process][INFO ] Running command: /usr/sbin/lvchange --addtag ceph.osd_fsid=99936872-8847-42d0-a644-7c6568f7d662 --addtag ceph.osd_id=0 --addtag ceph.cluster_fsid=b9fb70da-2714-4440-a22a-08302a728bb3 --addtag ceph.cluster_name=ceph --addtag ceph.crush_device_class=None --addtag ceph.osdspec_affinity= --addtag ceph.cephx_lockbox_secret= --addtag ceph.encrypted=0 --addtag ceph.type=data --addtag ceph.vdo=0 --addtag ceph.journal_uuid=5e0b68da-7c1f-4271-9d25-00d66167a53d --addtag ceph.journal_device=/dev/sdn1 --addtag ceph.data_device=/dev/ceph-577a30cd-ea0b-41a1-8aed-276d030b1e92/osd-data-99936872-8847-42d0-a644-7c6568f7d662 --addtag ceph.data_uuid=SLnRA9-j2Wf-ZXj0-KF9H-9Lnz-99pV-vRujH9 /dev/ceph-577a30cd-ea0b-41a1-8aed-276d030b1e92/osd-data-99936872-8847-42d0-a644-7c6568f7d662
[2022-05-31 13:15:57,681][ceph_volume.process][INFO ] stdout Logical volume ceph-577a30cd-ea0b-41a1-8aed-276d030b1e92/osd-data-99936872-8847-42d0-a644-7c6568f7d662 changed.
[2022-05-31 13:15:57,682][ceph_volume.process][INFO ] Running command: /usr/bin/ceph-authtool --gen-print-key
[2022-05-31 13:15:57,703][ceph_volume.process][INFO ] stdout AQD9BpZid1LSKRAAdVigDmbcfLJnX3v/dCPB5w==
[2022-05-31 13:15:57,704][ceph_volume.process][INFO ] Running command: /usr/sbin/mkfs -t xfs -f -i size=2048 /dev/ceph-577a30cd-ea0b-41a1-8aed-276d030b1e92/osd-data-99936872-8847-42d0-a644-7c6568f7d662
[2022-05-31 13:15:57,902][ceph_volume.process][INFO ] stdout meta-data=/dev/ceph-577a30cd-ea0b-41a1-8aed-276d030b1e92/osd-data-99936872-8847-42d0-a644-7c6568f7d662 isize=2048 agcount=4, agsize=6015232 blks
= sectsz=512 attr=2, projid32bit=1
= crc=1 finobt=1, sparse=1, rmapbt=0
= reflink=1
data = bsize=4096 blocks=24060928, imaxpct=25
= sunit=0 swidth=0 blks
naming =version 2 bsize=4096 ascii-ci=0, ftype=1
log =internal log bsize=4096 blocks=11748, version=2
= sectsz=512 sunit=0 blks, lazy-count=1
realtime =none extsz=4096 blocks=0, rtextents=0
[2022-05-31 13:15:57,903][ceph_volume.process][INFO ] Running command: /usr/bin/mount -t xfs -o rw,noatime,inode64 /dev/ceph-577a30cd-ea0b-41a1-8aed-276d030b1e92/osd-data-99936872-8847-42d0-a644-7c6568f7d662 /var/lib/ceph/osd/ceph-0
[2022-05-31 13:15:57,907][ceph_volume.process][INFO ] stderr mount: /var/lib/ceph/osd/ceph-0: mount(2) system call failed: Operation not supported.
[2022-05-31 13:15:57,907][ceph_volume.devices.lvm.prepare][ERROR ] lvm prepare was unable to complete
Traceback (most recent call last):
File "/usr/lib/python3/dist-packages/ceph_volume/devices/lvm/prepare.py", line 252, in safe_prepare
self.prepare()
File "/usr/lib/python3/dist-packages/ceph_volume/decorators.py", line 16, in is_root
return func(*a, **kw)
File "/usr/lib/python3/dist-packages/ceph_volume/devices/lvm/prepare.py", line 346, in prepare
prepare_filestore(
File "/usr/lib/python3/dist-packages/ceph_volume/devices/lvm/prepare.py", line 65, in prepare_filestore
prepare_utils.mount_osd(device, osd_id, is_vdo=is_vdo)
File "/usr/lib/python3/dist-packages/ceph_volume/util/prepare.py", line 320, in mount_osd
process.run(command)
File "/usr/lib/python3/dist-packages/ceph_volume/process.py", line 153, in run
raise RuntimeError(msg)
RuntimeError: command returned non-zero exit status: 32
[2022-05-31 13:15:57,910][ceph_volume.devices.lvm.prepare][INFO ] will rollback OSD ID creation
[2022-05-31 13:15:57,910][ceph_volume.process][INFO ] Running command: /usr/bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring osd purge-new osd.0 --yes-i-really-mean-it
[2022-05-31 13:15:58,055][ceph_volume.process][INFO ] stderr 2022-05-31T13:15:58.043+0100 7f686b426700 -1 auth: unable to find a keyring on /etc/ceph/ceph.client.bootstrap-osd.keyring,/etc/ceph/ceph.keyring,/etc/ceph/keyring,/etc/ceph/keyring.bin,: (2) No such file or directory
2022-05-31T13:15:58.043+0100 7f686b426700 -1 AuthRegistry(0x7f68640595f0) no keyring found at /etc/ceph/ceph.client.bootstrap-osd.keyring,/etc/ceph/ceph.keyring,/etc/ceph/keyring,/etc/ceph/keyring.bin,, disabling cephx
[2022-05-31 13:15:58,494][ceph_volume.process][INFO ] stderr purged osd.0
[2022-05-31 13:15:58,506][ceph_volume][ERROR ] exception caught by decorator
Traceback (most recent call last):
File "/usr/lib/python3/dist-packages/ceph_volume/decorators.py", line 59, in newfunc
return f(*a, **kw)
File "/usr/lib/python3/dist-packages/ceph_volume/main.py", line 152, in main
terminal.dispatch(self.mapper, subcommand_args)
File "/usr/lib/python3/dist-packages/ceph_volume/terminal.py", line 194, in dispatch
instance.main()
File "/usr/lib/python3/dist-packages/ceph_volume/devices/lvm/main.py", line 46, in main
terminal.dispatch(self.mapper, self.argv)
File "/usr/lib/python3/dist-packages/ceph_volume/terminal.py", line 194, in dispatch
instance.main()
File "/usr/lib/python3/dist-packages/ceph_volume/devices/lvm/prepare.py", line 441, in main
self.safe_prepare()
File "/usr/lib/python3/dist-packages/ceph_volume/devices/lvm/prepare.py", line 252, in safe_prepare
self.prepare()
File "/usr/lib/python3/dist-packages/ceph_volume/decorators.py", line 16, in is_root
return func(*a, **kw)
File "/usr/lib/python3/dist-packages/ceph_volume/devices/lvm/prepare.py", line 346, in prepare
prepare_filestore(
File "/usr/lib/python3/dist-packages/ceph_volume/devices/lvm/prepare.py", line 65, in prepare_filestore
prepare_utils.mount_osd(device, osd_id, is_vdo=is_vdo)
File "/usr/lib/python3/dist-packages/ceph_volume/util/prepare.py", line 320, in mount_osd
process.run(command)
File "/usr/lib/python3/dist-packages/ceph_volume/process.py", line 153, in run
raise RuntimeError(msg)
RuntimeError: command returned non-zero exit status: 32
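Both of these logs fail at the same step: mkfs.xfs succeeds, but the subsequent mount of the fresh OSD filesystem returns "mount(2) system call failed: Operation not supported". A sketch for reproducing that step by hand and capturing the kernel's reason, assuming the logical volume shown in the log still exists (otherwise substitute the path lvs reports):
# re-run the exact mount that ceph-volume attempted
mkdir -p /var/lib/ceph/osd/ceph-0
mount -t xfs -o rw,noatime,inode64 /dev/ceph-577a30cd-ea0b-41a1-8aed-276d030b1e92/osd-data-99936872-8847-42d0-a644-7c6568f7d662 /var/lib/ceph/osd/ceph-0
# the kernel log usually states why the mount was rejected
dmesg | tail -n 20
If dmesg names an unsupported XFS feature, that would point at the filesystem options rather than the controller, which is consistent with the same failure appearing on both the LSI and the onboard Intel ports.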
[2022-05-31 13:15:57,682][ceph_volume.process][INFO ] Running command: /usr/bin/ceph-authtool --gen-print-key
[2022-05-31 13:15:57,703][ceph_volume.process][INFO ] stdout AQD9BpZid1LSKRAAdVigDmbcfLJnX3v/dCPB5w==
[2022-05-31 13:15:57,704][ceph_volume.process][INFO ] Running command: /usr/sbin/mkfs -t xfs -f -i size=2048 /dev/ceph-577a30cd-ea0b-41a1-8aed-276d030b1e92/osd-data-99936872-8847-42d0-a644-7c6568f7d662
[2022-05-31 13:15:57,902][ceph_volume.process][INFO ] stdout meta-data=/dev/ceph-577a30cd-ea0b-41a1-8aed-276d030b1e92/osd-data-99936872-8847-42d0-a644-7c6568f7d662 isize=2048 agcount=4, agsize=6015232 blks
= sectsz=512 attr=2, projid32bit=1
= crc=1 finobt=1, sparse=1, rmapbt=0
= reflink=1
data = bsize=4096 blocks=24060928, imaxpct=25
= sunit=0 swidth=0 blks
naming =version 2 bsize=4096 ascii-ci=0, ftype=1
log =internal log bsize=4096 blocks=11748, version=2
= sectsz=512 sunit=0 blks, lazy-count=1
realtime =none extsz=4096 blocks=0, rtextents=0
[2022-05-31 13:15:57,903][ceph_volume.process][INFO ] Running command: /usr/bin/mount -t xfs -o rw,noatime,inode64 /dev/ceph-577a30cd-ea0b-41a1-8aed-276d030b1e92/osd-data-99936872-8847-42d0-a644-7c6568f7d662 /var/lib/ceph/osd/ceph-0
[2022-05-31 13:15:57,907][ceph_volume.process][INFO ] stderr mount: /var/lib/ceph/osd/ceph-0: mount(2) system call failed: Operation not supported.
[2022-05-31 13:15:57,907][ceph_volume.devices.lvm.prepare][ERROR ] lvm prepare was unable to complete
Traceback (most recent call last):
File "/usr/lib/python3/dist-packages/ceph_volume/devices/lvm/prepare.py", line 252, in safe_prepare
self.prepare()
File "/usr/lib/python3/dist-packages/ceph_volume/decorators.py", line 16, in is_root
return func(*a, **kw)
File "/usr/lib/python3/dist-packages/ceph_volume/devices/lvm/prepare.py", line 346, in prepare
prepare_filestore(
File "/usr/lib/python3/dist-packages/ceph_volume/devices/lvm/prepare.py", line 65, in prepare_filestore
prepare_utils.mount_osd(device, osd_id, is_vdo=is_vdo)
File "/usr/lib/python3/dist-packages/ceph_volume/util/prepare.py", line 320, in mount_osd
process.run(command)
File "/usr/lib/python3/dist-packages/ceph_volume/process.py", line 153, in run
raise RuntimeError(msg)
RuntimeError: command returned non-zero exit status: 32
[2022-05-31 13:15:57,910][ceph_volume.devices.lvm.prepare][INFO ] will rollback OSD ID creation
[2022-05-31 13:15:57,910][ceph_volume.process][INFO ] Running command: /usr/bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring osd purge-new osd.0 --yes-i-really-mean-it
[2022-05-31 13:15:58,055][ceph_volume.process][INFO ] stderr 2022-05-31T13:15:58.043+0100 7f686b426700 -1 auth: unable to find a keyring on /etc/ceph/ceph.client.bootstrap-osd.keyring,/etc/ceph/ceph.keyring,/etc/ceph/keyring,/etc/ceph/keyring.bin,: (2) No such file or directory
2022-05-31T13:15:58.043+0100 7f686b426700 -1 AuthRegistry(0x7f68640595f0) no keyring found at /etc/ceph/ceph.client.bootstrap-osd.keyring,/etc/ceph/ceph.keyring,/etc/ceph/keyring,/etc/ceph/keyring.bin,, disabling cephx
[2022-05-31 13:15:58,494][ceph_volume.process][INFO ] stderr purged osd.0
[2022-05-31 13:15:58,506][ceph_volume][ERROR ] exception caught by decorator
Traceback (most recent call last):
File "/usr/lib/python3/dist-packages/ceph_volume/decorators.py", line 59, in newfunc
return f(*a, **kw)
File "/usr/lib/python3/dist-packages/ceph_volume/main.py", line 152, in main
terminal.dispatch(self.mapper, subcommand_args)
File "/usr/lib/python3/dist-packages/ceph_volume/terminal.py", line 194, in dispatch
instance.main()
File "/usr/lib/python3/dist-packages/ceph_volume/devices/lvm/main.py", line 46, in main
terminal.dispatch(self.mapper, self.argv)
File "/usr/lib/python3/dist-packages/ceph_volume/terminal.py", line 194, in dispatch
instance.main()
File "/usr/lib/python3/dist-packages/ceph_volume/devices/lvm/prepare.py", line 441, in main
self.safe_prepare()
File "/usr/lib/python3/dist-packages/ceph_volume/devices/lvm/prepare.py", line 252, in safe_prepare
self.prepare()
File "/usr/lib/python3/dist-packages/ceph_volume/decorators.py", line 16, in is_root
return func(*a, **kw)
File "/usr/lib/python3/dist-packages/ceph_volume/devices/lvm/prepare.py", line 346, in prepare
prepare_filestore(
File "/usr/lib/python3/dist-packages/ceph_volume/devices/lvm/prepare.py", line 65, in prepare_filestore
prepare_utils.mount_osd(device, osd_id, is_vdo=is_vdo)
File "/usr/lib/python3/dist-packages/ceph_volume/util/prepare.py", line 320, in mount_osd
process.run(command)
File "/usr/lib/python3/dist-packages/ceph_volume/process.py", line 153, in run
raise RuntimeError(msg)
RuntimeError: command returned non-zero exit status: 32
Last edited on May 31, 2022, 3:23 pm by setantatx · #10
[2022-05-24 17:26:14,884][ceph_volume.process][INFO ] stdout TAGS=:systemd:
[2022-05-24 17:26:14,885][ceph_volume.api.lvm][WARNING] device is not part of ceph: Non e
[2022-05-24 17:26:14,886][ceph_volume.process][INFO ] Running command: /usr/bin/ceph-a uthtool --gen-print-key
[2022-05-24 17:26:14,908][ceph_volume.process][INFO ] stdout AQAmB41i9JUNNhAAMGbvb2oD/ 87cxmVAwTUtOg==
[2022-05-24 17:26:14,909][ceph_volume.process][INFO ] Running command: /usr/bin/ceph - -cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.ke yring -i - osd new 85f574b4-282f-428b-9324-3a5feb28a4e3
[2022-05-24 17:26:15,464][ceph_volume.process][INFO ] stdout 0
[2022-05-24 17:26:15,464][ceph_volume.process][INFO ] stderr 2022-05-24T17:26:15.033+0 100 7f212424b700 -1 auth: unable to find a keyring on /etc/ceph/ceph.client.bootstrap-o sd.keyring,/etc/ceph/ceph.keyring,/etc/ceph/keyring,/etc/ceph/keyring.bin,: (2) No such file or directory
[2022-05-24 17:26:15,465][ceph_volume.process][INFO ] stderr 2022-05-24T17:26:15.033+0 100 7f212424b700 -1 AuthRegistry(0x7f211c0595f0) no keyring found at /etc/ceph/ceph.cli ent.bootstrap-osd.keyring,/etc/ceph/ceph.keyring,/etc/ceph/keyring,/etc/ceph/keyring.bi n,, disabling cephx
[2022-05-24 17:26:15,465][ceph_volume.process][INFO ] Running command: /usr/bin/lsblk --nodeps -P -o NAME,KNAME,MAJ:MIN,FSTYPE,MOUNTPOINT,LABEL,UUID,RO,RM,MODEL,SIZE,STATE,O WNER,GROUP,MODE,ALIGNMENT,PHY-SEC,LOG-SEC,ROTA,SCHED,TYPE,DISC-ALN,DISC-GRAN,DISC-MAX,D ISC-ZERO,PKNAME,PARTLABEL /dev/sda1
[2022-05-24 17:26:15,469][ceph_volume.process][INFO ] stdout NAME="sda1" KNAME="sda1" MAJ:MIN="8:1" FSTYPE="" MOUNTPOINT="" LABEL="" UUID="" RO="0" RM="0" MODEL="" SIZE="20G " STATE="" OWNER="root" GROUP="disk" MODE="brw-rw----" ALIGNMENT="0" PHY-SEC="4096" LOG -SEC="512" ROTA="1" SCHED="cfq" TYPE="part" DISC-ALN="0" DISC-GRAN="0B" DISC-MAX="0B" D ISC-ZERO="0" PKNAME="sda" PARTLABEL="ceph-journal"
[2022-05-24 17:26:15,470][ceph_volume.process][INFO ] Running command: /usr/sbin/blkid -s PARTUUID -o value /dev/sda1
[2022-05-24 17:26:15,503][ceph_volume.process][INFO ] stdout bb5848bf-2734-4648-bc97-9 097a2f67a78
[2022-05-24 17:26:15,504][ceph_volume.process][INFO ] Running command: /usr/bin/lsblk --nodeps -P -o NAME,KNAME,MAJ:MIN,FSTYPE,MOUNTPOINT,LABEL,UUID,RO,RM,MODEL,SIZE,STATE,O WNER,GROUP,MODE,ALIGNMENT,PHY-SEC,LOG-SEC,ROTA,SCHED,TYPE,DISC-ALN,DISC-GRAN,DISC-MAX,D ISC-ZERO,PKNAME,PARTLABEL /dev/sda2
[2022-05-24 17:26:15,508][ceph_volume.process][INFO ] stdout NAME="sda2" KNAME="sda2" MAJ:MIN="8:2" FSTYPE="LVM2_member" MOUNTPOINT="" LABEL="" UUID="3jkQAm-10vw-K46l-JeFt-H GMa-QNtH-eGjTJU" RO="0" RM="0" MODEL="" SIZE="18.2T" STATE="" OWNER="root" GROUP="disk" MODE="brw-rw----" ALIGNMENT="0" PHY-SEC="4096" LOG-SEC="512" ROTA="1" SCHED="cfq" TYPE ="part" DISC-ALN="0" DISC-GRAN="0B" DISC-MAX="0B" DISC-ZERO="0" PKNAME="sda" PARTLABEL= "ceph-data"
[2022-05-24 17:26:15,508][ceph_volume.devices.lvm.prepare][DEBUG ] data device size: 0. 00 B
[2022-05-24 17:26:15,509][ceph_volume.process][INFO ] Running command: /usr/sbin/pvs - -noheadings --readonly --units=b --nosuffix --separator=";" -o vg_name,pv_count,lv_coun t,vg_attr,vg_extent_count,vg_free_count,vg_extent_size /dev/sda2
[2022-05-24 17:26:15,570][ceph_volume.process][INFO ] stdout ceph-1cbcc4b0-3ccc-431b-b ff6-3c03b1dc9558";"1";"1";"wz--n-";"4762879";"0";"4194304
[2022-05-24 17:26:15,571][ceph_volume.api.lvm][DEBUG ] slots was passed: 1 -> 4762879
[2022-05-24 17:26:15,571][ceph_volume.process][INFO ] Running command: /usr/sbin/lvcre ate --yes -l 4762879 -n osd-data-85f574b4-282f-428b-9324-3a5feb28a4e3 ceph-1cbcc4b0-3cc c-431b-bff6-3c03b1dc9558
[2022-05-24 17:26:15,591][ceph_volume.process][INFO ] stderr Volume group "ceph-1cbcc4 b0-3ccc-431b-bff6-3c03b1dc9558" has insufficient free space (0 extents): 4762879 requir ed.
[2022-05-24 17:26:15,634][ceph_volume.devices.lvm.prepare][ERROR ] lvm prepare was unab le to complete
Traceback (most recent call last):
File "/usr/lib/python3/dist-packages/ceph_volume/devices/lvm/prepare.py", line 252, i n safe_prepare
self.prepare()
File "/usr/lib/python3/dist-packages/ceph_volume/decorators.py", line 16, in is_root
return func(*a, **kw)
File "/usr/lib/python3/dist-packages/ceph_volume/devices/lvm/prepare.py", line 334, i n prepare
data_lv = self.prepare_data_device('data', osd_fsid)
File "/usr/lib/python3/dist-packages/ceph_volume/devices/lvm/prepare.py", line 218, i n prepare_data_device
return api.create_lv(
File "/usr/lib/python3/dist-packages/ceph_volume/api/lvm.py", line 971, in create_lv
process.run(command)
File "/usr/lib/python3/dist-packages/ceph_volume/process.py", line 153, in run
raise RuntimeError(msg)
RuntimeError: command returned non-zero exit status: 5
[2022-05-24 17:26:15,636][ceph_volume.devices.lvm.prepare][INFO ] will rollback OSD ID creation
[2022-05-24 17:26:15,636][ceph_volume.process][INFO ] Running command: /usr/bin/ceph - -cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.ke yring osd purge-new osd.0 --yes-i-really-mean-it
[2022-05-24 17:26:15,777][ceph_volume.process][INFO ] stderr 2022-05-24T17:26:15.761+0 100 7f5b2d823700 -1 auth: unable to find a keyring on /etc/ceph/ceph.client.bootstrap-o sd.keyring,/etc/ceph/ceph.keyring,/etc/ceph/keyring,/etc/ceph/keyring.bin,: (2) No such file or directory
2022-05-24T17:26:15.761+0100 7f5b2d823700 -1 AuthRegistry(0x7f5b280595f0) no keyring fo und at /etc/ceph/ceph.client.bootstrap-osd.keyring,/etc/ceph/ceph.keyring,/etc/ceph/key ring,/etc/ceph/keyring.bin,, disabling cephx
[2022-05-24 17:26:16,187][ceph_volume.process][INFO ] stderr purged osd.0
[2022-05-24 17:26:16,201][ceph_volume][ERROR ] exception caught by decorator
Traceback (most recent call last):
File "/usr/lib/python3/dist-packages/ceph_volume/decorators.py", line 59, in newfunc
return f(*a, **kw)
File "/usr/lib/python3/dist-packages/ceph_volume/main.py", line 152, in main
terminal.dispatch(self.mapper, subcommand_args)
File "/usr/lib/python3/dist-packages/ceph_volume/terminal.py", line 194, in dispatch
instance.main()
File "/usr/lib/python3/dist-packages/ceph_volume/devices/lvm/main.py", line 46, in ma in
terminal.dispatch(self.mapper, self.argv)
File "/usr/lib/python3/dist-packages/ceph_volume/terminal.py", line 194, in dispatch
instance.main()
File "/usr/lib/python3/dist-packages/ceph_volume/devices/lvm/prepare.py", line 441, i n main
self.safe_prepare()
File "/usr/lib/python3/dist-packages/ceph_volume/devices/lvm/prepare.py", line 252, i n safe_prepare
self.prepare()
File "/usr/lib/python3/dist-packages/ceph_volume/decorators.py", line 16, in is_root
return func(*a, **kw)
File "/usr/lib/python3/dist-packages/ceph_volume/devices/lvm/prepare.py", line 334, i n prepare
data_lv = self.prepare_data_device('data', osd_fsid)
File "/usr/lib/python3/dist-packages/ceph_volume/devices/lvm/prepare.py", line 218, i n prepare_data_device
return api.create_lv(
File "/usr/lib/python3/dist-packages/ceph_volume/api/lvm.py", line 971, in create_lv
process.run(command)
File "/usr/lib/python3/dist-packages/ceph_volume/process.py", line 153, in run
raise RuntimeError(msg)
RuntimeError: command returned non-zero exit status: 5
root@ps-node1:~# tail /opt/petasan/log/ceph-volume.log
return func(*a, **kw)
File "/usr/lib/python3/dist-packages/ceph_volume/devices/lvm/prepare.py", line 334, in prepare
data_lv = self.prepare_data_device('data', osd_fsid)
File "/usr/lib/python3/dist-packages/ceph_volume/devices/lvm/prepare.py", line 218, in prepare_data_device
return api.create_lv(
File "/usr/lib/python3/dist-packages/ceph_volume/api/lvm.py", line 971, in create_lv
process.run(command)
File "/usr/lib/python3/dist-packages/ceph_volume/process.py", line 153, in run
raise RuntimeError(msg)
RuntimeError: command returned non-zero exit status: 5
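For reference, the "insufficient free space (0 extents): 4762879 required" failure shown in the tail above occurs because a volume group created by an earlier prepare attempt still occupies /dev/sda2, so the new lvcreate has no extents left to allocate. A minimal sketch of commands that could be used to confirm and clear that leftover LVM state before retrying (the VG name is taken from the log above; note this wipes whatever is on the partition):

# show the stale PV/VG/LV left on the partition by the earlier attempt
pvs /dev/sda2
lvs ceph-1cbcc4b0-3ccc-431b-bff6-3c03b1dc9558

# remove the stale LVM metadata, then clear remaining signatures
vgremove -y ceph-1cbcc4b0-3ccc-431b-bff6-3c03b1dc9558
pvremove -y /dev/sda2
wipefs -a /dev/sda2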
To be honest, the engine type was picked by the default configuration during the installation process. Apologies for my ignorance; I should know what the difference between those two is, but I haven't got that far yet.
Please find my ceph-volume.log below
[2022-05-23 15:11:13,271][ceph_volume.main][INFO ] Running command: ceph-volume --log- path /opt/petasan/log lvm prepare --filestore --data ps-e51bc678-0e20-462a-b537-493485d 153ee-wc-osd.0/main --journal /dev/sda1
[2022-05-23 15:11:13,275][ceph_volume.process][INFO ] Running command: /usr/bin/lsblk -plno KNAME,NAME,TYPE
[2022-05-23 15:11:13,285][ceph_volume.process][INFO ] stdout /dev/sda /dev/sda disk
[2022-05-23 15:11:13,285][ceph_volume.process][INFO ] stdout /dev/sda1 /dev/sda1 part
[2022-05-23 15:11:13,285][ceph_volume.process][INFO ] stdout /dev/sda2 /dev/sda2 part
[2022-05-23 15:11:13,285][ceph_volume.process][INFO ] stdout /dev/sdb /dev/sdb disk
[2022-05-23 15:11:13,285][ceph_volume.process][INFO ] stdout /dev/sdb1 /dev/sdb1 part
[2022-05-23 15:11:13,285][ceph_volume.process][INFO ] stdout /dev/sdb2 /dev/sdb2 part
[2022-05-23 15:11:13,285][ceph_volume.process][INFO ] stdout /dev/sdb3 /dev/sdb3 part
[2022-05-23 15:11:13,285][ceph_volume.process][INFO ] stdout /dev/sdb4 /dev/sdb4 part
[2022-05-23 15:11:13,285][ceph_volume.process][INFO ] stdout /dev/sdb5 /dev/sdb5 part
[2022-05-23 15:11:13,285][ceph_volume.process][INFO ] stdout /dev/sdc /dev/sdc disk
[2022-05-23 15:11:13,286][ceph_volume.process][INFO ] stdout /dev/sdc1 /dev/sdc1 part
[2022-05-23 15:11:13,286][ceph_volume.process][INFO ] stdout /dev/sdc2 /dev/sdc2 part
[2022-05-23 15:11:13,286][ceph_volume.process][INFO ] stdout /dev/dm-0 /dev/mapper/ps- -e51bc678--0e20--462a--b537--493485d153ee--wc--osd.0-main lvm
[2022-05-23 15:11:13,286][ceph_volume.process][INFO ] stdout /dev/dm-1 /dev/mapper/ps- -e51bc678--0e20--462a--b537--493485d153ee--wc--osd.0-cache_cvol lvm
[2022-05-23 15:11:13,286][ceph_volume.process][INFO ] stdout /dev/dm-2 /dev/mapper/ps- -e51bc678--0e20--462a--b537--493485d153ee--wc--osd.0-main_wcorig lvm
[2022-05-23 15:11:13,294][ceph_volume.process][INFO ] Running command: /usr/sbin/lvs - -noheadings --readonly --separator=";" -a --units=b --nosuffix -S lv_name=main,vg_name= ps-e51bc678-0e20-462a-b537-493485d153ee-wc-osd.0 -o lv_tags,lv_path,lv_name,vg_name,lv_ uuid,lv_size
[2022-05-23 15:11:13,361][ceph_volume.process][INFO ] stdout ";"/dev/ps-e51bc678-0e20- 462a-b537-493485d153ee-wc-osd.0/main";"main";"ps-e51bc678-0e20-462a-b537-493485d153ee-w c-osd.0";"UDViz0-ihtf-t3rP-00lV-WsGt-l0w2-2HqoY0";"21976806588416
[2022-05-23 15:11:13,361][ceph_volume.process][INFO ] Running command: /usr/bin/ceph-b luestore-tool show-label --dev /dev/ps-e51bc678-0e20-462a-b537-493485d153ee-wc-osd.0/ma in
[2022-05-23 15:11:13,455][ceph_volume.process][INFO ] stderr unable to read label for /dev/ps-e51bc678-0e20-462a-b537-493485d153ee-wc-osd.0/main: (2) No such file or directo ry
[2022-05-23 15:11:13,456][ceph_volume.process][INFO ] Running command: /usr/bin/ceph-b luestore-tool show-label --dev /dev/ps-e51bc678-0e20-462a-b537-493485d153ee-wc-osd.0/ma in
[2022-05-23 15:11:13,485][ceph_volume.process][INFO ] stderr unable to read label for /dev/ps-e51bc678-0e20-462a-b537-493485d153ee-wc-osd.0/main: (2) No such file or directo ry
[2022-05-23 15:11:13,485][ceph_volume.process][INFO ] Running command: /usr/bin/udevad m info --query=property /dev/ps-e51bc678-0e20-462a-b537-493485d153ee-wc-osd.0/main
[2022-05-23 15:11:13,489][ceph_volume.process][INFO ] stdout DEVPATH=/devices/virtual/ block/dm-0
[2022-05-23 15:11:13,489][ceph_volume.process][INFO ] stdout DEVNAME=/dev/dm-0
[2022-05-23 15:11:13,489][ceph_volume.process][INFO ] stdout DEVTYPE=disk
[2022-05-23 15:11:13,489][ceph_volume.process][INFO ] stdout MAJOR=254
[2022-05-23 15:11:13,489][ceph_volume.process][INFO ] stdout MINOR=0
[2022-05-23 15:11:13,489][ceph_volume.process][INFO ] stdout SUBSYSTEM=block
[2022-05-23 15:11:13,490][ceph_volume.process][INFO ] stdout USEC_INITIALIZED=48176660 44
[2022-05-23 15:11:13,490][ceph_volume.process][INFO ] stdout DM_UDEV_DISABLE_LIBRARY_F ALLBACK_FLAG=1
[2022-05-23 15:11:13,490][ceph_volume.process][INFO ] stdout DM_UDEV_PRIMARY_SOURCE_FL AG=1
[2022-05-23 15:11:13,490][ceph_volume.process][INFO ] stdout DM_ACTIVATION=1
[2022-05-23 15:11:13,490][ceph_volume.process][INFO ] stdout DM_NAME=ps--e51bc678--0e2 0--462a--b537--493485d153ee--wc--osd.0-main
[2022-05-23 15:11:13,490][ceph_volume.process][INFO ] stdout DM_UUID=LVM-kEm7AJau7LtL3 yx2B5Ar3aWeuAyhuOxOUDViz0ihtft3rP00lVWsGtl0w22HqoY0
[2022-05-23 15:11:13,490][ceph_volume.process][INFO ] stdout DM_SUSPENDED=0
[2022-05-23 15:11:13,490][ceph_volume.process][INFO ] stdout DM_UDEV_RULES=1
[2022-05-23 15:11:13,490][ceph_volume.process][INFO ] stdout DM_UDEV_RULES_VSN=2
[2022-05-23 15:11:13,490][ceph_volume.process][INFO ] stdout DM_VG_NAME=ps-e51bc678-0e 20-462a-b537-493485d153ee-wc-osd.0
[2022-05-23 15:11:13,490][ceph_volume.process][INFO ] stdout DM_LV_NAME=main
[2022-05-23 15:11:13,490][ceph_volume.process][INFO ] stdout DM_TABLE_STATE=LIVE
[2022-05-23 15:11:13,490][ceph_volume.process][INFO ] stdout DM_STATE=ACTIVE
[2022-05-23 15:11:13,490][ceph_volume.process][INFO ] stdout DEVLINKS=/dev/disk/by-id/ dm-name-ps--e51bc678--0e20--462a--b537--493485d153ee--wc--osd.0-main /dev/disk/by-id/dm -uuid-LVM-kEm7AJau7LtL3yx2B5Ar3aWeuAyhuOxOUDViz0ihtft3rP00lVWsGtl0w22HqoY0 /dev/ps-e51b c678-0e20-462a-b537-493485d153ee-wc-osd.0/main /dev/mapper/ps--e51bc678--0e20--462a--b5 37--493485d153ee--wc--osd.0-main
[2022-05-23 15:11:13,491][ceph_volume.process][INFO ] stdout TAGS=:systemd:
[2022-05-23 15:11:13,491][ceph_volume.process][INFO ] Running command: /usr/sbin/lvs - -noheadings --readonly --separator=";" -a --units=b --nosuffix -S lv_name=main,vg_name= ps-e51bc678-0e20-462a-b537-493485d153ee-wc-osd.0 -o lv_tags,lv_path,lv_name,vg_name,lv_ uuid,lv_size
[2022-05-23 15:11:13,545][ceph_volume.process][INFO ] stdout ";"/dev/ps-e51bc678-0e20- 462a-b537-493485d153ee-wc-osd.0/main";"main";"ps-e51bc678-0e20-462a-b537-493485d153ee-w c-osd.0";"UDViz0-ihtf-t3rP-00lV-WsGt-l0w2-2HqoY0";"21976806588416
[2022-05-23 15:11:13,545][ceph_volume.api.lvm][WARNING] device is not part of ceph: </d ev/ps-e51bc678-0e20-462a-b537-493485d153ee-wc-osd.0/main>
[2022-05-23 15:11:13,546][ceph_volume.process][INFO ] Running command: /usr/bin/ceph-a uthtool --gen-print-key
[2022-05-23 15:11:13,567][ceph_volume.process][INFO ] stdout AQABlotiXS23IRAAYIxkLyUad ewf5fY4RYVbgA==
[2022-05-23 15:11:13,567][ceph_volume.process][INFO ] Running command: /usr/bin/ceph - -cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.ke yring -i - osd new b486d663-8fa0-4af2-af12-07d9a917f452
[2022-05-23 15:11:14,108][ceph_volume.process][INFO ] stdout 0
[2022-05-23 15:11:14,108][ceph_volume.process][INFO ] stderr 2022-05-23T15:11:13.692+0 100 7f0808ffa700 -1 auth: unable to find a keyring on /etc/ceph/ceph.client.bootstrap-o sd.keyring,/etc/ceph/ceph.keyring,/etc/ceph/keyring,/etc/ceph/keyring.bin,: (2) No such file or directory
[2022-05-23 15:11:14,108][ceph_volume.process][INFO ] stderr 2022-05-23T15:11:13.692+0 100 7f0808ffa700 -1 AuthRegistry(0x7f08040595f0) no keyring found at /etc/ceph/ceph.cli ent.bootstrap-osd.keyring,/etc/ceph/ceph.keyring,/etc/ceph/keyring,/etc/ceph/keyring.bi n,, disabling cephx
[2022-05-23 15:11:14,108][ceph_volume.process][INFO ] Running command: /usr/bin/lsblk --nodeps -P -o NAME,KNAME,MAJ:MIN,FSTYPE,MOUNTPOINT,LABEL,UUID,RO,RM,MODEL,SIZE,STATE,O WNER,GROUP,MODE,ALIGNMENT,PHY-SEC,LOG-SEC,ROTA,SCHED,TYPE,DISC-ALN,DISC-GRAN,DISC-MAX,D ISC-ZERO,PKNAME,PARTLABEL /dev/sda1
[2022-05-23 15:11:14,113][ceph_volume.process][INFO ] stdout NAME="sda1" KNAME="sda1" MAJ:MIN="8:1" FSTYPE="" MOUNTPOINT="" LABEL="" UUID="" RO="0" RM="0" MODEL="" SIZE="20G " STATE="" OWNER="root" GROUP="disk" MODE="brw-rw----" ALIGNMENT="0" PHY-SEC="4096" LOG -SEC="512" ROTA="1" SCHED="cfq" TYPE="part" DISC-ALN="0" DISC-GRAN="0B" DISC-MAX="0B" D ISC-ZERO="0" PKNAME="sda" PARTLABEL="ceph-journal"
[2022-05-23 15:11:14,113][ceph_volume.process][INFO ] Running command: /usr/sbin/blkid -s PARTUUID -o value /dev/sda1
[2022-05-23 15:11:14,119][ceph_volume.process][INFO ] stdout c081bde1-8e12-43f5-aeaa-5 291fbbabf8f
[2022-05-23 15:11:14,119][ceph_volume.process][INFO ] Running command: /usr/sbin/lvs - -noheadings --readonly --separator=";" -a --units=b --nosuffix -S lv_name=main,vg_name= ps-e51bc678-0e20-462a-b537-493485d153ee-wc-osd.0 -o lv_tags,lv_path,lv_name,vg_name,lv_ uuid,lv_size
[2022-05-23 15:11:14,177][ceph_volume.process][INFO ] stdout ";"/dev/ps-e51bc678-0e20- 462a-b537-493485d153ee-wc-osd.0/main";"main";"ps-e51bc678-0e20-462a-b537-493485d153ee-w c-osd.0";"UDViz0-ihtf-t3rP-00lV-WsGt-l0w2-2HqoY0";"21976806588416
[2022-05-23 15:11:14,177][ceph_volume.process][INFO ] Running command: /usr/sbin/lvcha nge --addtag ceph.osd_fsid=b486d663-8fa0-4af2-af12-07d9a917f452 --addtag ceph.osd_id=0 --addtag ceph.cluster_fsid=e51bc678-0e20-462a-b537-493485d153ee --addtag ceph.cluster_n ame=ceph --addtag ceph.crush_device_class=None --addtag ceph.osdspec_affinity= --addtag ceph.cephx_lockbox_secret= --addtag ceph.encrypted=0 --addtag ceph.type=data --addtag ceph.vdo=0 --addtag ceph.journal_uuid=c081bde1-8e12-43f5-aeaa-5291fbbabf8f --addtag cep h.journal_device=/dev/sda1 --addtag ceph.data_device=/dev/ps-e51bc678-0e20-462a-b537-49 3485d153ee-wc-osd.0/main --addtag ceph.data_uuid=UDViz0-ihtf-t3rP-00lV-WsGt-l0w2-2HqoY0 /dev/ps-e51bc678-0e20-462a-b537-493485d153ee-wc-osd.0/main
[2022-05-23 15:11:14,265][ceph_volume.process][INFO ] stdout Logical volume ps-e51bc67 8-0e20-462a-b537-493485d153ee-wc-osd.0/main changed.
[2022-05-23 15:11:14,266][ceph_volume.process][INFO ] Running command: /usr/bin/ceph-a uthtool --gen-print-key
[2022-05-23 15:11:14,286][ceph_volume.process][INFO ] stdout AQAClotinPUCERAA5p/tE3iZe oBJJzKnHFYOYQ==
[2022-05-23 15:11:14,288][ceph_volume.process][INFO ] Running command: /usr/sbin/mkfs -t xfs -f -i size=2048 /dev/ps-e51bc678-0e20-462a-b537-493485d153ee-wc-osd.0/main
[2022-05-23 15:11:20,843][ceph_volume.process][INFO ] stdout meta-data=/dev/ps-e51bc67 8-0e20-462a-b537-493485d153ee-wc-osd.0/main isize=2048 agcount=20, agsize=268435455 b lks
= sectsz=4096 attr=2, projid32bit=1
= crc=1 finobt=1, sparse=1, rmapbt=0
= reflink=1
data = bsize=4096 blocks=5365431296, imaxpct=5
= sunit=0 swidth=0 blks
naming =version 2 bsize=4096 ascii-ci=0, ftype=1
log =internal log bsize=4096 blocks=521728, version=2
= sectsz=4096 sunit=1 blks, lazy-count=1
realtime =none extsz=4096 blocks=0, rtextents=0
[2022-05-23 15:11:20,844][ceph_volume.process][INFO ] Running command: /usr/bin/mount -t xfs -o rw,noatime,inode64 /dev/ps-e51bc678-0e20-462a-b537-493485d153ee-wc-osd.0/main /var/lib/ceph/osd/ceph-0
[2022-05-23 15:11:20,878][ceph_volume.process][INFO ] stderr mount: /var/lib/ceph/osd/ ceph-0: mount(2) system call failed: Operation not supported.
[2022-05-23 15:11:20,878][ceph_volume.devices.lvm.prepare][ERROR ] lvm prepare was unab le to complete
Traceback (most recent call last):
File "/usr/lib/python3/dist-packages/ceph_volume/devices/lvm/prepare.py", line 252, i n safe_prepare
self.prepare()
File "/usr/lib/python3/dist-packages/ceph_volume/decorators.py", line 16, in is_root
return func(*a, **kw)
File "/usr/lib/python3/dist-packages/ceph_volume/devices/lvm/prepare.py", line 346, i n prepare
prepare_filestore(
File "/usr/lib/python3/dist-packages/ceph_volume/devices/lvm/prepare.py", line 65, in prepare_filestore
prepare_utils.mount_osd(device, osd_id, is_vdo=is_vdo)
File "/usr/lib/python3/dist-packages/ceph_volume/util/prepare.py", line 320, in mount _osd
process.run(command)
File "/usr/lib/python3/dist-packages/ceph_volume/process.py", line 153, in run
raise RuntimeError(msg)
RuntimeError: command returned non-zero exit status: 32
[2022-05-23 15:11:20,880][ceph_volume.devices.lvm.prepare][INFO ] will rollback OSD ID creation
[2022-05-23 15:11:20,880][ceph_volume.process][INFO ] Running command: /usr/bin/ceph - -cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.ke yring osd purge-new osd.0 --yes-i-really-mean-it
[2022-05-23 15:11:21,019][ceph_volume.process][INFO ] stderr 2022-05-23T15:11:21.004+0 100 7fc33e3ba700 -1 auth: unable to find a keyring on /etc/ceph/ceph.client.bootstrap-o sd.keyring,/etc/ceph/ceph.keyring,/etc/ceph/keyring,/etc/ceph/keyring.bin,: (2) No such file or directory
2022-05-23T15:11:21.004+0100 7fc33e3ba700 -1 AuthRegistry(0x7fc3380595f0) no keyring fo und at /etc/ceph/ceph.client.bootstrap-osd.keyring,/etc/ceph/ceph.keyring,/etc/ceph/key ring,/etc/ceph/keyring.bin,, disabling cephx
[2022-05-23 15:11:21,409][ceph_volume.process][INFO ] stderr purged osd.0
[2022-05-23 15:11:21,421][ceph_volume][ERROR ] exception caught by decorator
Traceback (most recent call last):
File "/usr/lib/python3/dist-packages/ceph_volume/decorators.py", line 59, in newfunc
return f(*a, **kw)
File "/usr/lib/python3/dist-packages/ceph_volume/main.py", line 152, in main
terminal.dispatch(self.mapper, subcommand_args)
File "/usr/lib/python3/dist-packages/ceph_volume/terminal.py", line 194, in dispatch
instance.main()
File "/usr/lib/python3/dist-packages/ceph_volume/devices/lvm/main.py", line 46, in ma in
terminal.dispatch(self.mapper, self.argv)
File "/usr/lib/python3/dist-packages/ceph_volume/terminal.py", line 194, in dispatch
instance.main()
File "/usr/lib/python3/dist-packages/ceph_volume/devices/lvm/prepare.py", line 441, i n main
self.safe_prepare()
File "/usr/lib/python3/dist-packages/ceph_volume/devices/lvm/prepare.py", line 252, i n safe_prepare
self.prepare()
File "/usr/lib/python3/dist-packages/ceph_volume/decorators.py", line 16, in is_root
return func(*a, **kw)
File "/usr/lib/python3/dist-packages/ceph_volume/devices/lvm/prepare.py", line 346, i n prepare
prepare_filestore(
File "/usr/lib/python3/dist-packages/ceph_volume/devices/lvm/prepare.py", line 65, in prepare_filestore
prepare_utils.mount_osd(device, osd_id, is_vdo=is_vdo)
File "/usr/lib/python3/dist-packages/ceph_volume/util/prepare.py", line 320, in mount _osd
process.run(command)
File "/usr/lib/python3/dist-packages/ceph_volume/process.py", line 153, in run
raise RuntimeError(msg)
RuntimeError: command returned non-zero exit status: 32
[2022-05-24 13:36:17,183][ceph_volume.main][INFO ] Running command: ceph-volume --log- path /opt/petasan/log lvm prepare --filestore --data /dev/sda2 --journal /dev/sda1
[2022-05-24 13:36:17,186][ceph_volume.process][INFO ] Running command: /usr/bin/lsblk -plno KNAME,NAME,TYPE
[2022-05-24 13:36:17,192][ceph_volume.process][INFO ] stdout /dev/sda /dev/sda disk
[2022-05-24 13:36:17,192][ceph_volume.process][INFO ] stdout /dev/sda1 /dev/sda1 part
[2022-05-24 13:36:17,192][ceph_volume.process][INFO ] stdout /dev/sda2 /dev/sda2 part
[2022-05-24 13:36:17,192][ceph_volume.process][INFO ] stdout /dev/sdb /dev/sdb disk
[2022-05-24 13:36:17,192][ceph_volume.process][INFO ] stdout /dev/sdb1 /dev/sdb1 part
[2022-05-24 13:36:17,192][ceph_volume.process][INFO ] stdout /dev/sdb2 /dev/sdb2 part
[2022-05-24 13:36:17,192][ceph_volume.process][INFO ] stdout /dev/sdb3 /dev/sdb3 part
[2022-05-24 13:36:17,192][ceph_volume.process][INFO ] stdout /dev/sdb4 /dev/sdb4 part
[2022-05-24 13:36:17,192][ceph_volume.process][INFO ] stdout /dev/sdb5 /dev/sdb5 part
[2022-05-24 13:36:17,192][ceph_volume.process][INFO ] stdout /dev/sdc /dev/sdc disk
[2022-05-24 13:36:17,193][ceph_volume.process][INFO ] stdout /dev/sdc1 /dev/sdc1 part
[2022-05-24 13:36:17,193][ceph_volume.process][INFO ] stdout /dev/sdc2 /dev/sdc2 part
[2022-05-24 13:36:17,201][ceph_volume.process][INFO ] Running command: /usr/sbin/lvs - -noheadings --readonly --separator=";" -a --units=b --nosuffix -S lv_path=/dev/sda2 -o lv_tags,lv_path,lv_name,vg_name,lv_uuid,lv_size
[2022-05-24 13:36:17,373][ceph_volume.process][INFO ] Running command: /usr/bin/lsblk --nodeps -P -o NAME,KNAME,MAJ:MIN,FSTYPE,MOUNTPOINT,LABEL,UUID,RO,RM,MODEL,SIZE,STATE,O WNER,GROUP,MODE,ALIGNMENT,PHY-SEC,LOG-SEC,ROTA,SCHED,TYPE,DISC-ALN,DISC-GRAN,DISC-MAX,D ISC-ZERO,PKNAME,PARTLABEL /dev/sda2
[2022-05-24 13:36:17,378][ceph_volume.process][INFO ] stdout NAME="sda2" KNAME="sda2" MAJ:MIN="8:2" FSTYPE="" MOUNTPOINT="" LABEL="" UUID="" RO="0" RM="0" MODEL="" SIZE="18. 2T" STATE="" OWNER="root" GROUP="disk" MODE="brw-rw----" ALIGNMENT="0" PHY-SEC="4096" L OG-SEC="512" ROTA="1" SCHED="cfq" TYPE="part" DISC-ALN="0" DISC-GRAN="0B" DISC-MAX="0B" DISC-ZERO="0" PKNAME="sda" PARTLABEL="ceph-data"
[2022-05-24 13:36:17,378][ceph_volume.process][INFO ] Running command: /usr/sbin/blkid -p /dev/sda2
[2022-05-24 13:36:17,384][ceph_volume.process][INFO ] stdout /dev/sda2: PART_ENTRY_SCH EME="gpt" PART_ENTRY_NAME="ceph-data" PART_ENTRY_UUID="076b9d21-1e26-4751-b602-2b108d59 e8df" PART_ENTRY_TYPE="0fc63daf-8483-4772-8e79-3d69d8477de4" PART_ENTRY_NUMBER="2" PART _ENTRY_OFFSET="41945088" PART_ENTRY_SIZE="39017510879" PART_ENTRY_DISK="8:0"
[2022-05-24 13:36:17,385][ceph_volume.process][INFO ] Running command: /usr/sbin/pvs - -noheadings --readonly --units=b --nosuffix --separator=";" -o vg_name,pv_count,lv_coun t,vg_attr,vg_extent_count,vg_free_count,vg_extent_size /dev/sda2
[2022-05-24 13:36:17,437][ceph_volume.process][INFO ] stderr Failed to find physical v olume "/dev/sda2".
[2022-05-24 13:36:17,438][ceph_volume.process][INFO ] Running command: /usr/bin/ceph-b luestore-tool show-label --dev /dev/sda2
[2022-05-24 13:36:17,531][ceph_volume.process][INFO ] stderr unable to read label for /dev/sda2: (2) No such file or directory
[2022-05-24 13:36:17,531][ceph_volume.process][INFO ] Running command: /usr/bin/ceph-b luestore-tool show-label --dev /dev/sda2
[2022-05-24 13:36:17,560][ceph_volume.process][INFO ] stderr unable to read label for /dev/sda2: (2) No such file or directory
[2022-05-24 13:36:17,561][ceph_volume.process][INFO ] Running command: /usr/bin/udevad m info --query=property /dev/sda2
[2022-05-24 13:36:17,565][ceph_volume.process][INFO ] stdout DEVPATH=/devices/pci0000: 00/0000:00:03.2/0000:06:00.0/host0/target0:2:0/0:2:0:0/block/sda/sda2
[2022-05-24 13:36:17,565][ceph_volume.process][INFO ] stdout DEVNAME=/dev/sda2
[2022-05-24 13:36:17,565][ceph_volume.process][INFO ] stdout DEVTYPE=partition
[2022-05-24 13:36:17,565][ceph_volume.process][INFO ] stdout PARTN=2
[2022-05-24 13:36:17,565][ceph_volume.process][INFO ] stdout PARTNAME=ceph-data
[2022-05-24 13:36:17,566][ceph_volume.process][INFO ] stdout MAJOR=8
[2022-05-24 13:36:17,566][ceph_volume.process][INFO ] stdout MINOR=2
[2022-05-24 13:36:17,566][ceph_volume.process][INFO ] stdout SUBSYSTEM=block
[2022-05-24 13:36:17,566][ceph_volume.process][INFO ] stdout USEC_INITIALIZED=78736261 179
[2022-05-24 13:36:17,566][ceph_volume.process][INFO ] stdout SCSI_TPGS=0
[2022-05-24 13:36:17,566][ceph_volume.process][INFO ] stdout SCSI_TYPE=disk
[2022-05-24 13:36:17,566][ceph_volume.process][INFO ] stdout SCSI_VENDOR=LSI
[2022-05-24 13:36:17,566][ceph_volume.process][INFO ] stdout SCSI_VENDOR_ENC=LSI\x20\x 20\x20\x20\x20
[2022-05-24 13:36:17,566][ceph_volume.process][INFO ] stdout SCSI_MODEL=MR9266-4i
[2022-05-24 13:36:17,566][ceph_volume.process][INFO ] stdout SCSI_MODEL_ENC=MR9266-4i\ x20\x20\x20\x20\x20\x20\x20
[2022-05-24 13:36:17,566][ceph_volume.process][INFO ] stdout SCSI_REVISION=3.27
[2022-05-24 13:36:17,566][ceph_volume.process][INFO ] stdout SCSI_IDENT_SERIAL=00e9876 7098a591e2a40d8ec06b00506
[2022-05-24 13:36:17,566][ceph_volume.process][INFO ] stdout SCSI_IDENT_LUN_NAA_REGEXT =600605b006ecd8402a1e598a096787e9
[2022-05-24 13:36:17,566][ceph_volume.process][INFO ] stdout ID_SCSI=1
[2022-05-24 13:36:17,567][ceph_volume.process][INFO ] stdout ID_VENDOR=LSI
[2022-05-24 13:36:17,567][ceph_volume.process][INFO ] stdout ID_VENDOR_ENC=LSI\x20\x20 \x20\x20\x20
[2022-05-24 13:36:17,567][ceph_volume.process][INFO ] stdout ID_MODEL=MR9266-4i
[2022-05-24 13:36:17,567][ceph_volume.process][INFO ] stdout ID_MODEL_ENC=MR9266-4i\x2 0\x20\x20\x20\x20\x20\x20
[2022-05-24 13:36:17,567][ceph_volume.process][INFO ] stdout ID_REVISION=3.27
[2022-05-24 13:36:17,567][ceph_volume.process][INFO ] stdout ID_TYPE=disk
[2022-05-24 13:36:17,567][ceph_volume.process][INFO ] stdout ID_WWN=0x600605b006ecd840 2a1e598a096787e9
[2022-05-24 13:36:17,567][ceph_volume.process][INFO ] stdout ID_WWN_WITH_EXTENSION=0x6 00605b006ecd8402a1e598a096787e9
[2022-05-24 13:36:17,567][ceph_volume.process][INFO ] stdout ID_BUS=scsi
[2022-05-24 13:36:17,567][ceph_volume.process][INFO ] stdout ID_SERIAL=3600605b006ecd8 402a1e598a096787e9
[2022-05-24 13:36:17,567][ceph_volume.process][INFO ] stdout ID_SERIAL_SHORT=600605b00 6ecd8402a1e598a096787e9
[2022-05-24 13:36:17,567][ceph_volume.process][INFO ] stdout DM_MULTIPATH_DEVICE_PATH= 0
[2022-05-24 13:36:17,567][ceph_volume.process][INFO ] stdout ID_SCSI_INQUIRY=1
[2022-05-24 13:36:17,567][ceph_volume.process][INFO ] stdout ID_PATH=pci-0000:06:00.0- scsi-0:2:0:0
[2022-05-24 13:36:17,568][ceph_volume.process][INFO ] stdout ID_PATH_TAG=pci-0000_06_0 0_0-scsi-0_2_0_0
[2022-05-24 13:36:17,568][ceph_volume.process][INFO ] stdout ID_PART_TABLE_UUID=fec0fa a0-0d54-4e19-a330-eaf777dd4911
[2022-05-24 13:36:17,568][ceph_volume.process][INFO ] stdout ID_PART_TABLE_TYPE=gpt
[2022-05-24 13:36:17,568][ceph_volume.process][INFO ] stdout ID_PART_ENTRY_SCHEME=gpt
[2022-05-24 13:36:17,568][ceph_volume.process][INFO ] stdout ID_PART_ENTRY_NAME=ceph-d ata
[2022-05-24 13:36:17,568][ceph_volume.process][INFO ] stdout ID_PART_ENTRY_UUID=076b9d 21-1e26-4751-b602-2b108d59e8df
[2022-05-24 13:36:17,568][ceph_volume.process][INFO ] stdout ID_PART_ENTRY_TYPE=0fc63d af-8483-4772-8e79-3d69d8477de4
[2022-05-24 13:36:17,568][ceph_volume.process][INFO ] stdout ID_PART_ENTRY_NUMBER=2
[2022-05-24 13:36:17,568][ceph_volume.process][INFO ] stdout ID_PART_ENTRY_OFFSET=4194 5088
[2022-05-24 13:36:17,568][ceph_volume.process][INFO ] stdout ID_PART_ENTRY_SIZE=390175 10879
[2022-05-24 13:36:17,568][ceph_volume.process][INFO ] stdout ID_PART_ENTRY_DISK=8:0
[2022-05-24 13:36:17,568][ceph_volume.process][INFO ] stdout DEVLINKS=/dev/disk/by-par tuuid/076b9d21-1e26-4751-b602-2b108d59e8df /dev/disk/by-path/pci-0000:06:00.0-scsi-0:2: 0:0-part2 /dev/disk/by-partlabel/ceph-data /dev/disk/by-id/wwn-0x600605b006ecd8402a1e59 8a096787e9-part2 /dev/disk/by-id/scsi-3600605b006ecd8402a1e598a096787e9-part2 /dev/disk /by-id/scsi-SLSI_MR9266-4i_00e98767098a591e2a40d8ec06b00506-part2
[2022-05-24 13:36:17,568][ceph_volume.process][INFO ] stdout TAGS=:systemd:
[2022-05-24 13:36:17,569][ceph_volume.api.lvm][WARNING] device is not part of ceph: Non e
[2022-05-24 13:36:17,570][ceph_volume.process][INFO ] Running command: /usr/bin/ceph-a uthtool --gen-print-key
[2022-05-24 13:36:17,592][ceph_volume.process][INFO ] stdout AQBB0Yxit6w1IxAAmB50woaLj CcUE32615Vd2A==
[2022-05-24 13:36:17,593][ceph_volume.process][INFO ] Running command: /usr/bin/ceph - -cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.ke yring -i - osd new 55f50f2f-d558-4fc4-be4c-4378bf094232
[2022-05-24 13:36:18,150][ceph_volume.process][INFO ] stdout 0
[2022-05-24 13:36:18,151][ceph_volume.process][INFO ] stderr 2022-05-24T13:36:17.719+0 100 7f8d7a3f2700 -1 auth: unable to find a keyring on /etc/ceph/ceph.client.bootstrap-o sd.keyring,/etc/ceph/ceph.keyring,/etc/ceph/keyring,/etc/ceph/keyring.bin,: (2) No such file or directory
[2022-05-24 13:36:18,151][ceph_volume.process][INFO ] stderr 2022-05-24T13:36:17.719+0 100 7f8d7a3f2700 -1 AuthRegistry(0x7f8d740595f0) no keyring found at /etc/ceph/ceph.cli ent.bootstrap-osd.keyring,/etc/ceph/ceph.keyring,/etc/ceph/keyring,/etc/ceph/keyring.bi n,, disabling cephx
[2022-05-24 13:36:18,151][ceph_volume.process][INFO ] Running command: /usr/bin/lsblk --nodeps -P -o NAME,KNAME,MAJ:MIN,FSTYPE,MOUNTPOINT,LABEL,UUID,RO,RM,MODEL,SIZE,STATE,O WNER,GROUP,MODE,ALIGNMENT,PHY-SEC,LOG-SEC,ROTA,SCHED,TYPE,DISC-ALN,DISC-GRAN,DISC-MAX,D ISC-ZERO,PKNAME,PARTLABEL /dev/sda1
[2022-05-24 13:36:18,156][ceph_volume.process][INFO ] stdout NAME="sda1" KNAME="sda1" MAJ:MIN="8:1" FSTYPE="" MOUNTPOINT="" LABEL="" UUID="" RO="0" RM="0" MODEL="" SIZE="20G " STATE="" OWNER="root" GROUP="disk" MODE="brw-rw----" ALIGNMENT="0" PHY-SEC="4096" LOG -SEC="512" ROTA="1" SCHED="cfq" TYPE="part" DISC-ALN="0" DISC-GRAN="0B" DISC-MAX="0B" D ISC-ZERO="0" PKNAME="sda" PARTLABEL="ceph-journal"
[2022-05-24 13:36:18,156][ceph_volume.process][INFO ] Running command: /usr/sbin/blkid -s PARTUUID -o value /dev/sda1
[2022-05-24 13:36:18,184][ceph_volume.process][INFO ] stdout 791fa4c5-5eb5-4eef-8ec4-8 14973059c10
[2022-05-24 13:36:18,184][ceph_volume.process][INFO ] Running command: /usr/bin/lsblk --nodeps -P -o NAME,KNAME,MAJ:MIN,FSTYPE,MOUNTPOINT,LABEL,UUID,RO,RM,MODEL,SIZE,STATE,O WNER,GROUP,MODE,ALIGNMENT,PHY-SEC,LOG-SEC,ROTA,SCHED,TYPE,DISC-ALN,DISC-GRAN,DISC-MAX,D ISC-ZERO,PKNAME,PARTLABEL /dev/sda2
[2022-05-24 13:36:18,188][ceph_volume.process][INFO ] stdout NAME="sda2" KNAME="sda2" MAJ:MIN="8:2" FSTYPE="" MOUNTPOINT="" LABEL="" UUID="" RO="0" RM="0" MODEL="" SIZE="18. 2T" STATE="" OWNER="root" GROUP="disk" MODE="brw-rw----" ALIGNMENT="0" PHY-SEC="4096" L OG-SEC="512" ROTA="1" SCHED="cfq" TYPE="part" DISC-ALN="0" DISC-GRAN="0B" DISC-MAX="0B" DISC-ZERO="0" PKNAME="sda" PARTLABEL="ceph-data"
[2022-05-24 13:36:18,189][ceph_volume.devices.lvm.prepare][DEBUG ] data device size: 0. 00 B
[2022-05-24 13:36:18,189][ceph_volume.process][INFO ] Running command: /usr/sbin/pvs - -noheadings --readonly --units=b --nosuffix --separator=";" -o vg_name,pv_count,lv_coun t,vg_attr,vg_extent_count,vg_free_count,vg_extent_size /dev/sda2
[2022-05-24 13:36:18,257][ceph_volume.process][INFO ] stderr Failed to find physical v olume "/dev/sda2".
[2022-05-24 13:36:18,258][ceph_volume.process][INFO ] Running command: /usr/sbin/vgcre ate --force --yes ceph-1cbcc4b0-3ccc-431b-bff6-3c03b1dc9558 /dev/sda2
[2022-05-24 13:36:18,310][ceph_volume.process][INFO ] stdout Physical volume "/dev/sda 2" successfully created.
[2022-05-24 13:36:18,316][ceph_volume.process][INFO ] stdout Volume group "ceph-1cbcc4 b0-3ccc-431b-bff6-3c03b1dc9558" successfully created
[2022-05-24 13:36:18,361][ceph_volume.process][INFO ] Running command: /usr/sbin/vgs - -noheadings --readonly --units=b --nosuffix --separator=";" -S vg_name=ceph-1cbcc4b0-3c cc-431b-bff6-3c03b1dc9558 -o vg_name,pv_count,lv_count,vg_attr,vg_extent_count,vg_free_ count,vg_extent_size
[2022-05-24 13:36:18,429][ceph_volume.process][INFO ] stdout ceph-1cbcc4b0-3ccc-431b-b ff6-3c03b1dc9558";"1";"0";"wz--n-";"4762879";"4762879";"4194304
[2022-05-24 13:36:18,429][ceph_volume.api.lvm][DEBUG ] slots was passed: 1 -> 4762879
[2022-05-24 13:36:18,430][ceph_volume.process][INFO ] Running command: /usr/sbin/lvcre ate --yes -l 4762879 -n osd-data-55f50f2f-d558-4fc4-be4c-4378bf094232 ceph-1cbcc4b0-3cc c-431b-bff6-3c03b1dc9558
[2022-05-24 13:36:18,512][ceph_volume.process][INFO ] stdout Logical volume "osd-data- 55f50f2f-d558-4fc4-be4c-4378bf094232" created.
[2022-05-24 13:36:18,549][ceph_volume.process][INFO ] Running command: /usr/sbin/lvs - -noheadings --readonly --separator=";" -a --units=b --nosuffix -S lv_name=osd-data-55f5 0f2f-d558-4fc4-be4c-4378bf094232,vg_name=ceph-1cbcc4b0-3ccc-431b-bff6-3c03b1dc9558 -o l v_tags,lv_path,lv_name,vg_name,lv_uuid,lv_size
[2022-05-24 13:36:18,601][ceph_volume.process][INFO ] stdout ";"/dev/ceph-1cbcc4b0-3cc c-431b-bff6-3c03b1dc9558/osd-data-55f50f2f-d558-4fc4-be4c-4378bf094232";"osd-data-55f50 f2f-d558-4fc4-be4c-4378bf094232";"ceph-1cbcc4b0-3ccc-431b-bff6-3c03b1dc9558";"rKjfY4-4p ed-b1CW-X5xr-QY8C-RklX-Jdew6J";"19976962441216
[2022-05-24 13:36:18,601][ceph_volume.process][INFO ] Running command: /usr/sbin/lvcha nge --addtag ceph.type=data --addtag ceph.data_device=/dev/ceph-1cbcc4b0-3ccc-431b-bff6 -3c03b1dc9558/osd-data-55f50f2f-d558-4fc4-be4c-4378bf094232 /dev/ceph-1cbcc4b0-3ccc-431 b-bff6-3c03b1dc9558/osd-data-55f50f2f-d558-4fc4-be4c-4378bf094232
[2022-05-24 13:36:18,681][ceph_volume.process][INFO ] stdout Logical volume ceph-1cbcc 4b0-3ccc-431b-bff6-3c03b1dc9558/osd-data-55f50f2f-d558-4fc4-be4c-4378bf094232 changed.
[2022-05-24 13:36:18,681][ceph_volume.process][INFO ] Running command: /usr/sbin/lvcha nge --deltag ceph.type=data --deltag ceph.data_device=/dev/ceph-1cbcc4b0-3ccc-431b-bff6 -3c03b1dc9558/osd-data-55f50f2f-d558-4fc4-be4c-4378bf094232 /dev/ceph-1cbcc4b0-3ccc-431 b-bff6-3c03b1dc9558/osd-data-55f50f2f-d558-4fc4-be4c-4378bf094232
[2022-05-24 13:36:18,749][ceph_volume.process][INFO ] stdout Logical volume ceph-1cbcc 4b0-3ccc-431b-bff6-3c03b1dc9558/osd-data-55f50f2f-d558-4fc4-be4c-4378bf094232 changed.
[2022-05-24 13:36:18,749][ceph_volume.process][INFO ] Running command: /usr/sbin/lvcha nge --addtag ceph.osd_fsid=55f50f2f-d558-4fc4-be4c-4378bf094232 --addtag ceph.osd_id=0 --addtag ceph.cluster_fsid=e51bc678-0e20-462a-b537-493485d153ee --addtag ceph.cluster_n ame=ceph --addtag ceph.crush_device_class=None --addtag ceph.osdspec_affinity= --addtag ceph.cephx_lockbox_secret= --addtag ceph.encrypted=0 --addtag ceph.type=data --addtag ceph.vdo=0 --addtag ceph.journal_uuid=791fa4c5-5eb5-4eef-8ec4-814973059c10 --addtag cep h.journal_device=/dev/sda1 --addtag ceph.data_device=/dev/ceph-1cbcc4b0-3ccc-431b-bff6- 3c03b1dc9558/osd-data-55f50f2f-d558-4fc4-be4c-4378bf094232 --addtag ceph.data_uuid=rKjf Y4-4ped-b1CW-X5xr-QY8C-RklX-Jdew6J /dev/ceph-1cbcc4b0-3ccc-431b-bff6-3c03b1dc9558/osd-d ata-55f50f2f-d558-4fc4-be4c-4378bf094232
[2022-05-24 13:36:18,821][ceph_volume.process][INFO ] stdout Logical volume ceph-1cbcc 4b0-3ccc-431b-bff6-3c03b1dc9558/osd-data-55f50f2f-d558-4fc4-be4c-4378bf094232 changed.
[2022-05-24 13:36:18,822][ceph_volume.process][INFO ] Running command: /usr/bin/ceph-a uthtool --gen-print-key
[2022-05-24 13:36:18,843][ceph_volume.process][INFO ] stdout AQBC0YxixPEpMhAAkNWd0CNrz rih6+2bFB3h3g==
[2022-05-24 13:36:18,844][ceph_volume.process][INFO ] Running command: /usr/sbin/mkfs -t xfs -f -i size=2048 /dev/ceph-1cbcc4b0-3ccc-431b-bff6-3c03b1dc9558/osd-data-55f50f2f -d558-4fc4-be4c-4378bf094232
[2022-05-24 13:36:20,156][ceph_volume.process][INFO ] stdout meta-data=/dev/ceph-1cbcc 4b0-3ccc-431b-bff6-3c03b1dc9558/osd-data-55f50f2f-d558-4fc4-be4c-4378bf094232 isize=204 8 agcount=19, agsize=268435455 blks
= sectsz=4096 attr=2, projid32bit=1
= crc=1 finobt=1, sparse=1, rmapbt=0
= reflink=1
data = bsize=4096 blocks=4877188096, imaxpct=5
= sunit=0 swidth=0 blks
naming =version 2 bsize=4096 ascii-ci=0, ftype=1
log =internal log bsize=4096 blocks=521728, version=2
= sectsz=4096 sunit=1 blks, lazy-count=1
realtime =none extsz=4096 blocks=0, rtextents=0
[2022-05-24 13:36:20,157][ceph_volume.process][INFO ] Running command: /usr/bin/mount -t xfs -o rw,noatime,inode64 /dev/ceph-1cbcc4b0-3ccc-431b-bff6-3c03b1dc9558/osd-data-55 f50f2f-d558-4fc4-be4c-4378bf094232 /var/lib/ceph/osd/ceph-0
[2022-05-24 13:36:20,454][ceph_volume.process][INFO ] stderr mount: /var/lib/ceph/osd/ ceph-0: mount(2) system call failed: Operation not supported.
[2022-05-24 13:36:20,455][ceph_volume.devices.lvm.prepare][ERROR ] lvm prepare was unab le to complete
Traceback (most recent call last):
File "/usr/lib/python3/dist-packages/ceph_volume/devices/lvm/prepare.py", line 252, i n safe_prepare
self.prepare()
File "/usr/lib/python3/dist-packages/ceph_volume/decorators.py", line 16, in is_root
return func(*a, **kw)
File "/usr/lib/python3/dist-packages/ceph_volume/devices/lvm/prepare.py", line 346, i n prepare
prepare_filestore(
File "/usr/lib/python3/dist-packages/ceph_volume/devices/lvm/prepare.py", line 65, in prepare_filestore
prepare_utils.mount_osd(device, osd_id, is_vdo=is_vdo)
File "/usr/lib/python3/dist-packages/ceph_volume/util/prepare.py", line 320, in mount _osd
process.run(command)
File "/usr/lib/python3/dist-packages/ceph_volume/process.py", line 153, in run
raise RuntimeError(msg)
RuntimeError: command returned non-zero exit status: 32
[2022-05-24 13:36:20,456][ceph_volume.devices.lvm.prepare][INFO ] will rollback OSD ID creation
[2022-05-24 13:36:20,457][ceph_volume.process][INFO ] Running command: /usr/bin/ceph - -cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.ke yring osd purge-new osd.0 --yes-i-really-mean-it
[2022-05-24 13:36:20,596][ceph_volume.process][INFO ] stderr 2022-05-24T13:36:20.583+0 100 7f05dbf7d700 -1 auth: unable to find a keyring on /etc/ceph/ceph.client.bootstrap-o sd.keyring,/etc/ceph/ceph.keyring,/etc/ceph/keyring,/etc/ceph/keyring.bin,: (2) No such file or directory
2022-05-24T13:36:20.583+0100 7f05dbf7d700 -1 AuthRegistry(0x7f05d40595f0) no keyring fo und at /etc/ceph/ceph.client.bootstrap-osd.keyring,/etc/ceph/ceph.keyring,/etc/ceph/key ring,/etc/ceph/keyring.bin,, disabling cephx
[2022-05-24 13:36:21,002][ceph_volume.process][INFO ] stderr purged osd.0
[2022-05-24 13:36:21,015][ceph_volume][ERROR ] exception caught by decorator
Traceback (most recent call last):
File "/usr/lib/python3/dist-packages/ceph_volume/decorators.py", line 59, in newfunc
return f(*a, **kw)
File "/usr/lib/python3/dist-packages/ceph_volume/main.py", line 152, in main
terminal.dispatch(self.mapper, subcommand_args)
File "/usr/lib/python3/dist-packages/ceph_volume/terminal.py", line 194, in dispatch
instance.main()
File "/usr/lib/python3/dist-packages/ceph_volume/devices/lvm/main.py", line 46, in ma in
terminal.dispatch(self.mapper, self.argv)
File "/usr/lib/python3/dist-packages/ceph_volume/terminal.py", line 194, in dispatch
instance.main()
File "/usr/lib/python3/dist-packages/ceph_volume/devices/lvm/prepare.py", line 441, i n main
self.safe_prepare()
File "/usr/lib/python3/dist-packages/ceph_volume/devices/lvm/prepare.py", line 252, i n safe_prepare
self.prepare()
File "/usr/lib/python3/dist-packages/ceph_volume/decorators.py", line 16, in is_root
return func(*a, **kw)
File "/usr/lib/python3/dist-packages/ceph_volume/devices/lvm/prepare.py", line 346, i n prepare
prepare_filestore(
File "/usr/lib/python3/dist-packages/ceph_volume/devices/lvm/prepare.py", line 65, in prepare_filestore
prepare_utils.mount_osd(device, osd_id, is_vdo=is_vdo)
File "/usr/lib/python3/dist-packages/ceph_volume/util/prepare.py", line 320, in mount _osd
process.run(command)
File "/usr/lib/python3/dist-packages/ceph_volume/process.py", line 153, in run
raise RuntimeError(msg)
RuntimeError: command returned non-zero exit status: 32
[2022-05-24 15:12:28,502][ceph_volume.main][INFO ] Running command: ceph-volume --log- path /opt/petasan/log lvm activate --filestore 0 55f50f2f-d558-4fc4-be4c-4378bf094232
[2022-05-24 15:12:28,505][ceph_volume.process][INFO ] Running command: /usr/sbin/lvs - -noheadings --readonly --separator=";" -a --units=b --nosuffix -S tags={ceph.osd_id=0,c eph.osd_fsid=55f50f2f-d558-4fc4-be4c-4378bf094232} -o lv_tags,lv_path,lv_name,vg_name,l v_uuid,lv_size
[2022-05-24 15:12:28,569][ceph_volume.process][INFO ] stdout ceph.cephx_lockbox_secret =,ceph.cluster_fsid=e51bc678-0e20-462a-b537-493485d153ee,ceph.cluster_name=ceph,ceph.cr ush_device_class=None,ceph.data_device=/dev/ceph-1cbcc4b0-3ccc-431b-bff6-3c03b1dc9558/o sd-data-55f50f2f-d558-4fc4-be4c-4378bf094232,ceph.data_uuid=rKjfY4-4ped-b1CW-X5xr-QY8C- RklX-Jdew6J,ceph.encrypted=0,ceph.journal_device=/dev/sda1,ceph.journal_uuid=791fa4c5-5 eb5-4eef-8ec4-814973059c10,ceph.osd_fsid=55f50f2f-d558-4fc4-be4c-4378bf094232,ceph.osd_ id=0,ceph.osdspec_affinity=,ceph.type=data,ceph.vdo=0";"/dev/ceph-1cbcc4b0-3ccc-431b-bf f6-3c03b1dc9558/osd-data-55f50f2f-d558-4fc4-be4c-4378bf094232";"osd-data-55f50f2f-d558- 4fc4-be4c-4378bf094232";"ceph-1cbcc4b0-3ccc-431b-bff6-3c03b1dc9558";"rKjfY4-4ped-b1CW-X 5xr-QY8C-RklX-Jdew6J";"19976962441216
[2022-05-24 15:12:28,571][ceph_volume.process][INFO ] Running command: /usr/sbin/blkid -t PARTUUID="791fa4c5-5eb5-4eef-8ec4-814973059c10" -o device
[2022-05-24 15:12:28,645][ceph_volume.process][INFO ] stdout /dev/sda1
[2022-05-24 15:12:28,651][ceph_volume.util.system][INFO ] /dev/ceph-1cbcc4b0-3ccc-431b -bff6-3c03b1dc9558/osd-data-55f50f2f-d558-4fc4-be4c-4378bf094232 was not found as mount ed
[2022-05-24 15:12:28,652][ceph_volume.process][INFO ] Running command: /usr/bin/mount -t xfs -o rw,noatime,inode64 /dev/ceph-1cbcc4b0-3ccc-431b-bff6-3c03b1dc9558/osd-data-55 f50f2f-d558-4fc4-be4c-4378bf094232 /var/lib/ceph/osd/ceph-0
[2022-05-24 15:12:28,698][ceph_volume.process][INFO ] stderr mount: /var/lib/ceph/osd/ ceph-0: mount(2) system call failed: Operation not supported.
[2022-05-24 15:12:28,699][ceph_volume][ERROR ] exception caught by decorator
Traceback (most recent call last):
File "/usr/lib/python3/dist-packages/ceph_volume/decorators.py", line 59, in newfunc
return f(*a, **kw)
File "/usr/lib/python3/dist-packages/ceph_volume/main.py", line 152, in main
terminal.dispatch(self.mapper, subcommand_args)
File "/usr/lib/python3/dist-packages/ceph_volume/terminal.py", line 194, in dispatch
instance.main()
File "/usr/lib/python3/dist-packages/ceph_volume/devices/lvm/main.py", line 46, in ma in
terminal.dispatch(self.mapper, self.argv)
File "/usr/lib/python3/dist-packages/ceph_volume/terminal.py", line 194, in dispatch
instance.main()
File "/usr/lib/python3/dist-packages/ceph_volume/devices/lvm/activate.py", line 370, in main
self.activate(args)
File "/usr/lib/python3/dist-packages/ceph_volume/decorators.py", line 16, in is_root
return func(*a, **kw)
File "/usr/lib/python3/dist-packages/ceph_volume/devices/lvm/activate.py", line 296, in activate
activate_filestore(lvs, args.no_systemd)
File "/usr/lib/python3/dist-packages/ceph_volume/devices/lvm/activate.py", line 74, i n activate_filestore
prepare_utils.mount_osd(source, osd_id, is_vdo=is_vdo)
File "/usr/lib/python3/dist-packages/ceph_volume/util/prepare.py", line 320, in mount _osd
process.run(command)
File "/usr/lib/python3/dist-packages/ceph_volume/process.py", line 153, in run
raise RuntimeError(msg)
RuntimeError: command returned non-zero exit status: 32
[2022-05-24 17:26:14,559][ceph_volume.main][INFO ] Running command: ceph-volume --log- path /opt/petasan/log lvm prepare --filestore --data /dev/sda2 --journal /dev/sda1
[2022-05-24 17:26:14,567][ceph_volume.process][INFO ] Running command: /usr/bin/lsblk -plno KNAME,NAME,TYPE
[2022-05-24 17:26:14,572][ceph_volume.process][INFO ] stdout /dev/sda /dev/sda disk
[2022-05-24 17:26:14,572][ceph_volume.process][INFO ] stdout /dev/sda1 /dev/sda1 part
[2022-05-24 17:26:14,573][ceph_volume.process][INFO ] stdout /dev/sda2 /dev/sda2 part
[2022-05-24 17:26:14,573][ceph_volume.process][INFO ] stdout /dev/sdb /dev/sdb disk
[2022-05-24 17:26:14,573][ceph_volume.process][INFO ] stdout /dev/sdb1 /dev/sdb1 part
[2022-05-24 17:26:14,573][ceph_volume.process][INFO ] stdout /dev/sdb2 /dev/sdb2 part
[2022-05-24 17:26:14,573][ceph_volume.process][INFO ] stdout /dev/sdb3 /dev/sdb3 part
[2022-05-24 17:26:14,573][ceph_volume.process][INFO ] stdout /dev/sdb4 /dev/sdb4 part
[2022-05-24 17:26:14,573][ceph_volume.process][INFO ] stdout /dev/sdb5 /dev/sdb5 part
[2022-05-24 17:26:14,573][ceph_volume.process][INFO ] stdout /dev/sdc /dev/sdc disk
[2022-05-24 17:26:14,573][ceph_volume.process][INFO ] stdout /dev/dm-0 /dev/mapper/cep h--1cbcc4b0--3ccc--431b--bff6--3c03b1dc9558-osd--data--55f50f2f--d558--4fc4--be4c--4378 bf094232 lvm
[2022-05-24 17:26:14,581][ceph_volume.process][INFO ] Running command: /usr/sbin/lvs - -noheadings --readonly --separator=";" -a --units=b --nosuffix -S lv_path=/dev/sda2 -o lv_tags,lv_path,lv_name,vg_name,lv_uuid,lv_size
[2022-05-24 17:26:14,651][ceph_volume.process][INFO ] Running command: /usr/bin/lsblk --nodeps -P -o NAME,KNAME,MAJ:MIN,FSTYPE,MOUNTPOINT,LABEL,UUID,RO,RM,MODEL,SIZE,STATE,O WNER,GROUP,MODE,ALIGNMENT,PHY-SEC,LOG-SEC,ROTA,SCHED,TYPE,DISC-ALN,DISC-GRAN,DISC-MAX,D ISC-ZERO,PKNAME,PARTLABEL /dev/sda2
[2022-05-24 17:26:14,656][ceph_volume.process][INFO ] stdout NAME="sda2" KNAME="sda2" MAJ:MIN="8:2" FSTYPE="LVM2_member" MOUNTPOINT="" LABEL="" UUID="3jkQAm-10vw-K46l-JeFt-H GMa-QNtH-eGjTJU" RO="0" RM="0" MODEL="" SIZE="18.2T" STATE="" OWNER="root" GROUP="disk" MODE="brw-rw----" ALIGNMENT="0" PHY-SEC="4096" LOG-SEC="512" ROTA="1" SCHED="cfq" TYPE ="part" DISC-ALN="0" DISC-GRAN="0B" DISC-MAX="0B" DISC-ZERO="0" PKNAME="sda" PARTLABEL= "ceph-data"
[2022-05-24 17:26:14,656][ceph_volume.process][INFO ] Running command: /usr/sbin/blkid -p /dev/sda2
[2022-05-24 17:26:14,659][ceph_volume.process][INFO ] stdout /dev/sda2: UUID="3jkQAm-1 0vw-K46l-JeFt-HGMa-QNtH-eGjTJU" VERSION="LVM2 001" TYPE="LVM2_member" USAGE="raid" PART _ENTRY_SCHEME="gpt" PART_ENTRY_NAME="ceph-data" PART_ENTRY_UUID="20a881d8-44c1-4d68-8ab 0-e3e8a469baad" PART_ENTRY_TYPE="0fc63daf-8483-4772-8e79-3d69d8477de4" PART_ENTRY_NUMBE R="2" PART_ENTRY_OFFSET="41945088" PART_ENTRY_SIZE="39017510879" PART_ENTRY_DISK="8:0"
[2022-05-24 17:26:14,659][ceph_volume.process][INFO ] Running command: /usr/sbin/pvs - -noheadings --readonly --units=b --nosuffix --separator=";" -o vg_name,pv_count,lv_coun t,vg_attr,vg_extent_count,vg_free_count,vg_extent_size /dev/sda2
[2022-05-24 17:26:14,726][ceph_volume.process][INFO ] stdout ceph-1cbcc4b0-3ccc-431b-b ff6-3c03b1dc9558";"1";"1";"wz--n-";"4762879";"0";"4194304
[2022-05-24 17:26:14,727][ceph_volume.process][INFO ] Running command: /usr/sbin/pvs - -noheadings --readonly --separator=";" -a --units=b --nosuffix -o lv_tags,lv_path,lv_na me,vg_name,lv_uuid,lv_size /dev/sda2
[2022-05-24 17:26:14,782][ceph_volume.process][INFO ] stdout ceph.cephx_lockbox_secret =,ceph.cluster_fsid=e51bc678-0e20-462a-b537-493485d153ee,ceph.cluster_name=ceph,ceph.cr ush_device_class=None,ceph.data_device=/dev/ceph-1cbcc4b0-3ccc-431b-bff6-3c03b1dc9558/o sd-data-55f50f2f-d558-4fc4-be4c-4378bf094232,ceph.data_uuid=rKjfY4-4ped-b1CW-X5xr-QY8C- RklX-Jdew6J,ceph.encrypted=0,ceph.journal_device=/dev/sda1,ceph.journal_uuid=791fa4c5-5 eb5-4eef-8ec4-814973059c10,ceph.osd_fsid=55f50f2f-d558-4fc4-be4c-4378bf094232,ceph.osd_ id=0,ceph.osdspec_affinity=,ceph.type=data,ceph.vdo=0";"/dev/ceph-1cbcc4b0-3ccc-431b-bf f6-3c03b1dc9558/osd-data-55f50f2f-d558-4fc4-be4c-4378bf094232";"osd-data-55f50f2f-d558- 4fc4-be4c-4378bf094232";"ceph-1cbcc4b0-3ccc-431b-bff6-3c03b1dc9558";"rKjfY4-4ped-b1CW-X 5xr-QY8C-RklX-Jdew6J";"19976962441216
[2022-05-24 17:26:14,783][ceph_volume.process][INFO ] Running command: /usr/bin/ceph-b luestore-tool show-label --dev /dev/sda2
[2022-05-24 17:26:14,876][ceph_volume.process][INFO ] stderr unable to read label for /dev/sda2: (2) No such file or directory
[2022-05-24 17:26:14,876][ceph_volume.process][INFO ] Running command: /usr/bin/udevad m info --query=property /dev/sda2
[2022-05-24 17:26:14,880][ceph_volume.process][INFO ] stdout DEVPATH=/devices/pci0000: 00/0000:00:03.2/0000:06:00.0/host0/target0:2:0/0:2:0:0/block/sda/sda2
[2022-05-24 17:26:14,881][ceph_volume.process][INFO ] stdout DEVNAME=/dev/sda2
[2022-05-24 17:26:14,881][ceph_volume.process][INFO ] stdout DEVTYPE=partition
[2022-05-24 17:26:14,881][ceph_volume.process][INFO ] stdout PARTN=2
[2022-05-24 17:26:14,881][ceph_volume.process][INFO ] stdout PARTNAME=ceph-data
[2022-05-24 17:26:14,881][ceph_volume.process][INFO ] stdout MAJOR=8
[2022-05-24 17:26:14,881][ceph_volume.process][INFO ] stdout MINOR=2
[2022-05-24 17:26:14,881][ceph_volume.process][INFO ] stdout SUBSYSTEM=block
[2022-05-24 17:26:14,881][ceph_volume.process][INFO ] stdout USEC_INITIALIZED=74541882 59
[2022-05-24 17:26:14,881][ceph_volume.process][INFO ] stdout SCSI_TPGS=0
[2022-05-24 17:26:14,881][ceph_volume.process][INFO ] stdout SCSI_TYPE=disk
[2022-05-24 17:26:14,881][ceph_volume.process][INFO ] stdout SCSI_VENDOR=LSI
[2022-05-24 17:26:14,881][ceph_volume.process][INFO ] stdout SCSI_VENDOR_ENC=LSI\x20\x 20\x20\x20\x20
[2022-05-24 17:26:14,881][ceph_volume.process][INFO ] stdout SCSI_MODEL=MR9266-4i
[2022-05-24 17:26:14,882][ceph_volume.process][INFO ] stdout SCSI_MODEL_ENC=MR9266-4i\ x20\x20\x20\x20\x20\x20\x20
[2022-05-24 17:26:14,882][ceph_volume.process][INFO ] stdout SCSI_REVISION=3.27
[2022-05-24 17:26:14,882][ceph_volume.process][INFO ] stdout SCSI_IDENT_SERIAL=00e9876 7098a591e2a40d8ec06b00506
[2022-05-24 17:26:14,882][ceph_volume.process][INFO ] stdout SCSI_IDENT_LUN_NAA_REGEXT =600605b006ecd8402a1e598a096787e9
[2022-05-24 17:26:14,882][ceph_volume.process][INFO ] stdout ID_SCSI=1
[2022-05-24 17:26:14,882][ceph_volume.process][INFO ] stdout ID_VENDOR=LSI
[2022-05-24 17:26:14,882][ceph_volume.process][INFO ] stdout ID_VENDOR_ENC=LSI\x20\x20 \x20\x20\x20
[2022-05-24 17:26:14,882][ceph_volume.process][INFO ] stdout ID_MODEL=MR9266-4i
[2022-05-24 17:26:14,882][ceph_volume.process][INFO ] stdout ID_MODEL_ENC=MR9266-4i\x2 0\x20\x20\x20\x20\x20\x20
[2022-05-24 17:26:14,882][ceph_volume.process][INFO ] stdout ID_REVISION=3.27
[2022-05-24 17:26:14,882][ceph_volume.process][INFO ] stdout ID_TYPE=disk
[2022-05-24 17:26:14,882][ceph_volume.process][INFO ] stdout ID_WWN=0x600605b006ecd840 2a1e598a096787e9
[2022-05-24 17:26:14,882][ceph_volume.process][INFO ] stdout ID_WWN_WITH_EXTENSION=0x6 00605b006ecd8402a1e598a096787e9
[2022-05-24 17:26:14,882][ceph_volume.process][INFO ] stdout ID_BUS=scsi
[2022-05-24 17:26:14,883][ceph_volume.process][INFO ] stdout ID_SERIAL=3600605b006ecd8 402a1e598a096787e9
[2022-05-24 17:26:14,883][ceph_volume.process][INFO ] stdout ID_SERIAL_SHORT=600605b00 6ecd8402a1e598a096787e9
[2022-05-24 17:26:14,883][ceph_volume.process][INFO ] stdout DM_MULTIPATH_DEVICE_PATH= 0
[2022-05-24 17:26:14,883][ceph_volume.process][INFO ] stdout ID_SCSI_INQUIRY=1
[2022-05-24 17:26:14,883][ceph_volume.process][INFO ] stdout ID_PATH=pci-0000:06:00.0-scsi-0:2:0:0
[2022-05-24 17:26:14,883][ceph_volume.process][INFO ] stdout ID_PATH_TAG=pci-0000_06_00_0-scsi-0_2_0_0
[2022-05-24 17:26:14,883][ceph_volume.process][INFO ] stdout ID_PART_TABLE_UUID=c0e61588-c7fb-476d-880d-325faa63557b
[2022-05-24 17:26:14,883][ceph_volume.process][INFO ] stdout ID_PART_TABLE_TYPE=gpt
[2022-05-24 17:26:14,883][ceph_volume.process][INFO ] stdout ID_FS_UUID=3jkQAm-10vw-K46l-JeFt-HGMa-QNtH-eGjTJU
[2022-05-24 17:26:14,883][ceph_volume.process][INFO ] stdout ID_FS_UUID_ENC=3jkQAm-10vw-K46l-JeFt-HGMa-QNtH-eGjTJU
[2022-05-24 17:26:14,883][ceph_volume.process][INFO ] stdout ID_FS_VERSION=LVM2 001
[2022-05-24 17:26:14,883][ceph_volume.process][INFO ] stdout ID_FS_TYPE=LVM2_member
[2022-05-24 17:26:14,883][ceph_volume.process][INFO ] stdout ID_FS_USAGE=raid
[2022-05-24 17:26:14,884][ceph_volume.process][INFO ] stdout ID_PART_ENTRY_SCHEME=gpt
[2022-05-24 17:26:14,884][ceph_volume.process][INFO ] stdout ID_PART_ENTRY_NAME=ceph-data
[2022-05-24 17:26:14,884][ceph_volume.process][INFO ] stdout ID_PART_ENTRY_UUID=20a881d8-44c1-4d68-8ab0-e3e8a469baad
[2022-05-24 17:26:14,884][ceph_volume.process][INFO ] stdout ID_PART_ENTRY_TYPE=0fc63daf-8483-4772-8e79-3d69d8477de4
[2022-05-24 17:26:14,884][ceph_volume.process][INFO ] stdout ID_PART_ENTRY_NUMBER=2
[2022-05-24 17:26:14,884][ceph_volume.process][INFO ] stdout ID_PART_ENTRY_OFFSET=41945088
[2022-05-24 17:26:14,884][ceph_volume.process][INFO ] stdout ID_PART_ENTRY_SIZE=39017510879
[2022-05-24 17:26:14,884][ceph_volume.process][INFO ] stdout ID_PART_ENTRY_DISK=8:0
[2022-05-24 17:26:14,884][ceph_volume.process][INFO ] stdout SYSTEMD_READY=1
[2022-05-24 17:26:14,884][ceph_volume.process][INFO ] stdout SYSTEMD_ALIAS=/dev/block/8:2
[2022-05-24 17:26:14,884][ceph_volume.process][INFO ] stdout SYSTEMD_WANTS=lvm2-pvscan@8:2.service
[2022-05-24 17:26:14,884][ceph_volume.process][INFO ] stdout DEVLINKS=/dev/disk/by-id/scsi-3600605b006ecd8402a1e598a096787e9-part2 /dev/disk/by-partlabel/ceph-data /dev/disk/by-partuuid/20a881d8-44c1-4d68-8ab0-e3e8a469baad /dev/disk/by-id/lvm-pv-uuid-3jkQAm-10vw-K46l-JeFt-HGMa-QNtH-eGjTJU /dev/disk/by-id/scsi-SLSI_MR9266-4i_00e98767098a591e2a40d8ec06b00506-part2 /dev/disk/by-id/wwn-0x600605b006ecd8402a1e598a096787e9-part2 /dev/disk/by-path/pci-0000:06:00.0-scsi-0:2:0:0-part2
[2022-05-24 17:26:14,884][ceph_volume.process][INFO ] stdout TAGS=:systemd:
[2022-05-24 17:26:14,885][ceph_volume.api.lvm][WARNING] device is not part of ceph: None
[2022-05-24 17:26:14,886][ceph_volume.process][INFO ] Running command: /usr/bin/ceph-authtool --gen-print-key
[2022-05-24 17:26:14,908][ceph_volume.process][INFO ] stdout AQAmB41i9JUNNhAAMGbvb2oD/87cxmVAwTUtOg==
[2022-05-24 17:26:14,909][ceph_volume.process][INFO ] Running command: /usr/bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring -i - osd new 85f574b4-282f-428b-9324-3a5feb28a4e3
[2022-05-24 17:26:15,464][ceph_volume.process][INFO ] stdout 0
[2022-05-24 17:26:15,464][ceph_volume.process][INFO ] stderr 2022-05-24T17:26:15.033+0100 7f212424b700 -1 auth: unable to find a keyring on /etc/ceph/ceph.client.bootstrap-osd.keyring,/etc/ceph/ceph.keyring,/etc/ceph/keyring,/etc/ceph/keyring.bin,: (2) No such file or directory
[2022-05-24 17:26:15,465][ceph_volume.process][INFO ] stderr 2022-05-24T17:26:15.033+0100 7f212424b700 -1 AuthRegistry(0x7f211c0595f0) no keyring found at /etc/ceph/ceph.client.bootstrap-osd.keyring,/etc/ceph/ceph.keyring,/etc/ceph/keyring,/etc/ceph/keyring.bin,, disabling cephx
[2022-05-24 17:26:15,465][ceph_volume.process][INFO ] Running command: /usr/bin/lsblk --nodeps -P -o NAME,KNAME,MAJ:MIN,FSTYPE,MOUNTPOINT,LABEL,UUID,RO,RM,MODEL,SIZE,STATE,OWNER,GROUP,MODE,ALIGNMENT,PHY-SEC,LOG-SEC,ROTA,SCHED,TYPE,DISC-ALN,DISC-GRAN,DISC-MAX,DISC-ZERO,PKNAME,PARTLABEL /dev/sda1
[2022-05-24 17:26:15,469][ceph_volume.process][INFO ] stdout NAME="sda1" KNAME="sda1" MAJ:MIN="8:1" FSTYPE="" MOUNTPOINT="" LABEL="" UUID="" RO="0" RM="0" MODEL="" SIZE="20G" STATE="" OWNER="root" GROUP="disk" MODE="brw-rw----" ALIGNMENT="0" PHY-SEC="4096" LOG-SEC="512" ROTA="1" SCHED="cfq" TYPE="part" DISC-ALN="0" DISC-GRAN="0B" DISC-MAX="0B" DISC-ZERO="0" PKNAME="sda" PARTLABEL="ceph-journal"
[2022-05-24 17:26:15,470][ceph_volume.process][INFO ] Running command: /usr/sbin/blkid -s PARTUUID -o value /dev/sda1
[2022-05-24 17:26:15,503][ceph_volume.process][INFO ] stdout bb5848bf-2734-4648-bc97-9097a2f67a78
[2022-05-24 17:26:15,504][ceph_volume.process][INFO ] Running command: /usr/bin/lsblk --nodeps -P -o NAME,KNAME,MAJ:MIN,FSTYPE,MOUNTPOINT,LABEL,UUID,RO,RM,MODEL,SIZE,STATE,OWNER,GROUP,MODE,ALIGNMENT,PHY-SEC,LOG-SEC,ROTA,SCHED,TYPE,DISC-ALN,DISC-GRAN,DISC-MAX,DISC-ZERO,PKNAME,PARTLABEL /dev/sda2
[2022-05-24 17:26:15,508][ceph_volume.process][INFO ] stdout NAME="sda2" KNAME="sda2" MAJ:MIN="8:2" FSTYPE="LVM2_member" MOUNTPOINT="" LABEL="" UUID="3jkQAm-10vw-K46l-JeFt-HGMa-QNtH-eGjTJU" RO="0" RM="0" MODEL="" SIZE="18.2T" STATE="" OWNER="root" GROUP="disk" MODE="brw-rw----" ALIGNMENT="0" PHY-SEC="4096" LOG-SEC="512" ROTA="1" SCHED="cfq" TYPE="part" DISC-ALN="0" DISC-GRAN="0B" DISC-MAX="0B" DISC-ZERO="0" PKNAME="sda" PARTLABEL="ceph-data"
[2022-05-24 17:26:15,508][ceph_volume.devices.lvm.prepare][DEBUG ] data device size: 0.00 B
[2022-05-24 17:26:15,509][ceph_volume.process][INFO ] Running command: /usr/sbin/pvs --noheadings --readonly --units=b --nosuffix --separator=";" -o vg_name,pv_count,lv_count,vg_attr,vg_extent_count,vg_free_count,vg_extent_size /dev/sda2
[2022-05-24 17:26:15,570][ceph_volume.process][INFO ] stdout ceph-1cbcc4b0-3ccc-431b-bff6-3c03b1dc9558";"1";"1";"wz--n-";"4762879";"0";"4194304
[2022-05-24 17:26:15,571][ceph_volume.api.lvm][DEBUG ] slots was passed: 1 -> 4762879
[2022-05-24 17:26:15,571][ceph_volume.process][INFO ] Running command: /usr/sbin/lvcreate --yes -l 4762879 -n osd-data-85f574b4-282f-428b-9324-3a5feb28a4e3 ceph-1cbcc4b0-3ccc-431b-bff6-3c03b1dc9558
[2022-05-24 17:26:15,591][ceph_volume.process][INFO ] stderr Volume group "ceph-1cbcc4b0-3ccc-431b-bff6-3c03b1dc9558" has insufficient free space (0 extents): 4762879 required.
[2022-05-24 17:26:15,634][ceph_volume.devices.lvm.prepare][ERROR ] lvm prepare was unable to complete
Traceback (most recent call last):
File "/usr/lib/python3/dist-packages/ceph_volume/devices/lvm/prepare.py", line 252, in safe_prepare
self.prepare()
File "/usr/lib/python3/dist-packages/ceph_volume/decorators.py", line 16, in is_root
return func(*a, **kw)
File "/usr/lib/python3/dist-packages/ceph_volume/devices/lvm/prepare.py", line 334, in prepare
data_lv = self.prepare_data_device('data', osd_fsid)
File "/usr/lib/python3/dist-packages/ceph_volume/devices/lvm/prepare.py", line 218, in prepare_data_device
return api.create_lv(
File "/usr/lib/python3/dist-packages/ceph_volume/api/lvm.py", line 971, in create_lv
process.run(command)
File "/usr/lib/python3/dist-packages/ceph_volume/process.py", line 153, in run
raise RuntimeError(msg)
RuntimeError: command returned non-zero exit status: 5
[2022-05-24 17:26:15,636][ceph_volume.devices.lvm.prepare][INFO ] will rollback OSD ID creation
[2022-05-24 17:26:15,636][ceph_volume.process][INFO ] Running command: /usr/bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring osd purge-new osd.0 --yes-i-really-mean-it
[2022-05-24 17:26:15,777][ceph_volume.process][INFO ] stderr 2022-05-24T17:26:15.761+0100 7f5b2d823700 -1 auth: unable to find a keyring on /etc/ceph/ceph.client.bootstrap-osd.keyring,/etc/ceph/ceph.keyring,/etc/ceph/keyring,/etc/ceph/keyring.bin,: (2) No such file or directory
2022-05-24T17:26:15.761+0100 7f5b2d823700 -1 AuthRegistry(0x7f5b280595f0) no keyring found at /etc/ceph/ceph.client.bootstrap-osd.keyring,/etc/ceph/ceph.keyring,/etc/ceph/keyring,/etc/ceph/keyring.bin,, disabling cephx
[2022-05-24 17:26:16,187][ceph_volume.process][INFO ] stderr purged osd.0
[2022-05-24 17:26:16,201][ceph_volume][ERROR ] exception caught by decorator
Traceback (most recent call last):
File "/usr/lib/python3/dist-packages/ceph_volume/decorators.py", line 59, in newfunc
return f(*a, **kw)
File "/usr/lib/python3/dist-packages/ceph_volume/main.py", line 152, in main
terminal.dispatch(self.mapper, subcommand_args)
File "/usr/lib/python3/dist-packages/ceph_volume/terminal.py", line 194, in dispatch
instance.main()
File "/usr/lib/python3/dist-packages/ceph_volume/devices/lvm/main.py", line 46, in main
terminal.dispatch(self.mapper, self.argv)
File "/usr/lib/python3/dist-packages/ceph_volume/terminal.py", line 194, in dispatch
instance.main()
File "/usr/lib/python3/dist-packages/ceph_volume/devices/lvm/prepare.py", line 441, in main
self.safe_prepare()
File "/usr/lib/python3/dist-packages/ceph_volume/devices/lvm/prepare.py", line 252, in safe_prepare
self.prepare()
File "/usr/lib/python3/dist-packages/ceph_volume/decorators.py", line 16, in is_root
return func(*a, **kw)
File "/usr/lib/python3/dist-packages/ceph_volume/devices/lvm/prepare.py", line 334, in prepare
data_lv = self.prepare_data_device('data', osd_fsid)
File "/usr/lib/python3/dist-packages/ceph_volume/devices/lvm/prepare.py", line 218, in prepare_data_device
return api.create_lv(
File "/usr/lib/python3/dist-packages/ceph_volume/api/lvm.py", line 971, in create_lv
process.run(command)
File "/usr/lib/python3/dist-packages/ceph_volume/process.py", line 153, in run
raise RuntimeError(msg)
RuntimeError: command returned non-zero exit status: 5
root@ps-node1:~# tail /opt/petasan/log/ceph-volume.log
return func(*a, **kw)
File "/usr/lib/python3/dist-packages/ceph_volume/devices/lvm/prepare.py", line 334, in prepare
data_lv = self.prepare_data_device('data', osd_fsid)
File "/usr/lib/python3/dist-packages/ceph_volume/devices/lvm/prepare.py", line 218, in prepare_data_device
return api.create_lv(
File "/usr/lib/python3/dist-packages/ceph_volume/api/lvm.py", line 971, in create_lv
process.run(command)
File "/usr/lib/python3/dist-packages/ceph_volume/process.py", line 153, in run
raise RuntimeError(msg)
RuntimeError: command returned non-zero exit status: 5
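The "insufficient free space (0 extents): 4762879 required" line above suggests the volume group on /dev/sda2 still holds the osd-data logical volume from the earlier attempt, so lvcreate has no extents left to allocate. A minimal check-and-clean sketch, with the device name taken from this log (the zap wipes the partition, so only run it on a disk you intend to re-prepare):
vgs
lvs -o lv_name,vg_name,lv_size
ceph-volume lvm zap /dev/sda2 --destroy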
admin
2,930 Posts
Quote from admin on May 25, 2022, 4:48 pm
Can you re-install from the ISO and make sure the default engine is set to "Bluestore" and not "Filestore",
then retry, if you still have errors, post both logs.
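As a quick manual check outside the web UI, the Bluestore equivalent of the failing prepare call can be run by hand on a single disk; the device name below is only an example:
ceph-volume --log-path /opt/petasan/log lvm prepare --bluestore --data /dev/sda
If that succeeds, zap the disk again (ceph-volume lvm zap /dev/sda --destroy) and remove the test OSD id from the cluster before letting the normal deployment manage it.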
setantatx
7 Posts
Quote from setantatx on May 26, 2022, 9:32 am
Thanks, I'll try installing again with the filestore engine this time and I'll come back to you then.
setantatx
7 Posts
Quote from setantatx on May 30, 2022, 3:31 pm
Hi
I have just tried to install your recent release, but unfortunately it's the same thing. It keeps failing on creating the OSDs.
Error List
Disk sda prepare failure on node ps-node3.
Disk sda prepare failure on node ps-node1.
Disk sda prepare failure on node ps-node2.
Is there anything else that I could try?
Thanks
admin
2,930 Posts
Quote from admin on May 30, 2022, 6:44 pm
How large are the disks you want to add as OSDs?
Were the disks used before on FreeBSD/ZFS? Can you try to manually clean the disks with (replace sdXX with the drive; a loop version over several drives is sketched after this post):
wipefs --all /dev/sdXX
dd if=/dev/zero of=/dev/sdXX bs=1M count=20 oflag=direct,dsync
Do you have other disks you can try? Do all disks fail to add, or only some specific disks?
Are you adding the OSD with a cache/journal? If so, can you test without them?
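If it helps, the same cleanup can be looped over several drives at once; the device list is only an example and must be adjusted per node:
for d in sdb sdc sdd; do
  wipefs --all /dev/$d
  dd if=/dev/zero of=/dev/$d bs=1M count=20 oflag=direct,dsync
  partprobe /dev/$d
done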
setantatx
7 Posts
Quote from setantatx on May 31, 2022, 9:28 am
All the drives were restriped a few times; all 3 nodes have 12 drives each, set up as RAID 5 by the LSI MR9266-4i controller. I did try removing the partitions and filesystems a few times. I also have an SSD there which I was hoping to use as my cache drive, and adding that seems to be working fine. Currently I'm trying to add the OSDs without cache and journal, since I think I've seen a post saying that adding with those two may fail.
Could you explain why, when I try to add an OSD drive under the node configuration, the drive status changes to mounted at the end of the process, but under Ubuntu it is actually not mounted? (A quick way to check is sketched below.)
Also, is there any manual process for creating an OSD explained anywhere?
thanks
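On the "mounted" status question above, a quick way to see what the OS itself has mounted (nothing PetaSAN-specific, just standard tools):
lsblk -o NAME,SIZE,FSTYPE,MOUNTPOINT
findmnt | grep /var/lib/ceph/osd
If neither shows an OSD mount under /var/lib/ceph/osd, the UI status is most likely left over from the interrupted creation attempt rather than a real mount.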
admin
2,930 Posts
Quote from admin on May 31, 2022, 10:51 am
My feeling is it relates to the RAID 5 setup; the ceph-volume.log has an error: "..insufficient free space (0 extents): 4762879 required". Can you recheck the LSI setup? Also, it is better not to use RAID with Ceph and to give all the disks to the system individually to create separate OSDs; it is better for performance, and HA is built in. As an exception to the last rule, if using HDDs, it is sometimes better to configure RAID 0 and create single-disk volumes (each RAID 0 is a single disk) so you can use the controller writeback cache (if battery backed)..but at this stage I recommend you disable RAID altogether and set up the drives in JBOD mode, at least till you solve the issue.
Another thing you can do is deploy the cluster without selecting to add OSDs and without any pools. Once built, you can use the Management UI and add disks individually; it may make it easier to look at the logs as you add a single disk.
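One way to follow a single-disk attempt live is to tail the same log that captured the failures above (path as used earlier in this thread):
tail -f /opt/petasan/log/ceph-volume.log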
Adding a cache and journal surely works; we do a lot of testing ourselves. In some cases, if you do not have enough RAM, your cache could fail (we put a notice that you need 2% of the cache size in RAM). But again, if you have disk issues it is better to use plain OSDs at this stage so it is easier to identify the problem.
Can't say why the failed OSD attempt shows the disk as mounted; it was probably marked in the middle of the failed OSD creation attempt.
setantatx
7 Posts
Quote from setantatx on May 31, 2022, 3:22 pm
OK, so I did create 12 virtual RAID 0 drives; unfortunately it's still failing to create the OSD. This was a new installation, and I selected rbd instead of cephfs during the initial configuration. It's interesting that the journal drive, on this same controller, can be created without any problem.
31/05/2022 12:59:23 INFO Start cleaning : sdb
31/05/2022 12:59:23 INFO Executing : wipefs --all /dev/sdb
31/05/2022 12:59:23 INFO Executing : dd if=/dev/zero of=/dev/sdb bs=1M count=20 oflag=direct,dsync >/dev/null 2>&1
31/05/2022 12:59:23 INFO Executing : parted -s /dev/sdb mklabel gpt
31/05/2022 12:59:24 INFO Executing : partprobe /dev/sdb
31/05/2022 12:59:27 INFO User didn't select a journal for disk sdb, so the journal will be on the same disk.
31/05/2022 12:59:27 INFO Creating journal partition num 1 size 20480MB on /dev/sdb
31/05/2022 12:59:28 INFO Calling partprobe on sdb device
31/05/2022 12:59:28 INFO Executing partprobe /dev/sdb
31/05/2022 12:59:28 INFO Calling udevadm on sdb device
31/05/2022 12:59:28 INFO Executing udevadm settle --timeout 30
31/05/2022 12:59:31 INFO User didn't select a cache for disk sdb.
31/05/2022 12:59:31 INFO Start prepare filestore OSD : sdb
31/05/2022 12:59:31 INFO Creating data partition num 2 size 1907200MB on /dev/sdb
31/05/2022 12:59:32 INFO Calling partprobe on sdb device
31/05/2022 12:59:32 INFO Executing partprobe /dev/sdb
31/05/2022 12:59:33 INFO Calling udevadm on sdb device
31/05/2022 12:59:33 INFO Executing udevadm settle --timeout 30
31/05/2022 12:59:36 INFO Starting : ceph-volume --log-path /opt/petasan/log lvm prepare --filestore --data /dev/sdb2 --journal /dev/sdb1
31/05/2022 12:59:41 ERROR Error executing : ceph-volume --log-path /opt/petasan/log lvm prepare --filestore --data /dev/sdb2 --journal /dev/sdb1
[2022-05-31 12:59:38,206][ceph_volume.process][INFO ] Running command: /usr/bin/ceph-authtool --gen-print-key
[2022-05-31 12:59:38,227][ceph_volume.process][INFO ] stdout AQAqA5ZizBd3DRAAJgfBcJdVFDP+Q5/RrSA3ww==
[2022-05-31 12:59:38,228][ceph_volume.process][INFO ] Running command: /usr/sbin/mkfs -t xfs -f -i size=2048 /dev/ceph-6354cda5-8eb9-45c7-b5fd-832cf25db520/osd-data-ff06d179-238c-4522-8732-02dd77906499
[2022-05-31 12:59:40,624][ceph_volume.process][INFO ] stdout meta-data=/dev/ceph-6354cda5-8eb9-45c7-b5fd-832cf25db520/osd-data-ff06d179-238c-4522-8732-02dd77906499 isize=2048 agcount=4, agsize=120749824 blks
= sectsz=4096 attr=2, projid32bit=1
= crc=1 finobt=1, sparse=1, rmapbt=0
= reflink=1
data = bsize=4096 blocks=482999296, imaxpct=5
= sunit=0 swidth=0 blks
naming =version 2 bsize=4096 ascii-ci=0, ftype=1
log =internal log bsize=4096 blocks=235839, version=2
= sectsz=4096 sunit=1 blks, lazy-count=1
realtime =none extsz=4096 blocks=0, rtextents=0
[2022-05-31 12:59:40,624][ceph_volume.process][INFO ] Running command: /usr/bin/mount -t xfs -o rw,noatime,inode64 /dev/ceph-6354cda5-8eb9-45c7-b5fd-832cf25db520/osd-data-ff06d179-238c-4522-8732-02dd77906499 /var/lib/ceph/osd/ceph-0
[2022-05-31 12:59:40,628][ceph_volume.process][INFO ] stderr mount: /var/lib/ceph/osd/ceph-0: mount(2) system call failed: Operation not supported.
[2022-05-31 12:59:40,629][ceph_volume.devices.lvm.prepare][ERROR ] lvm prepare was unable to complete
Traceback (most recent call last):
File "/usr/lib/python3/dist-packages/ceph_volume/devices/lvm/prepare.py", line 252, in safe_prepare
self.prepare()
File "/usr/lib/python3/dist-packages/ceph_volume/decorators.py", line 16, in is_root
return func(*a, **kw)
File "/usr/lib/python3/dist-packages/ceph_volume/devices/lvm/prepare.py", line 346, in prepare
prepare_filestore(
File "/usr/lib/python3/dist-packages/ceph_volume/devices/lvm/prepare.py", line 65, in prepare_filestore
prepare_utils.mount_osd(device, osd_id, is_vdo=is_vdo)
File "/usr/lib/python3/dist-packages/ceph_volume/util/prepare.py", line 320, in mount_osd
process.run(command)
File "/usr/lib/python3/dist-packages/ceph_volume/process.py", line 153, in run
raise RuntimeError(msg)
RuntimeError: command returned non-zero exit status: 32
[2022-05-31 12:59:40,630][ceph_volume.devices.lvm.prepare][INFO ] will rollback OSD ID creation
[2022-05-31 12:59:40,630][ceph_volume.process][INFO ] Running command: /usr/bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring osd purge-new osd.0 --yes-i-really-mean-it
[2022-05-31 12:59:40,770][ceph_volume.process][INFO ] stderr 2022-05-31T12:59:40.759+0100 7fec8754a700 -1 auth: unable to find a keyring on /etc/ceph/ceph.client.bootstrap-osd.keyring,/etc/ceph/ceph.keyring,/etc/ceph/keyring,/etc/ceph/keyring.bin,: (2) No such file or directory
2022-05-31T12:59:40.759+0100 7fec8754a700 -1 AuthRegistry(0x7fec800595f0) no keyring found at /etc/ceph/ceph.client.bootstrap-osd.keyring,/etc/ceph/ceph.keyring,/etc/ceph/keyring,/etc/ceph/keyring.bin,, disabling cephx
[2022-05-31 12:59:41,208][ceph_volume.process][INFO ] stderr purged osd.0
[2022-05-31 12:59:41,221][ceph_volume][ERROR ] exception caught by decorator
Traceback (most recent call last):
File "/usr/lib/python3/dist-packages/ceph_volume/decorators.py", line 59, in newfunc
return f(*a, **kw)
File "/usr/lib/python3/dist-packages/ceph_volume/main.py", line 152, in main
terminal.dispatch(self.mapper, subcommand_args)
File "/usr/lib/python3/dist-packages/ceph_volume/terminal.py", line 194, in dispatch
instance.main()
File "/usr/lib/python3/dist-packages/ceph_volume/devices/lvm/main.py", line 46, in main
terminal.dispatch(self.mapper, self.argv)
File "/usr/lib/python3/dist-packages/ceph_volume/terminal.py", line 194, in dispatch
instance.main()
File "/usr/lib/python3/dist-packages/ceph_volume/devices/lvm/prepare.py", line 441, in main
self.safe_prepare()
File "/usr/lib/python3/dist-packages/ceph_volume/devices/lvm/prepare.py", line 252, in safe_prepare
self.prepare()
File "/usr/lib/python3/dist-packages/ceph_volume/decorators.py", line 16, in is_root
return func(*a, **kw)
File "/usr/lib/python3/dist-packages/ceph_volume/devices/lvm/prepare.py", line 346, in prepare
prepare_filestore(
File "/usr/lib/python3/dist-packages/ceph_volume/devices/lvm/prepare.py", line 65, in prepare_filestore
prepare_utils.mount_osd(device, osd_id, is_vdo=is_vdo)
File "/usr/lib/python3/dist-packages/ceph_volume/util/prepare.py", line 320, in mount_osd
process.run(command)
File "/usr/lib/python3/dist-packages/ceph_volume/process.py", line 153, in run
raise RuntimeError(msg)
RuntimeError: command returned non-zero exit status: 32
I also don't think it is controller related, since an OSD install attempt on an SSD drive connected directly to the board via the Intel controller (this is a Supermicro board) also gives an error:
[2022-05-31 13:15:55,980][ceph_volume.main][INFO ] Running command: ceph-volume --log-path /opt/petasan/log lvm prepare --filestore --data /dev/sdn2 --journal /dev/sdn1
[2022-05-31 13:15:55,983][ceph_volume.process][INFO ] Running command: /usr/bin/lsblk -plno KNAME,NAME,TYPE
[2022-05-31 13:15:55,988][ceph_volume.process][INFO ] stdout /dev/sda /dev/sda disk
[2022-05-31 13:15:55,988][ceph_volume.process][INFO ] stdout /dev/sda1 /dev/sda1 part
[2022-05-31 13:15:55,988][ceph_volume.process][INFO ] stdout /dev/sdb /dev/sdb disk
[2022-05-31 13:15:55,988][ceph_volume.process][INFO ] stdout /dev/sdb1 /dev/sdb1 part
[2022-05-31 13:15:55,988][ceph_volume.process][INFO ] stdout /dev/sdb2 /dev/sdb2 part
[2022-05-31 13:15:55,989][ceph_volume.process][INFO ] stdout /dev/sdc /dev/sdc disk
[2022-05-31 13:15:55,989][ceph_volume.process][INFO ] stdout /dev/sdd /dev/sdd disk
[2022-05-31 13:15:55,989][ceph_volume.process][INFO ] stdout /dev/sde /dev/sde disk
[2022-05-31 13:15:55,989][ceph_volume.process][INFO ] stdout /dev/sdf /dev/sdf disk
[2022-05-31 13:15:55,989][ceph_volume.process][INFO ] stdout /dev/sdg /dev/sdg disk
[2022-05-31 13:15:55,989][ceph_volume.process][INFO ] stdout /dev/sdh /dev/sdh disk
[2022-05-31 13:15:55,989][ceph_volume.process][INFO ] stdout /dev/sdi /dev/sdi disk
[2022-05-31 13:15:55,989][ceph_volume.process][INFO ] stdout /dev/sdj /dev/sdj disk
[2022-05-31 13:15:55,989][ceph_volume.process][INFO ] stdout /dev/sdk /dev/sdk disk
[2022-05-31 13:15:55,989][ceph_volume.process][INFO ] stdout /dev/sdl /dev/sdl disk
[2022-05-31 13:15:55,989][ceph_volume.process][INFO ] stdout /dev/sdm /dev/sdm disk
[2022-05-31 13:15:55,989][ceph_volume.process][INFO ] stdout /dev/sdm1 /dev/sdm1 part
[2022-05-31 13:15:55,989][ceph_volume.process][INFO ] stdout /dev/sdm2 /dev/sdm2 part
[2022-05-31 13:15:55,989][ceph_volume.process][INFO ] stdout /dev/sdm3 /dev/sdm3 part
[2022-05-31 13:15:55,990][ceph_volume.process][INFO ] stdout /dev/sdm4 /dev/sdm4 part
[2022-05-31 13:15:55,990][ceph_volume.process][INFO ] stdout /dev/sdm5 /dev/sdm5 part
[2022-05-31 13:15:55,990][ceph_volume.process][INFO ] stdout /dev/sdn /dev/sdn disk
[2022-05-31 13:15:55,990][ceph_volume.process][INFO ] stdout /dev/sdn1 /dev/sdn1 part
[2022-05-31 13:15:55,990][ceph_volume.process][INFO ] stdout /dev/sdn2 /dev/sdn2 part
[2022-05-31 13:15:55,990][ceph_volume.process][INFO ] stdout /dev/dm-0 /dev/mapper/ceph--6354cda5--8eb9--45c7--b5fd--832cf25db520-osd--data--ff06d179--238c--4522--8732--02dd77906499 lvm
[2022-05-31 13:15:56,015][ceph_volume.process][INFO ] Running command: /usr/sbin/lvs --noheadings --readonly --separator=";" -a --units=b --nosuffix -S lv_path=/dev/sdn2 -o lv_tags,lv_path,lv_name,vg_name,lv_uuid,lv_size
[2022-05-31 13:15:56,205][ceph_volume.process][INFO ] Running command: /usr/bin/lsblk --nodeps -P -o NAME,KNAME,MAJ:MIN,FSTYPE,MOUNTPOINT,LABEL,UUID,RO,RM,MODEL,SIZE,STATE,OWNER,GROUP,MODE,ALIGNMENT,PHY-SEC,LOG-SEC,ROTA,SCHED,TYPE,DISC-ALN,DISC-GRAN,DISC-MAX,DISC-ZERO,PKNAME,PARTLABEL /dev/sdn2
[2022-05-31 13:15:56,210][ceph_volume.process][INFO ] stdout NAME="sdn2" KNAME="sdn2" MAJ:MIN="8:210" FSTYPE="" MOUNTPOINT="" LABEL="" UUID="" RO="0" RM="0" MODEL="" SIZE="91.8G" STATE="" OWNER="root" GROUP="disk" MODE="brw-rw----" ALIGNMENT="0" PHY-SEC="512" LOG-SEC="512" ROTA="0" SCHED="noop" TYPE="part" DISC-ALN="0" DISC-GRAN="512B" DISC-MAX="2G" DISC-ZERO="0" PKNAME="sdn" PARTLABEL="ceph-data"
[2022-05-31 13:15:56,211][ceph_volume.process][INFO ] Running command: /usr/sbin/blkid -p /dev/sdn2
[2022-05-31 13:15:56,224][ceph_volume.process][INFO ] stdout /dev/sdn2: PART_ENTRY_SCHEME="gpt" PART_ENTRY_NAME="ceph-data" PART_ENTRY_UUID="02ef1a82-f707-46ad-a1bd-0543b18b6194" PART_ENTRY_TYPE="0fc63daf-8483-4772-8e79-3d69d8477de4" PART_ENTRY_NUMBER="2" PART_ENTRY_OFFSET="41945088" PART_ENTRY_SIZE="192496527" PART_ENTRY_DISK="8:208"
[2022-05-31 13:15:56,225][ceph_volume.process][INFO ] Running command: /usr/sbin/pvs --noheadings --readonly --units=b --nosuffix --separator=";" -o vg_name,pv_count,lv_count,vg_attr,vg_extent_count,vg_free_count,vg_extent_size /dev/sdn2
[2022-05-31 13:15:56,293][ceph_volume.process][INFO ] stderr Failed to find physical volume "/dev/sdn2".
[2022-05-31 13:15:56,293][ceph_volume.process][INFO ] Running command: /usr/bin/ceph-bluestore-tool show-label --dev /dev/sdn2
[2022-05-31 13:15:56,322][ceph_volume.process][INFO ] stderr unable to read label for /dev/sdn2: (2) No such file or directory
[2022-05-31 13:15:56,323][ceph_volume.process][INFO ] Running command: /usr/bin/ceph-bluestore-tool show-label --dev /dev/sdn2
[2022-05-31 13:15:56,352][ceph_volume.process][INFO ] stderr unable to read label for /dev/sdn2: (2) No such file or directory
[2022-05-31 13:15:56,353][ceph_volume.process][INFO ] Running command: /usr/bin/udevadm info --query=property /dev/sdn2
[2022-05-31 13:15:56,357][ceph_volume.process][INFO ] stdout DEVPATH=/devices/pci0000:00/0000:00:1f.2/ata2/host3/target3:0:0/3:0:0:0/block/sdn/sdn2
[2022-05-31 13:15:56,357][ceph_volume.process][INFO ] stdout DEVNAME=/dev/sdn2
[2022-05-31 13:15:56,357][ceph_volume.process][INFO ] stdout DEVTYPE=partition
[2022-05-31 13:15:56,357][ceph_volume.process][INFO ] stdout PARTN=2
[2022-05-31 13:15:56,357][ceph_volume.process][INFO ] stdout PARTNAME=ceph-data
[2022-05-31 13:15:56,357][ceph_volume.process][INFO ] stdout MAJOR=8
[2022-05-31 13:15:56,357][ceph_volume.process][INFO ] stdout MINOR=210
[2022-05-31 13:15:56,358][ceph_volume.process][INFO ] stdout SUBSYSTEM=block
[2022-05-31 13:15:56,358][ceph_volume.process][INFO ] stdout USEC_INITIALIZED=5666124091
[2022-05-31 13:15:56,358][ceph_volume.process][INFO ] stdout SCSI_TPGS=0
[2022-05-31 13:15:56,358][ceph_volume.process][INFO ] stdout SCSI_TYPE=disk
[2022-05-31 13:15:56,358][ceph_volume.process][INFO ] stdout SCSI_VENDOR=ATA
[2022-05-31 13:15:56,358][ceph_volume.process][INFO ] stdout SCSI_VENDOR_ENC=ATA\x20\x20\x20\x20\x20
[2022-05-31 13:15:56,358][ceph_volume.process][INFO ] stdout SCSI_MODEL=PNY_CS900_120GB
[2022-05-31 13:15:56,358][ceph_volume.process][INFO ] stdout SCSI_MODEL_ENC=PNY\x20CS900\x20120GB\x20
[2022-05-31 13:15:56,358][ceph_volume.process][INFO ] stdout SCSI_REVISION=0613
[2022-05-31 13:15:56,358][ceph_volume.process][INFO ] stdout SCSI_IDENT_SERIAL=PNY4419191101150445B
[2022-05-31 13:15:56,358][ceph_volume.process][INFO ] stdout SCSI_IDENT_LUN_VENDOR=PNY4419191101150445B
[2022-05-31 13:15:56,358][ceph_volume.process][INFO ] stdout SCSI_IDENT_LUN_T10=ATA_PNY_CS900_120GB_SSD_PNY4419191101150445B
[2022-05-31 13:15:56,358][ceph_volume.process][INFO ] stdout SCSI_IDENT_LUN_ATA=PNY_CS900_120GB_SSD_PNY4419191101150445B
[2022-05-31 13:15:56,358][ceph_volume.process][INFO ] stdout SCSI_IDENT_LUN_NAA_REG=5f8db4c44190445b
[2022-05-31 13:15:56,359][ceph_volume.process][INFO ] stdout ID_SCSI=1
[2022-05-31 13:15:56,359][ceph_volume.process][INFO ] stdout ID_VENDOR=ATA
[2022-05-31 13:15:56,359][ceph_volume.process][INFO ] stdout ID_VENDOR_ENC=ATA\x20\x20\x20\x20\x20
[2022-05-31 13:15:56,359][ceph_volume.process][INFO ] stdout ID_MODEL=PNY_CS900_120GB
[2022-05-31 13:15:56,359][ceph_volume.process][INFO ] stdout ID_MODEL_ENC=PNY\x20CS900\x20120GB\x20
[2022-05-31 13:15:56,359][ceph_volume.process][INFO ] stdout ID_REVISION=0613
[2022-05-31 13:15:56,359][ceph_volume.process][INFO ] stdout ID_TYPE=disk
[2022-05-31 13:15:56,359][ceph_volume.process][INFO ] stdout ID_WWN=0x5f8db4c44190445b
[2022-05-31 13:15:56,359][ceph_volume.process][INFO ] stdout ID_WWN_WITH_EXTENSION=0x5f8db4c44190445b
[2022-05-31 13:15:56,359][ceph_volume.process][INFO ] stdout ID_BUS=ata
[2022-05-31 13:15:56,359][ceph_volume.process][INFO ] stdout ID_ATA=1
[2022-05-31 13:15:56,359][ceph_volume.process][INFO ] stdout ID_SERIAL=PNY_CS900_120GB_SSD_PNY4419191101150445B
[2022-05-31 13:15:56,359][ceph_volume.process][INFO ] stdout ID_SERIAL_SHORT=PNY4419191101150445B
[2022-05-31 13:15:56,359][ceph_volume.process][INFO ] stdout DM_MULTIPATH_DEVICE_PATH=0
[2022-05-31 13:15:56,360][ceph_volume.process][INFO ] stdout ID_SCSI_INQUIRY=1
[2022-05-31 13:15:56,360][ceph_volume.process][INFO ] stdout ID_PATH=pci-0000:00:1f.2-ata-2
[2022-05-31 13:15:56,360][ceph_volume.process][INFO ] stdout ID_PATH_TAG=pci-0000_00_1f_2-ata-2
[2022-05-31 13:15:56,360][ceph_volume.process][INFO ] stdout ID_PART_TABLE_UUID=9a98def8-05a4-4ea9-b06c-7b134afcb5fb
[2022-05-31 13:15:56,360][ceph_volume.process][INFO ] stdout ID_PART_TABLE_TYPE=gpt
[2022-05-31 13:15:56,360][ceph_volume.process][INFO ] stdout ID_PART_ENTRY_SCHEME=gpt
[2022-05-31 13:15:56,360][ceph_volume.process][INFO ] stdout ID_PART_ENTRY_NAME=ceph-data
[2022-05-31 13:15:56,360][ceph_volume.process][INFO ] stdout ID_PART_ENTRY_UUID=02ef1a82-f707-46ad-a1bd-0543b18b6194
[2022-05-31 13:15:56,360][ceph_volume.process][INFO ] stdout ID_PART_ENTRY_TYPE=0fc63daf-8483-4772-8e79-3d69d8477de4
[2022-05-31 13:15:56,360][ceph_volume.process][INFO ] stdout ID_PART_ENTRY_NUMBER=2
[2022-05-31 13:15:56,360][ceph_volume.process][INFO ] stdout ID_PART_ENTRY_OFFSET=41945088
[2022-05-31 13:15:56,360][ceph_volume.process][INFO ] stdout ID_PART_ENTRY_SIZE=192496527
[2022-05-31 13:15:56,360][ceph_volume.process][INFO ] stdout ID_PART_ENTRY_DISK=8:208
[2022-05-31 13:15:56,360][ceph_volume.process][INFO ] stdout DEVLINKS=/dev/disk/by-id/scsi-1ATA_PNY_CS900_120GB_SSD_PNY4419191101150445B-part2 /dev/disk/by-id/scsi-35f8db4c44190445b-part2 /dev/disk/by-path/pci-0000:00:1f.2-ata-2-part2 /dev/disk/by-partuuid/02ef1a82-f707-46ad-a1bd-0543b18b6194 /dev/disk/by-id/wwn-0x5f8db4c44190445b-part2 /dev/disk/by-id/scsi-SATA_PNY_CS900_120GB_PNY4419191101150445B-part2 /dev/disk/by-partlabel/ceph-data /dev/disk/by-id/ata-PNY_CS900_120GB_SSD_PNY4419191101150445B-part2 /dev/disk/by-id/scsi-0ATA_PNY_CS900_120GB_PNY4419191101150445B-part2
[2022-05-31 13:15:56,361][ceph_volume.process][INFO ] stdout TAGS=:systemd:
[2022-05-31 13:15:56,361][ceph_volume.api.lvm][WARNING] device is not part of ceph: None
[2022-05-31 13:15:56,362][ceph_volume.process][INFO ] Running command: /usr/bin/ceph-authtool --gen-print-key
[2022-05-31 13:15:56,382][ceph_volume.process][INFO ] stdout AQD8BpZiXzm6FhAAddGnc78irvtxEvN2iJbxOg==
[2022-05-31 13:15:56,383][ceph_volume.process][INFO ] Running command: /usr/bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring -i - osd new 99936872-8847-42d0-a644-7c6568f7d662
[2022-05-31 13:15:56,973][ceph_volume.process][INFO ] stdout 0
[2022-05-31 13:15:56,973][ceph_volume.process][INFO ] stderr 2022-05-31T13:15:56.511+0100 7f644d8a1700 -1 auth: unable to find a keyring on /etc/ceph/ceph.client.bootstrap-osd.keyring,/etc/ceph/ceph.keyring,/etc/ceph/keyring,/etc/ceph/keyring.bin,: (2) No such file or directory
[2022-05-31 13:15:56,973][ceph_volume.process][INFO ] stderr 2022-05-31T13:15:56.511+0100 7f644d8a1700 -1 AuthRegistry(0x7f64480595f0) no keyring found at /etc/ceph/ceph.client.bootstrap-osd.keyring,/etc/ceph/ceph.keyring,/etc/ceph/keyring,/etc/ceph/keyring.bin,, disabling cephx
[2022-05-31 13:15:56,973][ceph_volume.process][INFO ] Running command: /usr/bin/lsblk --nodeps -P -o NAME,KNAME,MAJ:MIN,FSTYPE,MOUNTPOINT,LABEL,UUID,RO,RM,MODEL,SIZE,STATE,OWNER,GROUP,MODE,ALIGNMENT,PHY-SEC,LOG-SEC,ROTA,SCHED,TYPE,DISC-ALN,DISC-GRAN,DISC-MAX,DISC-ZERO,PKNAME,PARTLABEL /dev/sdn1
[2022-05-31 13:15:56,978][ceph_volume.process][INFO ] stdout NAME="sdn1" KNAME="sdn1" MAJ:MIN="8:209" FSTYPE="" MOUNTPOINT="" LABEL="" UUID="" RO="0" RM="0" MODEL="" SIZE="20G" STATE="" OWNER="root" GROUP="disk" MODE="brw-rw----" ALIGNMENT="0" PHY-SEC="512" LOG-SEC="512" ROTA="0" SCHED="noop" TYPE="part" DISC-ALN="0" DISC-GRAN="512B" DISC-MAX="2G" DISC-ZERO="0" PKNAME="sdn" PARTLABEL="ceph-journal"
[2022-05-31 13:15:56,979][ceph_volume.process][INFO ] Running command: /usr/sbin/blkid -s PARTUUID -o value /dev/sdn1
[2022-05-31 13:15:56,988][ceph_volume.process][INFO ] stdout 5e0b68da-7c1f-4271-9d25-00d66167a53d
[2022-05-31 13:15:56,989][ceph_volume.process][INFO ] Running command: /usr/bin/lsblk --nodeps -P -o NAME,KNAME,MAJ:MIN,FSTYPE,MOUNTPOINT,LABEL,UUID,RO,RM,MODEL,SIZE,STATE,OWNER,GROUP,MODE,ALIGNMENT,PHY-SEC,LOG-SEC,ROTA,SCHED,TYPE,DISC-ALN,DISC-GRAN,DISC-MAX,DISC-ZERO,PKNAME,PARTLABEL /dev/sdn2
[2022-05-31 13:15:56,994][ceph_volume.process][INFO ] stdout NAME="sdn2" KNAME="sdn2" MAJ:MIN="8:210" FSTYPE="" MOUNTPOINT="" LABEL="" UUID="" RO="0" RM="0" MODEL="" SIZE="91.8G" STATE="" OWNER="root" GROUP="disk" MODE="brw-rw----" ALIGNMENT="0" PHY-SEC="512" LOG-SEC="512" ROTA="0" SCHED="noop" TYPE="part" DISC-ALN="0" DISC-GRAN="512B" DISC-MAX="2G" DISC-ZERO="0" PKNAME="sdn" PARTLABEL="ceph-data"
[2022-05-31 13:15:56,995][ceph_volume.devices.lvm.prepare][DEBUG ] data device size: 0.00 B
[2022-05-31 13:15:56,995][ceph_volume.process][INFO ] Running command: /usr/sbin/pvs --noheadings --readonly --units=b --nosuffix --separator=";" -o vg_name,pv_count,lv_count,vg_attr,vg_extent_count,vg_free_count,vg_extent_size /dev/sdn2
[2022-05-31 13:15:57,057][ceph_volume.process][INFO ] stderr Failed to find physical volume "/dev/sdn2".
[2022-05-31 13:15:57,058][ceph_volume.process][INFO ] Running command: /usr/sbin/vgcreate --force --yes ceph-577a30cd-ea0b-41a1-8aed-276d030b1e92 /dev/sdn2
[2022-05-31 13:15:57,108][ceph_volume.process][INFO ] stdout Physical volume "/dev/sdn2" successfully created.
[2022-05-31 13:15:57,114][ceph_volume.process][INFO ] stdout Volume group "ceph-577a30cd-ea0b-41a1-8aed-276d030b1e92" successfully created
[2022-05-31 13:15:57,169][ceph_volume.process][INFO ] Running command: /usr/sbin/vgs --noheadings --readonly --units=b --nosuffix --separator=";" -S vg_name=ceph-577a30cd-ea0b-41a1-8aed-276d030b1e92 -o vg_name,pv_count,lv_count,vg_attr,vg_extent_count,vg_free_count,vg_extent_size
[2022-05-31 13:15:57,245][ceph_volume.process][INFO ] stdout ceph-577a30cd-ea0b-41a1-8aed-276d030b1e92";"1";"0";"wz--n-";"23497";"23497";"4194304
[2022-05-31 13:15:57,246][ceph_volume.api.lvm][DEBUG ] slots was passed: 1 -> 23497
[2022-05-31 13:15:57,246][ceph_volume.process][INFO ] Running command: /usr/sbin/lvcreate --yes -l 23497 -n osd-data-99936872-8847-42d0-a644-7c6568f7d662 ceph-577a30cd-ea0b-41a1-8aed-276d030b1e92
[2022-05-31 13:15:57,309][ceph_volume.process][INFO ] stdout Logical volume "osd-data-99936872-8847-42d0-a644-7c6568f7d662" created.
[2022-05-31 13:15:57,357][ceph_volume.process][INFO ] Running command: /usr/sbin/lvs --noheadings --readonly --separator=";" -a --units=b --nosuffix -S lv_name=osd-data-99936872-8847-42d0-a644-7c6568f7d662,vg_name=ceph-577a30cd-ea0b-41a1-8aed-276d030b1e92 -o lv_tags,lv_path,lv_name,vg_name,lv_uuid,lv_size
[2022-05-31 13:15:57,429][ceph_volume.process][INFO ] stdout ";"/dev/ceph-577a30cd-ea0b-41a1-8aed-276d030b1e92/osd-data-99936872-8847-42d0-a644-7c6568f7d662";"osd-data-99936872-8847-42d0-a644-7c6568f7d662";"ceph-577a30cd-ea0b-41a1-8aed-276d030b1e92";"SLnRA9-j2Wf-ZXj0-KF9H-9Lnz-99pV-vRujH9";"98553561088
[2022-05-31 13:15:57,429][ceph_volume.process][INFO ] Running command: /usr/sbin/lvchange --addtag ceph.type=data --addtag ceph.data_device=/dev/ceph-577a30cd-ea0b-41a1-8aed-276d030b1e92/osd-data-99936872-8847-42d0-a644-7c6568f7d662 /dev/ceph-577a30cd-ea0b-41a1-8aed-276d030b1e92/osd-data-99936872-8847-42d0-a644-7c6568f7d662
[2022-05-31 13:15:57,517][ceph_volume.process][INFO ] stdout Logical volume ceph-577a30cd-ea0b-41a1-8aed-276d030b1e92/osd-data-99936872-8847-42d0-a644-7c6568f7d662 changed.
[2022-05-31 13:15:57,518][ceph_volume.process][INFO ] Running command: /usr/sbin/lvchange --deltag ceph.type=data --deltag ceph.data_device=/dev/ceph-577a30cd-ea0b-41a1-8aed-276d030b1e92/osd-data-99936872-8847-42d0-a644-7c6568f7d662 /dev/ceph-577a30cd-ea0b-41a1-8aed-276d030b1e92/osd-data-99936872-8847-42d0-a644-7c6568f7d662
[2022-05-31 13:15:57,597][ceph_volume.process][INFO ] stdout Logical volume ceph-577a30cd-ea0b-41a1-8aed-276d030b1e92/osd-data-99936872-8847-42d0-a644-7c6568f7d662 changed.
[2022-05-31 13:15:57,598][ceph_volume.process][INFO ] Running command: /usr/sbin/lvchange --addtag ceph.osd_fsid=99936872-8847-42d0-a644-7c6568f7d662 --addtag ceph.osd_id=0 --addtag ceph.cluster_fsid=b9fb70da-2714-4440-a22a-08302a728bb3 --addtag ceph.cluster_name=ceph --addtag ceph.crush_device_class=None --addtag ceph.osdspec_affinity= --addtag ceph.cephx_lockbox_secret= --addtag ceph.encrypted=0 --addtag ceph.type=data --addtag ceph.vdo=0 --addtag ceph.journal_uuid=5e0b68da-7c1f-4271-9d25-00d66167a53d --addtag ceph.journal_device=/dev/sdn1 --addtag ceph.data_device=/dev/ceph-577a30cd-ea0b-41a1-8aed-276d030b1e92/osd-data-99936872-8847-42d0-a644-7c6568f7d662 --addtag ceph.data_uuid=SLnRA9-j2Wf-ZXj0-KF9H-9Lnz-99pV-vRujH9 /dev/ceph-577a30cd-ea0b-41a1-8aed-276d030b1e92/osd-data-99936872-8847-42d0-a644-7c6568f7d662
[2022-05-31 13:15:57,681][ceph_volume.process][INFO ] stdout Logical volume ceph-577a30cd-ea0b-41a1-8aed-276d030b1e92/osd-data-99936872-8847-42d0-a644-7c6568f7d662 changed.
[2022-05-31 13:15:57,682][ceph_volume.process][INFO ] Running command: /usr/bin/ceph-authtool --gen-print-key
[2022-05-31 13:15:57,703][ceph_volume.process][INFO ] stdout AQD9BpZid1LSKRAAdVigDmbcfLJnX3v/dCPB5w==
[2022-05-31 13:15:57,704][ceph_volume.process][INFO ] Running command: /usr/sbin/mkfs -t xfs -f -i size=2048 /dev/ceph-577a30cd-ea0b-41a1-8aed-276d030b1e92/osd-data-99936872-8847-42d0-a644-7c6568f7d662
[2022-05-31 13:15:57,902][ceph_volume.process][INFO ] stdout meta-data=/dev/ceph-577a30cd-ea0b-41a1-8aed-276d030b1e92/osd-data-99936872-8847-42d0-a644-7c6568f7d662 isize=2048 agcount=4, agsize=6015232 blks
= sectsz=512 attr=2, projid32bit=1
= crc=1 finobt=1, sparse=1, rmapbt=0
= reflink=1
data = bsize=4096 blocks=24060928, imaxpct=25
= sunit=0 swidth=0 blks
naming =version 2 bsize=4096 ascii-ci=0, ftype=1
log =internal log bsize=4096 blocks=11748, version=2
= sectsz=512 sunit=0 blks, lazy-count=1
realtime =none extsz=4096 blocks=0, rtextents=0
[2022-05-31 13:15:57,903][ceph_volume.process][INFO ] Running command: /usr/bin/mount -t xfs -o rw,noatime,inode64 /dev/ceph-577a30cd-ea0b-41a1-8aed-276d030b1e92/osd-data-99936872-8847-42d0-a644-7c6568f7d662 /var/lib/ceph/osd/ceph-0
[2022-05-31 13:15:57,907][ceph_volume.process][INFO ] stderr mount: /var/lib/ceph/osd/ceph-0: mount(2) system call failed: Operation not supported.
[2022-05-31 13:15:57,907][ceph_volume.devices.lvm.prepare][ERROR ] lvm prepare was unable to complete
Traceback (most recent call last):
File "/usr/lib/python3/dist-packages/ceph_volume/devices/lvm/prepare.py", line 252, in safe_prepare
self.prepare()
File "/usr/lib/python3/dist-packages/ceph_volume/decorators.py", line 16, in is_root
return func(*a, **kw)
File "/usr/lib/python3/dist-packages/ceph_volume/devices/lvm/prepare.py", line 346, in prepare
prepare_filestore(
File "/usr/lib/python3/dist-packages/ceph_volume/devices/lvm/prepare.py", line 65, in prepare_filestore
prepare_utils.mount_osd(device, osd_id, is_vdo=is_vdo)
File "/usr/lib/python3/dist-packages/ceph_volume/util/prepare.py", line 320, in mount_osd
process.run(command)
File "/usr/lib/python3/dist-packages/ceph_volume/process.py", line 153, in run
raise RuntimeError(msg)
RuntimeError: command returned non-zero exit status: 32
[2022-05-31 13:15:57,910][ceph_volume.devices.lvm.prepare][INFO ] will rollback OSD ID creation
[2022-05-31 13:15:57,910][ceph_volume.process][INFO ] Running command: /usr/bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring osd purge-new osd.0 --yes-i-really-mean-it
[2022-05-31 13:15:58,055][ceph_volume.process][INFO ] stderr 2022-05-31T13:15:58.043+0100 7f686b426700 -1 auth: unable to find a keyring on /etc/ceph/ceph.client.bootstrap-osd.keyring,/etc/ceph/ceph.keyring,/etc/ceph/keyring,/etc/ceph/keyring.bin,: (2) No such file or directory
2022-05-31T13:15:58.043+0100 7f686b426700 -1 AuthRegistry(0x7f68640595f0) no keyring found at /etc/ceph/ceph.client.bootstrap-osd.keyring,/etc/ceph/ceph.keyring,/etc/ceph/keyring,/etc/ceph/keyring.bin,, disabling cephx
[2022-05-31 13:15:58,494][ceph_volume.process][INFO ] stderr purged osd.0
[2022-05-31 13:15:58,506][ceph_volume][ERROR ] exception caught by decorator
Traceback (most recent call last):
File "/usr/lib/python3/dist-packages/ceph_volume/decorators.py", line 59, in newfunc
return f(*a, **kw)
File "/usr/lib/python3/dist-packages/ceph_volume/main.py", line 152, in main
terminal.dispatch(self.mapper, subcommand_args)
File "/usr/lib/python3/dist-packages/ceph_volume/terminal.py", line 194, in dispatch
instance.main()
File "/usr/lib/python3/dist-packages/ceph_volume/devices/lvm/main.py", line 46, in main
terminal.dispatch(self.mapper, self.argv)
File "/usr/lib/python3/dist-packages/ceph_volume/terminal.py", line 194, in dispatch
instance.main()
File "/usr/lib/python3/dist-packages/ceph_volume/devices/lvm/prepare.py", line 441, in main
self.safe_prepare()
File "/usr/lib/python3/dist-packages/ceph_volume/devices/lvm/prepare.py", line 252, in safe_prepare
self.prepare()
File "/usr/lib/python3/dist-packages/ceph_volume/decorators.py", line 16, in is_root
return func(*a, **kw)
File "/usr/lib/python3/dist-packages/ceph_volume/devices/lvm/prepare.py", line 346, in prepare
prepare_filestore(
File "/usr/lib/python3/dist-packages/ceph_volume/devices/lvm/prepare.py", line 65, in prepare_filestore
prepare_utils.mount_osd(device, osd_id, is_vdo=is_vdo)
File "/usr/lib/python3/dist-packages/ceph_volume/util/prepare.py", line 320, in mount_osd
process.run(command)
File "/usr/lib/python3/dist-packages/ceph_volume/process.py", line 153, in run
raise RuntimeError(msg)
RuntimeError: command returned non-zero exit status: 32
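The common failure in both logs is the mount step: "mount(2) system call failed: Operation not supported". A rough way to reproduce it by hand, outside ceph-volume, is to format and mount a scratch partition with the same options; the device name is an example and the mkfs will destroy whatever is on it:
mkfs.xfs -f -i size=2048 /dev/sdb2
mkdir -p /mnt/xfstest
mount -t xfs -o rw,noatime,inode64 /dev/sdb2 /mnt/xfstest
dmesg | tail
umount /mnt/xfstest
dmesg usually states the kernel's reason for rejecting the mount, which helps narrow down whether the problem is the filesystem options, the device, or the kernel itself.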
[2022-05-31 13:15:56,360][ceph_volume.process][INFO ] stdout ID_PART_ENTRY_DISK=8:208
[2022-05-31 13:15:56,360][ceph_volume.process][INFO ] stdout DEVLINKS=/dev/disk/by-id/scsi-1ATA_PNY_CS900_120GB_SSD_PNY4419191101150445B-part2 /dev/disk/by-id/scsi-35f8db4c44190445b-part2 /dev/disk/by-path/pci-0000:00:1f.2-ata-2-part2 /dev/disk/by-partuuid/02ef1a82-f707-46ad-a1bd-0543b18b6194 /dev/disk/by-id/wwn-0x5f8db4c44190445b-part2 /dev/disk/by-id/scsi-SATA_PNY_CS900_120GB_PNY4419191101150445B-part2 /dev/disk/by-partlabel/ceph-data /dev/disk/by-id/ata-PNY_CS900_120GB_SSD_PNY4419191101150445B-part2 /dev/disk/by-id/scsi-0ATA_PNY_CS900_120GB_PNY4419191101150445B-part2
[2022-05-31 13:15:56,361][ceph_volume.process][INFO ] stdout TAGS=:systemd:
[2022-05-31 13:15:56,361][ceph_volume.api.lvm][WARNING] device is not part of ceph: None
[2022-05-31 13:15:56,362][ceph_volume.process][INFO ] Running command: /usr/bin/ceph-authtool --gen-print-key
[2022-05-31 13:15:56,382][ceph_volume.process][INFO ] stdout AQD8BpZiXzm6FhAAddGnc78irvtxEvN2iJbxOg==
[2022-05-31 13:15:56,383][ceph_volume.process][INFO ] Running command: /usr/bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring -i - osd new 99936872-8847-42d0-a644-7c6568f7d662
[2022-05-31 13:15:56,973][ceph_volume.process][INFO ] stdout 0
[2022-05-31 13:15:56,973][ceph_volume.process][INFO ] stderr 2022-05-31T13:15:56.511+0100 7f644d8a1700 -1 auth: unable to find a keyring on /etc/ceph/ceph.client.bootstrap-osd.keyring,/etc/ceph/ceph.keyring,/etc/ceph/keyring,/etc/ceph/keyring.bin,: (2) No such file or directory
[2022-05-31 13:15:56,973][ceph_volume.process][INFO ] stderr 2022-05-31T13:15:56.511+0100 7f644d8a1700 -1 AuthRegistry(0x7f64480595f0) no keyring found at /etc/ceph/ceph.client.bootstrap-osd.keyring,/etc/ceph/ceph.keyring,/etc/ceph/keyring,/etc/ceph/keyring.bin,, disabling cephx
[2022-05-31 13:15:56,973][ceph_volume.process][INFO ] Running command: /usr/bin/lsblk --nodeps -P -o NAME,KNAME,MAJ:MIN,FSTYPE,MOUNTPOINT,LABEL,UUID,RO,RM,MODEL,SIZE,STATE,OWNER,GROUP,MODE,ALIGNMENT,PHY-SEC,LOG-SEC,ROTA,SCHED,TYPE,DISC-ALN,DISC-GRAN,DISC-MAX,DISC-ZERO,PKNAME,PARTLABEL /dev/sdn1
[2022-05-31 13:15:56,978][ceph_volume.process][INFO ] stdout NAME="sdn1" KNAME="sdn1" MAJ:MIN="8:209" FSTYPE="" MOUNTPOINT="" LABEL="" UUID="" RO="0" RM="0" MODEL="" SIZE="20G" STATE="" OWNER="root" GROUP="disk" MODE="brw-rw----" ALIGNMENT="0" PHY-SEC="512" LOG-SEC="512" ROTA="0" SCHED="noop" TYPE="part" DISC-ALN="0" DISC-GRAN="512B" DISC-MAX="2G" DISC-ZERO="0" PKNAME="sdn" PARTLABEL="ceph-journal"
[2022-05-31 13:15:56,979][ceph_volume.process][INFO ] Running command: /usr/sbin/blkid -s PARTUUID -o value /dev/sdn1
[2022-05-31 13:15:56,988][ceph_volume.process][INFO ] stdout 5e0b68da-7c1f-4271-9d25-00d66167a53d
[2022-05-31 13:15:56,989][ceph_volume.process][INFO ] Running command: /usr/bin/lsblk --nodeps -P -o NAME,KNAME,MAJ:MIN,FSTYPE,MOUNTPOINT,LABEL,UUID,RO,RM,MODEL,SIZE,STATE,OWNER,GROUP,MODE,ALIGNMENT,PHY-SEC,LOG-SEC,ROTA,SCHED,TYPE,DISC-ALN,DISC-GRAN,DISC-MAX,DISC-ZERO,PKNAME,PARTLABEL /dev/sdn2
[2022-05-31 13:15:56,994][ceph_volume.process][INFO ] stdout NAME="sdn2" KNAME="sdn2" MAJ:MIN="8:210" FSTYPE="" MOUNTPOINT="" LABEL="" UUID="" RO="0" RM="0" MODEL="" SIZE="91.8G" STATE="" OWNER="root" GROUP="disk" MODE="brw-rw----" ALIGNMENT="0" PHY-SEC="512" LOG-SEC="512" ROTA="0" SCHED="noop" TYPE="part" DISC-ALN="0" DISC-GRAN="512B" DISC-MAX="2G" DISC-ZERO="0" PKNAME="sdn" PARTLABEL="ceph-data"
[2022-05-31 13:15:56,995][ceph_volume.devices.lvm.prepare][DEBUG ] data device size: 0.00 B
[2022-05-31 13:15:56,995][ceph_volume.process][INFO ] Running command: /usr/sbin/pvs --noheadings --readonly --units=b --nosuffix --separator=";" -o vg_name,pv_count,lv_count,vg_attr,vg_extent_count,vg_free_count,vg_extent_size /dev/sdn2
[2022-05-31 13:15:57,057][ceph_volume.process][INFO ] stderr Failed to find physical volume "/dev/sdn2".
[2022-05-31 13:15:57,058][ceph_volume.process][INFO ] Running command: /usr/sbin/vgcreate --force --yes ceph-577a30cd-ea0b-41a1-8aed-276d030b1e92 /dev/sdn2
[2022-05-31 13:15:57,108][ceph_volume.process][INFO ] stdout Physical volume "/dev/sdn2" successfully created.
[2022-05-31 13:15:57,114][ceph_volume.process][INFO ] stdout Volume group "ceph-577a30cd-ea0b-41a1-8aed-276d030b1e92" successfully created
[2022-05-31 13:15:57,169][ceph_volume.process][INFO ] Running command: /usr/sbin/vgs --noheadings --readonly --units=b --nosuffix --separator=";" -S vg_name=ceph-577a30cd-ea0b-41a1-8aed-276d030b1e92 -o vg_name,pv_count,lv_count,vg_attr,vg_extent_count,vg_free_count,vg_extent_size
[2022-05-31 13:15:57,245][ceph_volume.process][INFO ] stdout ceph-577a30cd-ea0b-41a1-8aed-276d030b1e92";"1";"0";"wz--n-";"23497";"23497";"4194304
[2022-05-31 13:15:57,246][ceph_volume.api.lvm][DEBUG ] slots was passed: 1 -> 23497
[2022-05-31 13:15:57,246][ceph_volume.process][INFO ] Running command: /usr/sbin/lvcreate --yes -l 23497 -n osd-data-99936872-8847-42d0-a644-7c6568f7d662 ceph-577a30cd-ea0b-41a1-8aed-276d030b1e92
[2022-05-31 13:15:57,309][ceph_volume.process][INFO ] stdout Logical volume "osd-data-99936872-8847-42d0-a644-7c6568f7d662" created.
[2022-05-31 13:15:57,357][ceph_volume.process][INFO ] Running command: /usr/sbin/lvs --noheadings --readonly --separator=";" -a --units=b --nosuffix -S lv_name=osd-data-99936872-8847-42d0-a644-7c6568f7d662,vg_name=ceph-577a30cd-ea0b-41a1-8aed-276d030b1e92 -o lv_tags,lv_path,lv_name,vg_name,lv_uuid,lv_size
[2022-05-31 13:15:57,429][ceph_volume.process][INFO ] stdout ";"/dev/ceph-577a30cd-ea0b-41a1-8aed-276d030b1e92/osd-data-99936872-8847-42d0-a644-7c6568f7d662";"osd-data-99936872-8847-42d0-a644-7c6568f7d662";"ceph-577a30cd-ea0b-41a1-8aed-276d030b1e92";"SLnRA9-j2Wf-ZXj0-KF9H-9Lnz-99pV-vRujH9";"98553561088
[2022-05-31 13:15:57,429][ceph_volume.process][INFO ] Running command: /usr/sbin/lvchange --addtag ceph.type=data --addtag ceph.data_device=/dev/ceph-577a30cd-ea0b-41a1-8aed-276d030b1e92/osd-data-99936872-8847-42d0-a644-7c6568f7d662 /dev/ceph-577a30cd-ea0b-41a1-8aed-276d030b1e92/osd-data-99936872-8847-42d0-a644-7c6568f7d662
[2022-05-31 13:15:57,517][ceph_volume.process][INFO ] stdout Logical volume ceph-577a30cd-ea0b-41a1-8aed-276d030b1e92/osd-data-99936872-8847-42d0-a644-7c6568f7d662 changed.
[2022-05-31 13:15:57,518][ceph_volume.process][INFO ] Running command: /usr/sbin/lvchange --deltag ceph.type=data --deltag ceph.data_device=/dev/ceph-577a30cd-ea0b-41a1-8aed-276d030b1e92/osd-data-99936872-8847-42d0-a644-7c6568f7d662 /dev/ceph-577a30cd-ea0b-41a1-8aed-276d030b1e92/osd-data-99936872-8847-42d0-a644-7c6568f7d662
[2022-05-31 13:15:57,597][ceph_volume.process][INFO ] stdout Logical volume ceph-577a30cd-ea0b-41a1-8aed-276d030b1e92/osd-data-99936872-8847-42d0-a644-7c6568f7d662 changed.
[2022-05-31 13:15:57,598][ceph_volume.process][INFO ] Running command: /usr/sbin/lvchange --addtag ceph.osd_fsid=99936872-8847-42d0-a644-7c6568f7d662 --addtag ceph.osd_id=0 --addtag ceph.cluster_fsid=b9fb70da-2714-4440-a22a-08302a728bb3 --addtag ceph.cluster_name=ceph --addtag ceph.crush_device_class=None --addtag ceph.osdspec_affinity= --addtag ceph.cephx_lockbox_secret= --addtag ceph.encrypted=0 --addtag ceph.type=data --addtag ceph.vdo=0 --addtag ceph.journal_uuid=5e0b68da-7c1f-4271-9d25-00d66167a53d --addtag ceph.journal_device=/dev/sdn1 --addtag ceph.data_device=/dev/ceph-577a30cd-ea0b-41a1-8aed-276d030b1e92/osd-data-99936872-8847-42d0-a644-7c6568f7d662 --addtag ceph.data_uuid=SLnRA9-j2Wf-ZXj0-KF9H-9Lnz-99pV-vRujH9 /dev/ceph-577a30cd-ea0b-41a1-8aed-276d030b1e92/osd-data-99936872-8847-42d0-a644-7c6568f7d662
[2022-05-31 13:15:57,681][ceph_volume.process][INFO ] stdout Logical volume ceph-577a30cd-ea0b-41a1-8aed-276d030b1e92/osd-data-99936872-8847-42d0-a644-7c6568f7d662 changed.
[2022-05-31 13:15:57,682][ceph_volume.process][INFO ] Running command: /usr/bin/ceph-authtool --gen-print-key
[2022-05-31 13:15:57,703][ceph_volume.process][INFO ] stdout AQD9BpZid1LSKRAAdVigDmbcfLJnX3v/dCPB5w==
[2022-05-31 13:15:57,704][ceph_volume.process][INFO ] Running command: /usr/sbin/mkfs -t xfs -f -i size=2048 /dev/ceph-577a30cd-ea0b-41a1-8aed-276d030b1e92/osd-data-99936872-8847-42d0-a644-7c6568f7d662
[2022-05-31 13:15:57,902][ceph_volume.process][INFO ] stdout meta-data=/dev/ceph-577a30cd-ea0b-41a1-8aed-276d030b1e92/osd-data-99936872-8847-42d0-a644-7c6568f7d662 isize=2048 agcount=4, agsize=6015232 blks
= sectsz=512 attr=2, projid32bit=1
= crc=1 finobt=1, sparse=1, rmapbt=0
= reflink=1
data = bsize=4096 blocks=24060928, imaxpct=25
= sunit=0 swidth=0 blks
naming =version 2 bsize=4096 ascii-ci=0, ftype=1
log =internal log bsize=4096 blocks=11748, version=2
= sectsz=512 sunit=0 blks, lazy-count=1
realtime =none extsz=4096 blocks=0, rtextents=0
[2022-05-31 13:15:57,903][ceph_volume.process][INFO ] Running command: /usr/bin/mount -t xfs -o rw,noatime,inode64 /dev/ceph-577a30cd-ea0b-41a1-8aed-276d030b1e92/osd-data-99936872-8847-42d0-a644-7c6568f7d662 /var/lib/ceph/osd/ceph-0
[2022-05-31 13:15:57,907][ceph_volume.process][INFO ] stderr mount: /var/lib/ceph/osd/ceph-0: mount(2) system call failed: Operation not supported.
[2022-05-31 13:15:57,907][ceph_volume.devices.lvm.prepare][ERROR ] lvm prepare was unable to complete
Traceback (most recent call last):
  File "/usr/lib/python3/dist-packages/ceph_volume/devices/lvm/prepare.py", line 252, in safe_prepare
    self.prepare()
  File "/usr/lib/python3/dist-packages/ceph_volume/decorators.py", line 16, in is_root
    return func(*a, **kw)
  File "/usr/lib/python3/dist-packages/ceph_volume/devices/lvm/prepare.py", line 346, in prepare
    prepare_filestore(
  File "/usr/lib/python3/dist-packages/ceph_volume/devices/lvm/prepare.py", line 65, in prepare_filestore
    prepare_utils.mount_osd(device, osd_id, is_vdo=is_vdo)
  File "/usr/lib/python3/dist-packages/ceph_volume/util/prepare.py", line 320, in mount_osd
    process.run(command)
  File "/usr/lib/python3/dist-packages/ceph_volume/process.py", line 153, in run
    raise RuntimeError(msg)
RuntimeError: command returned non-zero exit status: 32
[2022-05-31 13:15:57,910][ceph_volume.devices.lvm.prepare][INFO ] will rollback OSD ID creation
[2022-05-31 13:15:57,910][ceph_volume.process][INFO ] Running command: /usr/bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring osd purge-new osd.0 --yes-i-really-mean-it
[2022-05-31 13:15:58,055][ceph_volume.process][INFO ] stderr 2022-05-31T13:15:58.043+0100 7f686b426700 -1 auth: unable to find a keyring on /etc/ceph/ceph.client.bootstrap-osd.keyring,/etc/ceph/ceph.keyring,/etc/ceph/keyring,/etc/ceph/keyring.bin,: (2) No such file or directory
2022-05-31T13:15:58.043+0100 7f686b426700 -1 AuthRegistry(0x7f68640595f0) no keyring found at /etc/ceph/ceph.client.bootstrap-osd.keyring,/etc/ceph/ceph.keyring,/etc/ceph/keyring,/etc/ceph/keyring.bin,, disabling cephx
[2022-05-31 13:15:58,494][ceph_volume.process][INFO ] stderr purged osd.0
[2022-05-31 13:15:58,506][ceph_volume][ERROR ] exception caught by decorator
Traceback (most recent call last):
  File "/usr/lib/python3/dist-packages/ceph_volume/decorators.py", line 59, in newfunc
    return f(*a, **kw)
  File "/usr/lib/python3/dist-packages/ceph_volume/main.py", line 152, in main
    terminal.dispatch(self.mapper, subcommand_args)
  File "/usr/lib/python3/dist-packages/ceph_volume/terminal.py", line 194, in dispatch
    instance.main()
  File "/usr/lib/python3/dist-packages/ceph_volume/devices/lvm/main.py", line 46, in main
    terminal.dispatch(self.mapper, self.argv)
  File "/usr/lib/python3/dist-packages/ceph_volume/terminal.py", line 194, in dispatch
    instance.main()
  File "/usr/lib/python3/dist-packages/ceph_volume/devices/lvm/prepare.py", line 441, in main
    self.safe_prepare()
  File "/usr/lib/python3/dist-packages/ceph_volume/devices/lvm/prepare.py", line 252, in safe_prepare
    self.prepare()
  File "/usr/lib/python3/dist-packages/ceph_volume/decorators.py", line 16, in is_root
    return func(*a, **kw)
  File "/usr/lib/python3/dist-packages/ceph_volume/devices/lvm/prepare.py", line 346, in prepare
    prepare_filestore(
  File "/usr/lib/python3/dist-packages/ceph_volume/devices/lvm/prepare.py", line 65, in prepare_filestore
    prepare_utils.mount_osd(device, osd_id, is_vdo=is_vdo)
  File "/usr/lib/python3/dist-packages/ceph_volume/util/prepare.py", line 320, in mount_osd
    process.run(command)
  File "/usr/lib/python3/dist-packages/ceph_volume/process.py", line 153, in run
    raise RuntimeError(msg)
RuntimeError: command returned non-zero exit status: 32
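The log above shows the prepare run succeeding all the way through vgcreate, lvcreate and mkfs, and only failing at the final step: mount(2) returns "Operation not supported" (exit status 32), after which ceph-volume rolls back and purges osd.0. The "no keyring found ... disabling cephx" lines are only warnings and are not the failure. To narrow it down, it may help to repeat the same mount by hand on the leftover logical volume and then look at dmesg, which normally records the kernel's reason for refusing the mount. A minimal sketch, reusing the VG/LV names from the log above (adjust them if the rollback already removed the LV; /mnt/osd-test is just a scratch mountpoint for the test):

# re-run the exact mkfs and mount that ceph-volume attempted, then check the kernel log
mkfs -t xfs -f -i size=2048 /dev/ceph-577a30cd-ea0b-41a1-8aed-276d030b1e92/osd-data-99936872-8847-42d0-a644-7c6568f7d662
mkdir -p /mnt/osd-test
mount -t xfs -o rw,noatime,inode64 /dev/ceph-577a30cd-ea0b-41a1-8aed-276d030b1e92/osd-data-99936872-8847-42d0-a644-7c6568f7d662 /mnt/osd-test
dmesg | tail -n 20    # XFS usually logs which mount option or superblock feature it rejected
umount /mnt/osd-test  # clean up if the mount unexpectedly succeeds

One thing worth checking in that dmesg output is whether the running kernel supports every feature this mkfs.xfs enables by default (the mkfs output above shows reflink=1, for instance); a userspace/kernel mismatch there is only a guess from this log, not a confirmed diagnosis.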