
Why can't I add an OSD?

I have a cluster of three nodes with 6 disks each. I added 6 more disks to each node, but now I cannot add an OSD on any node. What am I doing wrong?

29/10/2020 17:24:23 INFO     /opt/petasan/scripts/admin/node_manage_disks.py add-osd
29/10/2020 17:24:23 INFO     params
29/10/2020 17:24:23 INFO     -disk_name sdd
29/10/2020 17:24:23 INFO     Start add osd job for disk sdd.
29/10/2020 17:24:23 INFO     Start add osd job 50490

29/10/2020 17:24:37 INFO     Start cleaning disk : sdd
29/10/2020 17:24:42 INFO     Executing : wipefs --all /dev/sdd1
29/10/2020 17:24:42 INFO     Executing : dd if=/dev/zero of=/dev/sdd1 bs=1M count=20 oflag=direct,dsync >/dev/null 2>&1
29/10/2020 17:24:42 INFO     Executing : wipefs --all /dev/sdd
29/10/2020 17:24:42 INFO     Executing : dd if=/dev/zero of=/dev/sdd bs=1M count=20 oflag=direct,dsync >/dev/null 2>&1
29/10/2020 17:24:43 INFO     Executing : parted -s /dev/sdd mklabel gpt
29/10/2020 17:24:43 INFO     Executing : partprobe /dev/sdd
29/10/2020 17:24:46 INFO     User didn't select a journal for disk sdd, so the journal will be on the same disk.
29/10/2020 17:24:46 INFO     User didn't select a cache for disk sdd.
29/10/2020 17:24:46 INFO     User selected cache  disk for disk sdd.
29/10/2020 17:24:46 INFO     Start prepare bluestore OSD : sdd
29/10/2020 17:24:46 INFO     Running command: /sbin/lvs --noheadings --readonly --separator=";" -a -o lv_tags,lv_path,lv_name,vg_name,lv_uuid,lv_size
29/10/2020 17:24:47 INFO     stdout ceph.block_device=/dev/ceph-1e54fbb7-4402-4c38-9320-dd45dec44262/osd-block-469514a5-7f97-47d3-a3e3-a65e0552b14d,ceph.block_uuid=2wyfFo-PDqI-JR7l-KH89-0W2V-KBfD-EGcbZl,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=cc2a049b-1f7b-4020-92bf-3af68b5e707c,ceph.cluster_name=ceph,ceph.crush_device_class=None,ceph.encrypted=0,ceph.osd_fsid=469514a5-7f97-47d3-a3e3-a65e0552b14d,ceph.osd_id=19,ceph.type=block,ceph.vdo=0";"/dev/ceph-1e54fbb7-4402-4c38-9320-dd45dec44262/osd-block-469514a5-7f97-47d3-a3e3-a65e0552b14d";"osd-block-469514a5-7f97-47d3-a3e3-a65e0552b14d";"ceph-1e54fbb7-4402-4c38-9320-dd45dec44262";"2wyfFo-PDqI-JR7l-KH89-0W2V-KBfD-EGcbZl";"<7.28t
29/10/2020 17:24:47 INFO     stdout ceph.block_device=/dev/ceph-53320bc4-31b5-4e67-b917-9cc8f77b2da8/osd-block-c0e88099-412f-4292-87c5-da4e88d820d9,ceph.block_uuid=AfXoWU-7N55-H34r-jbUA-mqD4-Apqb-dhjHl0,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=cc2a049b-1f7b-4020-92bf-3af68b5e707c,ceph.cluster_name=ceph,ceph.crush_device_class=None,ceph.encrypted=0,ceph.osd_fsid=c0e88099-412f-4292-87c5-da4e88d820d9,ceph.osd_id=3,ceph.type=block,ceph.vdo=0";"/dev/ceph-53320bc4-31b5-4e67-b917-9cc8f77b2da8/osd-block-c0e88099-412f-4292-87c5-da4e88d820d9";"osd-block-c0e88099-412f-4292-87c5-da4e88d820d9";"ceph-53320bc4-31b5-4e67-b917-9cc8f77b2da8";"AfXoWU-7N55-H34r-jbUA-mqD4-Apqb-dhjHl0";"<7.28t
29/10/2020 17:24:47 INFO     stdout ceph.block_device=/dev/ceph-6a990884-223c-41bc-b330-4a7d913be6f0/osd-block-9145b0e2-615d-4f47-a58e-1b89a9513d02,ceph.block_uuid=KR9wll-D0Pc-4U4I-cDIT-p2xb-HnUc-0pA8AR,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=cc2a049b-1f7b-4020-92bf-3af68b5e707c,ceph.cluster_name=ceph,ceph.crush_device_class=None,ceph.encrypted=0,ceph.osd_fsid=9145b0e2-615d-4f47-a58e-1b89a9513d02,ceph.osd_id=18,ceph.type=block,ceph.vdo=0";"/dev/ceph-6a990884-223c-41bc-b330-4a7d913be6f0/osd-block-9145b0e2-615d-4f47-a58e-1b89a9513d02";"osd-block-9145b0e2-615d-4f47-a58e-1b89a9513d02";"ceph-6a990884-223c-41bc-b330-4a7d913be6f0";"KR9wll-D0Pc-4U4I-cDIT-p2xb-HnUc-0pA8AR";"<7.28t
29/10/2020 17:24:47 INFO     stdout ceph.block_device=/dev/ceph-827f23aa-5ec9-4bad-97bf-09876c7049b2/osd-block-e2b0deca-b890-4a57-baea-e9ad550c6bc0,ceph.block_uuid=JLmRwT-AGxd-ogCE-ykfL-ZcmE-rdKM-jz0qBL,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=cc2a049b-1f7b-4020-92bf-3af68b5e707c,ceph.cluster_name=ceph,ceph.crush_device_class=None,ceph.encrypted=0,ceph.osd_fsid=e2b0deca-b890-4a57-baea-e9ad550c6bc0,ceph.osd_id=16,ceph.type=block,ceph.vdo=0";"/dev/ceph-827f23aa-5ec9-4bad-97bf-09876c7049b2/osd-block-e2b0deca-b890-4a57-baea-e9ad550c6bc0";"osd-block-e2b0deca-b890-4a57-baea-e9ad550c6bc0";"ceph-827f23aa-5ec9-4bad-97bf-09876c7049b2";"JLmRwT-AGxd-ogCE-ykfL-ZcmE-rdKM-jz0qBL";"<7.28t
29/10/2020 17:24:47 INFO     stdout ceph.block_device=/dev/ceph-834043a7-6134-4626-9d08-590e69623686/osd-block-7fa1650d-aac0-4bbc-8d4f-35be8157f7e0,ceph.block_uuid=mTZksI-5qjo-CGP7-FLfM-OVXM-wKPq-1F8RAy,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=cc2a049b-1f7b-4020-92bf-3af68b5e707c,ceph.cluster_name=ceph,ceph.crush_device_class=None,ceph.encrypted=0,ceph.osd_fsid=7fa1650d-aac0-4bbc-8d4f-35be8157f7e0,ceph.osd_id=15,ceph.type=block,ceph.vdo=0";"/dev/ceph-834043a7-6134-4626-9d08-590e69623686/osd-block-7fa1650d-aac0-4bbc-8d4f-35be8157f7e0";"osd-block-7fa1650d-aac0-4bbc-8d4f-35be8157f7e0";"ceph-834043a7-6134-4626-9d08-590e69623686";"mTZksI-5qjo-CGP7-FLfM-OVXM-wKPq-1F8RAy";"<7.28t
29/10/2020 17:24:47 INFO     stdout ceph.block_device=/dev/ceph-83d90fe6-0383-4808-a03e-63bbc92a81de/osd-block-07593034-ec3d-4b4f-af89-43ffc98eb1b7,ceph.block_uuid=E8Ry1L-YIye-eiGa-8SGR-4oxi-XSG9-aI03Xu,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=cc2a049b-1f7b-4020-92bf-3af68b5e707c,ceph.cluster_name=ceph,ceph.crush_device_class=None,ceph.encrypted=0,ceph.osd_fsid=07593034-ec3d-4b4f-af89-43ffc98eb1b7,ceph.osd_id=17,ceph.type=block,ceph.vdo=0";"/dev/ceph-83d90fe6-0383-4808-a03e-63bbc92a81de/osd-block-07593034-ec3d-4b4f-af89-43ffc98eb1b7";"osd-block-07593034-ec3d-4b4f-af89-43ffc98eb1b7";"ceph-83d90fe6-0383-4808-a03e-63bbc92a81de";"E8Ry1L-YIye-eiGa-8SGR-4oxi-XSG9-aI03Xu";"<7.28t
29/10/2020 17:24:47 INFO     Creating data partition num 1 size 0 on /dev/sdd
29/10/2020 17:24:48 INFO     Calling partprobe on sdd device
29/10/2020 17:24:48 INFO     Executing partprobe /dev/sdd
29/10/2020 17:24:48 INFO     Calling udevadm on sdd device
29/10/2020 17:24:48 INFO     Executing udevadm settle --timeout 30
29/10/2020 17:24:52 INFO     Starting : ceph-volume --log-path /opt/petasan/log lvm prepare  --bluestore --data /dev/sdd1
29/10/2020 17:24:53 ERROR    Error executing : ceph-volume --log-path /opt/petasan/log lvm prepare  --bluestore --data /dev/sdd1
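The ERROR line above only says that ceph-volume failed; the underlying cause is normally written to ceph-volume's own log under the directory passed via --log-path. A quick way to pull the tail of that log (the file name ceph-volume.log is the tool's usual default, assumed here):

```shell
# ceph-volume logs details under the --log-path directory from the command
# above; the exact file name is an assumption based on ceph-volume's default.
LOG=/opt/petasan/log/ceph-volume.log
if [ -f "$LOG" ]; then
    tail -n 50 "$LOG"
else
    echo "log not found: $LOG"
fi
```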

Device Path               Size         rotates available Model name
/dev/sdi                  7.28 TB      True    True      TOSHIBA HDWF180
/dev/sdj                  7.28 TB      True    True      TOSHIBA HDWF180
/dev/sdk                  7.28 TB      True    True      TOSHIBA HDWF180
/dev/sdl                  7.28 TB      True    True      TOSHIBA HDWF180
/dev/sdm                  7.28 TB      True    True      TOSHIBA HDWF180
/dev/sdn                  7.28 TB      True    True      MG06SCA800E
/dev/sda                  119.24 GB    False   False     SPCC Solid State
/dev/sdb                  7.28 TB      True    False     MG06SCA800E
/dev/sdc                  7.28 TB      True    False     MG06SCA800E
/dev/sdd                  7.28 TB      True    False     TOSHIBA HDWF180
/dev/sde                  7.28 TB      True    False     MG06SCA800E
/dev/sdf                  7.28 TB      True    False     MG06SCA800E
/dev/sdg                  7.28 TB      True    False     MG06SCA800E
/dev/sdh                  7.28 TB      True    False     TOSHIBA HDWF180


Can you try adding the disk /dev/sdd from the earlier log manually:

wipefs --all /dev/sdd
dd if=/dev/zero of=/dev/sdd bs=1M count=20 oflag=direct,dsync
parted -s /dev/sdd mklabel gpt
partprobe /dev/sdd
parted -s /dev/sdd mkpart primary 1 100%
ceph-volume lvm prepare --bluestore --data /dev/sdd1

ceph-volume lvm activate --all
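Since the same sequence will likely be needed on the remaining new disks, it can be wrapped in a small script. This is only a sketch: the DISK variable and DRY_RUN guard are additions here, not part of PetaSAN, and the commands are destructive, so review the dry-run output before running for real.

```shell
#!/bin/sh
# Sketch of the manual OSD-prep steps above for one disk.
# DISK is a placeholder; DRY_RUN=1 (the default) only prints the commands.
DISK=${DISK:-/dev/sdd}
DRY_RUN=${DRY_RUN:-1}

run() {
    if [ "$DRY_RUN" = "1" ]; then
        echo "would run: $*"
    else
        "$@"
    fi
}

run wipefs --all "$DISK"
run dd if=/dev/zero of="$DISK" bs=1M count=20 oflag=direct,dsync
run parted -s "$DISK" mklabel gpt
run partprobe "$DISK"
run parted -s "$DISK" mkpart primary 1 100%
run ceph-volume lvm prepare --bluestore --data "${DISK}1"
run ceph-volume lvm activate --all
```

Once the printed commands look right, rerun with DRY_RUN=0 (and the appropriate DISK); afterwards `ceph osd tree` should show the new OSD.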

If you get an error, can you show any related errors in

dmesg
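To narrow dmesg down to the disk in question, a filter like the following can help (sdd taken from the log above; the fallback echo is just so the pipeline reports when nothing matches):

```shell
# Show kernel messages that mention the device being prepared.
DEV=sdd
dmesg 2>/dev/null | grep -i "$DEV" || echo "no kernel messages mention $DEV"
```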


The disk was added successfully with the manual steps, and after that I was able to add the other OSDs through the GUI.

Thank you!