
iSCSI disk list not loading

Hi,

For some reason, I had to remove all OSDs from all hosts and replace them with different ones (bigger ones). After that, I am not able to load the iSCSI disk list page or add any new iSCSI disks.

Is it possible to manually delete my old iSCSI disks using SSH?

Thanks

The list is not loading because Ceph does not have any storage to load it from; the health of your cluster in the dashboard should also reflect this. There are two options. The first is to manually go to the Physical Disk List on every node, delete any old OSDs your cluster had from the old disks, and add OSDs from your new disks: select each disk in the UI and click Add OSD. Note that you do not need to delete the previous iSCSI disk list; all iSCSI info is saved as metadata with the individual Ceph block devices, so by removing your storage you already removed all iSCSI info. The second option, since you changed all your disks, is to rebuild your cluster; this is probably quicker.
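If you want an optional sanity check from the shell after adding the new OSDs (just a sketch, assuming you have SSH access to one of the nodes, where CLUSTER_NAME is the name you gave your cluster):

# overall cluster health plus how many OSDs are up and in
ceph status --cluster CLUSTER_NAME
# per-host tree of OSDs and their up/down state
ceph osd tree --cluster CLUSTER_NAME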


Hi,

Thanks for the fast response. I removed all the old OSDs and added the new ones, but Ceph is still showing an error.

Rebuilding the cluster? Do I need to reinstall all nodes, or are there any SSH commands to do it? I don't see any options for it in the UI.

Thanks

Joseph Stephen

OK, it is a bit more complicated. If you physically remove disks, it should typically be done one at a time, allowing some time in between, since Ceph is responsible for automatic recovery. If you remove many disks at once, Ceph gets stuck because it believes it is still responsible for the data. So we need to completely delete the "rbd" (RADOS block device) pool and recreate it:

ceph osd pool delete rbd rbd --yes-i-really-really-mean-it --cluster CLUSTER_NAME
ceph osd pool create rbd PG_COUNT PG_COUNT --cluster CLUSTER_NAME
ceph osd pool set rbd size 2 --cluster CLUSTER_NAME
ceph osd pool set rbd min_size 1 --cluster CLUSTER_NAME

where CLUSTER_NAME is the name you specified for your cluster and PG_COUNT is a number based on how many disks you expect in your cluster (see the worked example after the table):

disks -> PG_COUNT
3-15 -> 256
15-50 -> 1024
50-200 -> 4096
More than 200 -> 8192
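As a worked example (just a sketch; the cluster name "demo" below is only a placeholder for your own CLUSTER_NAME): with 3 nodes and 4 new disks each you have 12 disks, which falls in the 3-15 range, so PG_COUNT would be 256:

# delete the old rbd pool and recreate it with 256 placement groups
ceph osd pool delete rbd rbd --yes-i-really-really-mean-it --cluster demo
ceph osd pool create rbd 256 256 --cluster demo
# keep 2 copies of the data, allow I/O with only 1 copy available
ceph osd pool set rbd size 2 --cluster demo
ceph osd pool set rbd min_size 1 --cluster demo
# confirm the pool exists and the cluster goes back to HEALTH_OK
ceph osd pool ls detail --cluster demo
ceph status --cluster demo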

The second option, rebuilding the cluster, would involve re-installing from the ISO for a fresh install.

Thanks, it worked without a fresh install. Great, it saved me a lot of time and I hope it will help others in the same situation. Ceph health is green and I can add new iSCSI disks.