iSCSI disk list not loading
joe
3 Posts
September 18, 2017, 12:28 am
Hi,
For some reason I had to remove all OSDs from all hosts and replace them with different (bigger) ones. After that, I am not able to load the iSCSI disk list page or add any new iSCSI disks.
Is it possible to manually delete my old iSCSI disks using SSH?
Thanks
admin
2,930 Posts
September 18, 2017, 7:04 am
The list is not loading because Ceph does not have any storage to load it from. The health of your cluster in the dashboard should also reflect this. There are two options. The first is to go to the Physical Disk List on every node, add OSDs from your new disks, and delete any old OSDs the cluster had on the old disks: manually select each disk in the UI and click Add OSD. Note that you do not need to delete the previous iSCSI disk list; all iSCSI info is saved as metadata with the individual Ceph block devices, so by removing your storage you already removed all iSCSI info. The second option, since you changed all your disks, is to rebuild your cluster; this is probably quicker.
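If you want to confirm this from the command line rather than the dashboard, a quick sanity check over SSH (assuming the standard Ceph CLI on a management node, with CLUSTER_NAME being the name you gave your cluster) might look like:
ceph status --cluster CLUSTER_NAME    # overall health and how many OSDs are up/in
ceph osd tree --cluster CLUSTER_NAME  # which OSDs exist and on which nodes
ceph df --cluster CLUSTER_NAME        # how much raw capacity the cluster actually has
With no OSDs, ceph status reports an error state and ceph df shows no usable capacity, which is why the iSCSI disk list cannot be read.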
Last edited on September 18, 2017, 7:08 am by admin · #2
joe
3 Posts
September 18, 2017, 9:03 am
Hi,
Thanks for the fast response. I removed all the old OSDs and added the new ones, but Ceph is still showing an error.
About rebuilding the cluster: do I need to reinstall all nodes, or are there any SSH commands to do it? I don't see any option for it in the UI.
Thanks
Joseph Stephen
admin
2,930 Posts
September 18, 2017, 9:59 am
OK, it is a bit more complicated. Typically, if you physically remove disks, it should be done one at a time, allowing some time in between, since Ceph handles automatic recovery. If you remove many disks at once, Ceph gets stuck because it believes it is still responsible for the data, so we need to completely delete the "rbd" (RADOS block device) pool and recreate it:
ceph osd pool delete rbd rbd --yes-i-really-really-mean-it --cluster CLUSTER_NAME
ceph osd pool create rbd PG_COUNT PG_COUNT --cluster CLUSTER_NAME
ceph osd pool set rbd size 2 --cluster CLUSTER_NAME
ceph osd pool set rbd min_size 1 --cluster CLUSTER_NAME
where CLUSTER_NAME is the name you specified for your cluster and PG_COUNT is a number based on how many disks you expect in your cluster:
disks -> PG_COUNT
3-15 -> 256
15-50 -> 1024
50-200 -> 4096
More than 200 -> 8192
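As a concrete example (the cluster name "demo" is just a placeholder here), a cluster of around 12 disks would use PG_COUNT = 256 from the table above, so the sequence becomes:
ceph osd pool delete rbd rbd --yes-i-really-really-mean-it --cluster demo
ceph osd pool create rbd 256 256 --cluster demo
ceph osd pool set rbd size 2 --cluster demo
ceph osd pool set rbd min_size 1 --cluster demo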
The second option, rebuilding the cluster, would involve re-installing from the ISO for a fresh install.
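If you go the pool-recreate route, one way to double-check the result afterwards from the shell (standard Ceph commands; adjust names to your setup) would be:
ceph osd pool ls detail --cluster CLUSTER_NAME  # the rbd pool should show size 2, min_size 1 and the chosen pg_num
ceph status --cluster CLUSTER_NAME              # health should return to HEALTH_OK once the new PGs are created
rbd ls --cluster CLUSTER_NAME                   # should list nothing until new iSCSI disks are added from the UI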
joe
3 Posts
September 18, 2017, 10:38 am
Thanks, it worked without a fresh install. Great, it saved me a lot of time, and I hope it will help others in the same situation. Ceph health is green and I can add new iSCSI disks.