I cannot remove an iSCSI disk
maxthetor
24 Posts
July 11, 2017, 6:58 pm
When I try to remove an iSCSI disk, I get an error message.
http://imgur.com/a/jKBWa
root@san01:/opt/petasan/log# tail -f PetaSAN.log
11/07/2017 15:56:15 ERROR Delete disk 00005 error
11/07/2017 15:56:15 ERROR error removing image
Traceback (most recent call last):
File "/usr/lib/python2.7/dist-packages/PetaSAN/core/ceph/api.py", line 316, in delete_disk
rbd_inst.remove(ioctx, "".join([str(self.conf_api.get_image_name_prefix()), str(disk_id)]))
File "rbd.pyx", line 641, in rbd.RBD.remove (/opt/petasan/config/ceph-10.2.5/src/build/rbd.c:4940)
ImageBusy: error removing image
admin
2,930 Posts
July 12, 2017, 12:58 pm
We do see cases like this when there has been a recent node failure; typically you would have to wait about 15 minutes before retrying. Ceph places a 'watch' on an rbd image to track which clients are connected to it, and it will not allow the image to be deleted while it still has clients. A clean client stop removes its watch, but an unclean shutdown does not; the watch has a heartbeat and expires if the client is not heard from for 15 minutes.
There is a way to force the watch to expire, but it does so across all images for that client rather than a specific image, so it is a bit risky. Currently we just wait the 15 minutes for the watch to expire.
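For illustration only (we do not do this from the UI), a rough sketch of force-expiring a watch via the Ceph CLI; the client address below is a placeholder you would take from the watchers list, and 'xx' stands for your cluster name. Note this blacklists the whole client, not just this one image:
# show the watchers and their client addresses
rbd status image-00005 --cluster xx
# blacklist the stale client so its watch expires (affects all images held by that client)
ceph osd blacklist add 192.168.1.10:0/123456789 --cluster xx
# remove the blacklist entry again once the image has been deleted
ceph osd blacklist rm 192.168.1.10:0/123456789 --cluster xx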
If this is not your case, please let us know in more detail what steps you took and we will look into it.
maxthetor
24 Posts
July 13, 2017, 10:31 pm
It's been a few days and I still cannot remove the disk. Is there any way to check who is still connected to this rbd image?
admin
2,930 Posts
July 14, 2017, 4:57 am
The failure log shows the error is coming from the Ceph API call, and ImageBusy indicates the image still has watchers.
You can see the connected clients / watch list with:
rbd status image-00005 --cluster xx
If you do see connected clients while PetaSAN shows the disk as stopped, please double check that you had not mapped it manually using the cli.
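For example, a quick way to check for and undo a manual mapping on a node (the device path here is just an example):
# list any rbd images currently mapped on this node
rbd showmapped --cluster xx
# unmap a manually mapped image
rbd unmap /dev/rbd0 --cluster xx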
Does this happen for any disk you try to delete, or only in this case? If it is a general issue, I would try deleting a stopped disk manually using
rbd rm image-0000X --cluster xx
and see what errors you get.
maxthetor
24 Posts
July 14, 2017, 11:32 pm
Great, doing it that way worked.
Thank you