Delete iSCSI disk not working
wailer
75 Posts
November 29, 2018, 3:52 pm
Hi,
We have set up a 3-node PetaSAN cluster and we are unable to delete an iSCSI disk. The only workaround we have found is to delete the pool; then the disk gets deleted.
We are using version 2.2.0.
Thanks!
admin
2,930 Posts
November 29, 2018, 10:08 pm
Is it taking a long time, or just failing to delete?
If you stop the iSCSI disk and try to delete it manually via Ceph commands, does it work?
rbd rm image-000XX --cluster xx
If it fails, it is probably due to a client connection that is still pending. Do a
rbd status image-000XX --cluster xx
Does it list any "watchers", which are active client connections? Normally when we stop a disk the watch is removed, but in some cases when the disk stops in an unclean way (node unplugged) the watch will remain for approximately 15 minutes before Ceph detects the connection is gone and allows deletion. Another possibility is that you have connections to the disk from outside PetaSAN.
If, however, the command starts but takes a long time, then yes, large images may take a couple of minutes to delete, especially if the cluster is loaded or not fast.
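As a sketch of the manual check described above (image-000XX and the cluster name xx are the placeholders used in this thread, and the watcher line is only an illustration of the output format, not output from this cluster):
# check whether any client still holds a watch on the stopped image
rbd status image-000XX --cluster xx
# an active connection would show up roughly like this:
#   Watchers:
#       watcher=10.0.1.21:0/3075267220 client.4117 cookie=1
# once no watchers are listed, the image can be removed
rbd rm image-000XX --cluster xx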
Last edited on November 29, 2018, 10:10 pm by admin · #2
wailer
75 Posts
April 2, 2019, 9:40 am
Well, I finally found that it took around 24 hours for the disk to be deleted. I have noticed some interesting things:
- When the iSCSI disk is mounted as a datastore and you delete data from it in VMware, the data is not deleted in PetaSAN; data usage remains the same.
- When you delete the disk, Ceph seems to reclaim all the no-longer-used space from that disk, so until all this space gets "deleted" for good, the disk deletion doesn't finish.
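As a side note, a quick way to see how much data an image actually holds before deleting it (a sketch using the placeholder names from earlier in this thread):
rbd du image-000XX --cluster xx
# prints provisioned vs. actually used space; more used space generally means a longer delete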
admin
2,930 Posts
April 2, 2019, 1:23 pm
A VMware datastore is formatted with the VMFS filesystem. PetaSAN works at the block level, not at the file system level, so in many cases it is unaware of data deletions on the file system. If you delete a vmdk from a datastore, VMware sends an unmap/trim request, which we should handle if the emulate_tpu flag is 1 in /opt/petasan/config/tuning/current/lio_tunings.
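A quick way to check the current value on a node (a minimal sketch, assuming the file path quoted above):
grep -n '"emulate_tpu"' /opt/petasan/config/tuning/current/lio_tunings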
wailer
75 Posts
April 3, 2019, 8:06 am
I see this setting is 0 by default. Should we set it to 1, then?
"storage_objects": [
{
"attributes": {
"block_size": 512,
"emulate_3pc": 1,
"emulate_caw": 1,
"emulate_tpu": 0,
"queue_depth": 256
Does setting this to 1 mean "deleted" data would be erased immediately?
admin
2,930 Posts
April 3, 2019, 2:27 pm
Yes, setting the flag should free discarded data immediately. You may need to reboot the nodes for it to take effect.
Note that if it is left at 0 (false), the space will still be reused for new data, so there is no space loss.
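A minimal sketch of applying the change on a node, assuming the lio_tunings path quoted earlier in the thread (back up the file first and repeat on each node):
# keep a copy of the original tuning file
cp /opt/petasan/config/tuning/current/lio_tunings /opt/petasan/config/tuning/current/lio_tunings.bak
# flip the thin-provisioning unmap flag from 0 to 1
sed -i 's/"emulate_tpu": 0/"emulate_tpu": 1/' /opt/petasan/config/tuning/current/lio_tunings
# then reboot the node so the iSCSI targets pick up the new attribute
For space that was already freed inside an existing VMFS datastore, the unmap has to be triggered from the VMware side, for example with esxcli storage vmfs unmap -l <datastore-label> on an ESXi host; that command is an assumption about your environment, not something confirmed in this thread.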