
Delete iSCSI disk not working

Hi ,

We have set up a 3-node PetaSAN cluster and we are unable to delete an iSCSI disk. The only workaround we have found is to delete the pool; then the disk gets deleted.

We are using version 2.2.0.

Thanks!

Is it taking a long time, or just failing to delete?

If you stop the iSCSI disk and try to delete it manually via Ceph commands, does it work?

rbd rm image-000XX --cluster xx

If it fails, it is probably due to a client connection still pending. Do a

rbd status image-000XX --cluster xx

Does it list any "watchers", which are active client connections? Normally, when we stop a disk the watch is removed, but in some cases when the disk stops in an unclean way (e.g. a node is unplugged) the watch will remain for approximately 15 minutes before Ceph detects the connection is dead and allows deletion. Another possibility is that you have connections to the disk from outside PetaSAN.
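If you want to script this check, rbd can emit its status as JSON via --format json. A minimal sketch that parses that output and returns the watcher addresses; the exact shape of the JSON ({"watchers": [{"address": ...}, ...]}) is an assumption based on typical rbd JSON output, so verify it against your cluster first:

```python
import json
import subprocess


def parse_watchers(raw):
    """Parse 'rbd status --format json' output and return watcher addresses.

    Assumes the JSON has the shape {"watchers": [{"address": ..., ...}, ...]}.
    Kept separate from the rbd call so the parsing can be tested offline.
    """
    status = json.loads(raw)
    return [w.get("address") for w in status.get("watchers", [])]


def list_watchers(image, cluster):
    # Run the same command discussed above, but with machine-readable output.
    out = subprocess.check_output(
        ["rbd", "status", image, "--cluster", cluster, "--format", "json"]
    )
    return parse_watchers(out)
```

If the returned list is empty, no client holds a watch on the image and the rbd rm should be able to proceed.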

If, however, the command starts but takes a long time, then yes, large images may take a couple of minutes to delete, especially if the cluster is loaded or not fast.

 

Well, I have finally found that it took around 24 hours for the disk to be deleted. I have noticed some interesting things:

  1. When the iSCSI disk is mounted as a datastore and you delete data from it in VMware, the data is not deleted in PetaSAN; data usage remains the same.
  2. When you delete the disk, Ceph seems to reclaim all unused space from that disk, so the disk deletion does not finish until all of that space is released for good.


A VMware datastore is formatted with the VMFS filesystem. PetaSAN works at the block level, not at the file-system level, so in many cases it is unaware of data deletions on the file system. If you delete a vmdk from a datastore, VMware sends an unmap/trim request, which we should handle if the emulate_tpu flag is 1 in /opt/petasan/config/tuning/current/lio_tunings
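Since the tunings file is plain JSON, the current flag value can be read directly. A small sketch; the file layout ({"storage_objects": [{"attributes": {...}}]}) is assumed to match the fragment quoted in this thread:

```python
import json


def emulate_tpu_values(path="/opt/petasan/config/tuning/current/lio_tunings"):
    """Return the emulate_tpu value of each storage object in the tunings file.

    Assumes the file contains {"storage_objects": [{"attributes": {...}}, ...]}
    as quoted in this thread.
    """
    with open(path) as f:
        config = json.load(f)
    return [
        obj.get("attributes", {}).get("emulate_tpu")
        for obj in config.get("storage_objects", [])
    ]
```

A result of [0] on every node means unmap/trim requests from VMware are currently being ignored.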

I see this config is set to 0 by default; should we set it to 1, then?

"storage_objects": [
{
"attributes": {
"block_size": 512,
"emulate_3pc": 1,
"emulate_caw": 1,
"emulate_tpu": 0,
"queue_depth": 256

 

Setting this to 1 means "deleted" data would be erased immediately?

Yes, setting the flag should free discarded data immediately. You may need to reboot the nodes for it to take effect.

Note that if it is left at 0, the space will still be reused for new data, so there is no actual space loss.
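For reference, flipping the flag could be scripted rather than edited by hand. A sketch assuming the same file layout as the fragment quoted above; per the advice in this thread, a node reboot may still be needed before LIO picks up the change:

```python
import json


def enable_unmap(path="/opt/petasan/config/tuning/current/lio_tunings"):
    """Set emulate_tpu to 1 for every storage object and rewrite the file.

    Assumes the file contains {"storage_objects": [{"attributes": {...}}, ...]}
    as quoted in this thread; this is a sketch, not a supported PetaSAN tool.
    """
    with open(path) as f:
        config = json.load(f)
    for obj in config.get("storage_objects", []):
        obj.setdefault("attributes", {})["emulate_tpu"] = 1
    with open(path, "w") as f:
        json.dump(config, f, indent=2)
```

Run it on each node, then reboot so the iSCSI targets are rebuilt with the new attribute.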