
Reclaim space from iSCSI disk

Hi Admin,

I have installed PetaSAN with four nodes, each with two OSDs. Then I created a single iSCSI disk and presented it to four ESXi hosts. I moved a couple of VM guests onto it, and up to this point everything worked as expected. Then I vMotioned one VM guest to another storage and noticed that the occupied space in the cluster does not reflect the removed VM; it stays the same as before the move. Then I removed all VMs from the PetaSAN iSCSI disk and the space still shows the same as before. If I browse the ESXi datastore there is nothing in it, but the PetaSAN dashboard still shows the previously occupied space. My question is: how do I reclaim the space on an iSCSI disk that is mounted as a datastore on an ESXi host?

Even if the space is shown as used, it will be re-used if you add new data to the datastore. Generally a SAN works at the block layer, while the VMware VMFS file system sits above it; VMFS may record a deletion by writing some metadata at the filesystem layer, but this is normally not understood by the SAN.
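As a quick check from the ESXi side, you can see whether the host regards the LUN as thin provisioned (and therefore a candidate for space reclamation) by querying the device directly. This is only a sketch, using one of the device identifiers that appears later in this thread; substitute your own naa. ID:

# show device properties, including the Thin Provisioning Status field
esxcli storage core device list -d naa.60014050000100000000000000000000 | grep -i "Thin Provisioning"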

PetaSAN does support VMware VAAI/ATS, including the unmap/trim command, but the unmap command is disabled by default; this is the LIO default setting, as per

http://linux-iscsi.org/wiki/VStorage_APIs_for_Array_Integration

We do have an LIO tuning section in the UI during cluster creation; beyond this, you need to make the change manually on all nodes:

nano /opt/petasan/config/tuning/current/lio_tunings
and set the "emulate_tpu" attribute to 1.
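If you want to confirm that the attribute actually took effect after restarting the iSCSI disks, you can read it back from the running LIO configuration on each node. This is only a sketch, assuming the standard LIO configfs layout under /sys/kernel/config; the backstore directory names will depend on your disks, and a value of 1 means unmap/TRIM emulation is enabled:

# print emulate_tpu for every configured backstore device
grep -H . /sys/kernel/config/target/core/*/*/attrib/emulate_tpu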

I am not sure that VMFS will send an unmap command in the case you described, but I believe it will. Also, knowing that in either case the space will be re-used anyway, it may not matter much if you leave it as is.
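As a side note, whether VMFS issues unmap automatically depends on the VMFS version: VMFS5 only reclaims space when you run the manual esxcli unmap command, while VMFS6 (ESXi 6.5 and later) can reclaim space automatically in the background. A hedged check, reusing one of the datastore labels from this thread; it only applies to VMFS6 datastores:

# show the automatic space-reclamation settings of a VMFS6 datastore
esxcli storage vmfs reclaim config get -l PETASAN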

I have changed this parameter on all PetaSAN nodes and restarted the iSCSI disks, then rescanned the storage on the ESXi hosts.

Trying to unmap from the ESXi CLI:

[root@NodeD:~] esxcli storage vmfs unmap -l PETASAN-1
Devices backing volume 5bb9acfd-561acb78-5784-ac1f6b946d16 do not support UNMAP
[root@NodeD:~] esxcli storage vmfs unmap -l PETASAN-2
Devices backing volume 5bb9ad2c-50d2086c-8d85-ac1f6b946d16 do not support UNMAP

 

And then I issued this command to check the VAAI status:

esxcli storage core device vaai status get

naa.60014050000100000000000000000000
VAAI Plugin Name:
ATS Status: supported
Clone Status: supported
Zero Status: supported
Delete Status: supported

naa.60014050000900000000000000000000
VAAI Plugin Name:
ATS Status: supported
Clone Status: supported
Zero Status: supported
Delete Status: supported

 

But even though Delete Status shows supported, I cannot run unmap on the ESXi datastore.

Then I copied the same VM back and forth a couple of times and I am getting OSD near-full warnings, which means the space has not been re-used.

 

 

The re-use of space should happen. On a large datastore it may not be apparent immediately, but VMFS should do this; you can probably test it with a small datastore, as it is quite unlikely that the space usage will keep growing.
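If you test with a small datastore, you can also watch the effect from the cluster side rather than only the dashboard. A minimal sketch, assuming shell access to one of the PetaSAN nodes; PetaSAN clusters are usually created with a non-default cluster name, so the --cluster option may be needed:

# show raw and per-pool usage; run before and after copying/deleting a VM
ceph df
# if the cluster was created with a custom name (placeholder shown):
ceph df --cluster <your-cluster-name>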

I know the VAAI unmap has been tested before and was working, but maybe under different operations; I will check whether it works in your scenario of vMotion migration.

OK, it took about an hour after I issued:

esxcli storage vmfs unmap -l PETASAN

esxcli storage vmfs unmap -l PETASAN-1

esxcli storage vmfs unmap -l PETASAN-2

The size of PETASAN is 4 TB, PETASAN-1 is 4 TB, and PETASAN-2 is 2 TB.
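For reference, the duration of an unmap pass can be influenced with the reclaim-unit option of esxcli, which sets how many VMFS blocks are reclaimed per iteration (the default is 200). A hedged illustration, reusing the datastore label above; whether a larger unit actually speeds things up depends on the array, so it is worth testing:

# reclaim in larger batches; -n / --reclaim-unit is the number of VMFS blocks per iteration
esxcli storage vmfs unmap -l PETASAN -n 1000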

Only then did the dashboard start showing the real usage of the pool space. In the future, when I deploy PetaSAN, where is this "emulate_tpu" parameter?

It is set during cluster creation, when deploying the first node: on the Cluster Tuning page there is an advanced show/hide tab that allows you to tune Ceph/LIO/sysctl, etc. The LIO section has all the settings/tunables, including emulate_tpu.

Thank you for your support!