Reclaim space from iSCSI disk
valio
17 Posts
November 20, 2018, 5:38 am
Hi Admin,
I have installed PetaSAN with four nodes, each with two OSDs. Then I created a single iSCSI disk and presented it to four ESXi hosts. I moved a couple of VM guests onto it, and up to this point everything worked as expected. Then I vMotioned one VM guest to another storage and noticed that the occupied space in the cluster did not properly reflect the removed VM; it stayed the same as before the move. Then I removed all VMs from the PetaSAN iSCSI disk and the space still showed the same as before. If I browse the ESXi datastore there is nothing in it, but the PetaSAN dashboard still shows the previously occupied space. My question is: how can I reclaim the space on an iSCSI disk that is mounted as a datastore on an ESXi host?
admin
2,930 Posts
November 20, 2018, 6:45 am
Even if the space is shown as taken, new data added to the datastore will re-use it. Generally a SAN works at the block layer, while the VMware VMFS file system layer sits above it; VMFS may delete a file by writing some metadata at the filesystem layer, but this is normally not understood by the SAN.
PetaSAN does support VMware VAAI/ATS, including the unmap/trim command, but the unmap command is disabled by default. This is the LIO default setting as per
http://linux-iscsi.org/wiki/VStorage_APIs_for_Array_Integration
We do have an LIO tuning section in the UI during cluster creation; beyond that you need to do this manually on all nodes:
nano /opt/petasan/config/tuning/current/lio_tunings
and set the "emulate_tpu" attribute to 1.
I am not sure that VMFS will send an unmap command in the case you described, but I believe it will. Also, knowing that in either case the space will be re-used anyway, it may not matter much if you leave it as is.
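For a scripted version of that edit, the sed below is a sketch of the same change, demonstrated on a stand-in file. The real file on each node is /opt/petasan/config/tuning/current/lio_tunings; its exact layout is assumed here to be JSON text containing an "emulate_tpu": 0 entry, so check your own file before running it there.

```shell
# Sketch of the edit described above, shown on a stand-in copy so nothing
# real is touched. On a real node, run the sed line against
# /opt/petasan/config/tuning/current/lio_tunings on every node instead.
SAMPLE=$(mktemp)
printf '{ "attributes": { "emulate_tpu": 0, "emulate_tpws": 0 } }\n' > "$SAMPLE"

# The actual edit: flip emulate_tpu from 0 to 1 in place.
sed -i 's/"emulate_tpu": *0/"emulate_tpu": 1/' "$SAMPLE"

grep '"emulate_tpu"' "$SAMPLE"    # the attribute should now read 1
```

After changing the file, the iSCSI disks need to be restarted for the target to pick up the new attribute, as noted later in this thread.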
Last edited on November 20, 2018, 6:46 am by admin · #2
valio
17 Posts
November 20, 2018, 7:30 am
I have changed this parameter on all PetaSAN nodes, restarted the iSCSI disks, and then rescanned storage on the ESXi hosts.
Trying to unmap from the ESXi CLI:
[root@NodeD:~] esxcli storage vmfs unmap -l PETASAN-1
Devices backing volume 5bb9acfd-561acb78-5784-ac1f6b946d16 do not support UNMAP
[root@NodeD:~] esxcli storage vmfs unmap -l PETASAN-2
Devices backing volume 5bb9ad2c-50d2086c-8d85-ac1f6b946d16 do not support UNMAP
Then I issued this command to check the VAAI plugin status:
esxcli storage core device vaai status get
naa.60014050000100000000000000000000
VAAI Plugin Name:
ATS Status: supported
Clone Status: supported
Zero Status: supported
Delete Status: supported
naa.60014050000900000000000000000000
VAAI Plugin Name:
ATS Status: supported
Clone Status: supported
Zero Status: supported
Delete Status: supported
But even though Delete Status shows supported, I cannot do an unmap on the ESXi datastore.
Then I copied the same VM back and forth a couple of times and I am getting OSD near full, which means the space has not been re-used.
admin
2,930 Posts
November 20, 2018, 8:16 am
The re-use of space should happen. On a large datastore it may not be apparent immediately, but VMFS should do this; you can probably test it with a small datastore, as it is quite unlikely that the space will keep growing.
I know the VAAI unmap has been tested before and works, but maybe under different operations; I will check whether it works in your scenario of vMotion migration.
valio
17 Posts
November 20, 2018, 11:11 am
OK, it took about an hour after I issued:
esxcli storage vmfs unmap -l PETASAN
esxcli storage vmfs unmap -l PETASAN-1
esxcli storage vmfs unmap -l PETASAN-2
The size of PETASAN is 4 TB, PETASAN-1 is 4 TB and PETASAN-2 is 2 TB.
Only then did the dashboard start showing the real usage of the pool space. In future, when I deploy PetaSAN, where is this "emulate_tpu" parameter?
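As a side note, those unmap runs can be looped over the datastore labels, and esxcli accepts a reclaim-unit option (-n, the number of VMFS blocks reclaimed per pass), which can matter for how long a pass takes on large datastores. The snippet below only prints the commands as a dry run (pipe it to sh on the ESXi host to execute); the value 1200 is an arbitrary illustration, and the labels are the ones from this thread, so check `esxcli storage vmfs unmap --help` on your build before relying on it.

```shell
# Dry run: print one UNMAP command per datastore label, 1200 VMFS blocks
# per pass. Remove the "echo" (or pipe to sh) to actually run them on ESXi.
for ds in PETASAN PETASAN-1 PETASAN-2; do
  echo esxcli storage vmfs unmap -l "$ds" -n 1200
done
```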
admin
2,930 Posts
November 20, 2018, 11:34 am
It is during cluster creation, in the deployment of the first node: the Cluster Tuning page has an advanced show/hide tab that lets you tune Ceph/LIO/sysctl etc. The LIO section has all the settings/tunables, including emulate_tpu.
valio
17 Posts
November 20, 2018, 12:22 pm
Thank you for your support!