set emulate_tpu=1 to enable DISCARD/TRIM
bearish
1 Post
July 1, 2017, 11:23 am
What are the chances of being able to set emulate_tpu=1, either through a tickbox in the GUI on a per-iSCSI-disk basis, or by default for all disks?
It means that deleted space can be reclaimed as free space in the pool. I've tested it on a LUN formatted with ext4, and it seems to work.
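For reference, one way to check the reclaim behaviour end to end is a manual TRIM followed by a look at the backing image; a minimal sketch, assuming the LUN is mounted at /mnt/lun and backed by an RBD image image-00001 in pool rbd (all three names are hypothetical):
# on a Ceph node: note the image's used size
rbd du rbd/image-00001
# on the iSCSI client: delete data, then trim the ext4 filesystem
rm /mnt/lun/big-file
fstrim -v /mnt/lun
# on the Ceph node again: the used size should have dropped
rbd du rbd/image-00001
Mounting the filesystem with -o discard achieves the same continuously, at some performance cost.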
admin
2,921 Posts
July 1, 2017, 2:21 pm
You will be able to do this in version 1.4 (http://www.petasan.org/next-release/). It will include a feature called Cluster Tuning Templates, which provides tunings for Ceph/LIO/sysctl/udev configurations for various setups. There will be a couple of built-in templates, such as entry-level, high-end, and all-flash hardware, but an advanced user can create his own in the advanced settings tab. The LIO configuration is a JSON file with all the parameters/attributes LIO understands, including the emulate_tpu parameter; it will apply to all iSCSI disks created.
The custom kernel used in PetaSAN (upstreamed from SUSE Enterprise Storage) does provide support for the VMware VAAI operations compare-and-write, write-same, xcopy, and unmap (the last of which is controlled by the emulate_tpu flag) when talking to a Ceph backend.
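As an aside, if the client is ESXi you can confirm which VAAI primitives the host actually sees for a device; Delete Status is the unmap/TRIM primitive. A sketch, with naa.XXXX standing in for your device identifier:
esxcli storage core device vaai status get -d naa.XXXX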
Currently you can make this somewhat work via a command like:
targetcli /backstores/rbd/image-XXXX set attribute emulate_tpu=1
where XXXX is the disk id in the PetaSAN UI. You would need to do this on all nodes serving the disk, after the disk is started but before you connect your client to the different paths. However, this will break if a node fails and the system automatically transfers the path to another node.
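Until 1.4 lands, the per-node step can be scripted; a minimal sketch, assuming passwordless SSH between nodes and a hypothetical disk id 00001:
# set and read back the attribute on every node serving the disk
for node in node1 node2 node3; do
  ssh $node "targetcli /backstores/rbd/image-00001 set attribute emulate_tpu=1"
  ssh $node "targetcli /backstores/rbd/image-00001 get attribute emulate_tpu"
done
The caveat above still applies: after a path failover, the attribute is lost on the takeover node.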
Last edited on July 1, 2017, 2:27 pm · #2