low speed with thin disks
Aliaksei Nazarenka
9 Posts
June 26, 2017, 8:49 am
I noticed one interesting thing: when using a thin or thick lazy zeroed disk in VMware, there is a significant drop in performance, but with a thick eager zeroed disk the performance is quite normal. I think the problem is in LIO.
admin
2,930 Posts
June 26, 2017, 1:37 pm
Interesting observation. We have not noticed speed issues ourselves; we use the default settings, which I believe is thick lazy zeroed. But we will try this in the next couple of days.
Note that Ceph block devices are inherently thin provisioned, consuming only the storage that was actually written, so it does not make sense to use thin provisioning at the VMware level.
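If you want to check this yourself, the Python rbd bindings can report the provisioned size of an image versus the extents that are actually allocated. This is a minimal sketch, assuming python-rados/python-rbd are installed on a cluster node; the pool name "rbd" and image name "vmware-lun01" are placeholders, not names from this thread:
```python
# Sketch: compare provisioned vs. actually allocated space of an RBD image.
# Pool and image names below are placeholders (assumptions).
import rados
import rbd

cluster = rados.Rados(conffile='/etc/ceph/ceph.conf')
cluster.connect()
try:
    ioctx = cluster.open_ioctx('rbd')                  # pool name (assumption)
    with rbd.Image(ioctx, 'vmware-lun01') as image:    # image name (assumption)
        provisioned = image.size()                     # full virtual size in bytes

        used = 0
        def count_extents(offset, length, exists):
            # diff_iterate with from_snapshot=None walks allocated extents
            global used
            if exists:
                used += length

        image.diff_iterate(0, provisioned, None, count_extents)
        print("provisioned: %d MiB" % (provisioned // 2**20))
        print("allocated:   %d MiB" % (used // 2**20))
    ioctx.close()
finally:
    cluster.shutdown()
```
The rbd CLI gives the same picture with "rbd du <pool>/<image>": for a thin or lazy zeroed VMware disk the allocated figure stays well below the provisioned size until the guest actually writes the space.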
Aliaksei Nazarenka
9 Posts
June 28, 2017, 8:25 am
The problem seems to be in VAAI: LIO nominally supports it, but in practice it acts as a brake.
admin
2,930 Posts
June 29, 2017, 7:27 pm
We did some tests comparing thick lazy zeroed and thick eager zeroed disks, and things seem to work as expected. With 4M block writes there was no difference, and with 4k sequential writes there was also no difference, but with 4k random writes there was a significant difference at first, which relatively quickly vanishes as the disk fills up.
- Thick eager zeroed zeroes the whole disk when you create/add it, so it takes its time during that step.
- Thick lazy zeroed zeroes on demand at first block access. For a 4k write, VMware first issues a zero write request, which in our observation is at least 1 MB in size. With random 4k writes the IOPS are therefore dominated by these large zero writes; sequential writes are not affected as much, since many write ops are executed per large zero op. A rough illustration of this effect is sketched below.
You are also correct that the zero writes are handled by the VAAI implementation, which is customized to use a Ceph OSD operation (writesame), but it is working as expected.
Still, it is a good observation 🙂 and it may lead more people to prefer eager zeroed: they initially get better random small block IOPS at the expense of allocating all the space from the beginning.
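As a back-of-envelope illustration of why lazy zeroing hurts random 4k writes only until the disk has been touched, here is a small model. The latency figures are illustrative assumptions, not measurements from this thread:
```python
# Back-of-envelope model of the lazy-zero penalty on random 4k writes.
# Latency figures are illustrative assumptions, not measurements.

WRITE_4K_LAT  = 0.001   # assumed latency of a plain 4k write, seconds
ZERO_FILL_LAT = 0.010   # assumed latency of the >=1 MB zero-fill
                        # (VAAI zero write mapped to Ceph writesame), seconds

def effective_iops(touched_fraction):
    """Average 4k random-write IOPS when `touched_fraction` of the disk
    has already been written (0.0 = fresh lazy zeroed disk,
    1.0 = fully zeroed, i.e. behaves like eager zeroed)."""
    # A write to an untouched block pays the zero-fill cost before the 4k write.
    avg_latency = ((1 - touched_fraction) * (ZERO_FILL_LAT + WRITE_4K_LAT)
                   + touched_fraction * WRITE_4K_LAT)
    return 1.0 / avg_latency

for f in (0.0, 0.5, 0.9, 1.0):
    print("disk %3d%% touched -> ~%5.0f IOPS" % (f * 100, effective_iops(f)))
```
With these assumed numbers the effective random-write IOPS starts at roughly a tenth of the steady-state value and converges to the eager zeroed figure as the disk fills up, which matches the behaviour described above.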
Last edited on June 29, 2017, 8:18 pm · #4