PetaSAN in our 2 datacenters for production
Syscon
23 Posts
January 10, 2020, 9:43 am
Yes, this seems to give the info I need, thanks!
My output is:
ceph df detail
RAW STORAGE:
CLASS SIZE AVAIL USED RAW USED %RAW USED
hdd 42 TiB 26 TiB 16 TiB 16 TiB 38.43
TOTAL 42 TiB 26 TiB 16 TiB 16 TiB 38.43
POOLS:
POOL ID STORED OBJECTS USED %USED MAX AVAIL QUOTA OBJECTS QUOTA BYTES DIRTY USED COMPR UNDER COMPR
rbd 1 8.0 TiB 2.11M 16 TiB 44.10 10 TiB N/A N/A 2.11M 0 B 0 B
At the OSD level:
root@NODE01_AM5-2:~# for osd in `seq 0 10`; do echo osd.$osd; sudo ceph daemon osd.$osd perf dump | grep 'bluestore_compressed'; done
osd.0
admin_socket: exception getting command descriptions: [Errno 2] No such file or directory
osd.1
admin_socket: exception getting command descriptions: [Errno 2] No such file or directory
osd.2
admin_socket: exception getting command descriptions: [Errno 2] No such file or directory
osd.3
admin_socket: exception getting command descriptions: [Errno 2] No such file or directory
osd.4
admin_socket: exception getting command descriptions: [Errno 2] No such file or directory
osd.5
admin_socket: exception getting command descriptions: [Errno 2] No such file or directory
osd.6
admin_socket: exception getting command descriptions: [Errno 2] No such file or directory
osd.7
admin_socket: exception getting command descriptions: [Errno 2] No such file or directory
osd.8
"bluestore_compressed": 0,
"bluestore_compressed_allocated": 0,
"bluestore_compressed_original": 0,
osd.9
"bluestore_compressed": 0,
"bluestore_compressed_allocated": 0,
"bluestore_compressed_original": 0,
osd.10
"bluestore_compressed": 0,
"bluestore_compressed_allocated": 0,
"bluestore_compressed_original": 0,
So can I conclude NO compression took place? If so, what could be the reason?
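A note on the admin_socket errors above: ceph daemon osd.N talks to the local admin socket, so the errors for osd.0 through osd.7 most likely just mean those OSDs run on other nodes, and only osd.8-osd.10 are local to NODE01_AM5-2. A minimal sketch that loops over only the sockets actually present on the node (assuming the default socket path /var/run/ceph/ and naming, which are not confirmed for this cluster):
# Query compression counters only for OSDs running on this node.
# Assumes the default admin socket files /var/run/ceph/ceph-osd.<id>.asok.
for sock in /var/run/ceph/ceph-osd.*.asok; do
    id=$(basename "$sock" .asok | sed 's/ceph-osd\.//')
    echo "osd.$id"
    ceph daemon "osd.$id" perf dump | grep bluestore_compressed
done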
admin
2,930 Posts
January 10, 2020, 10:08 am
Can you double check the pool has compression enabled:
ceph osd pool get POOL_NAME compression_mode
ceph osd pool get POOL_NAME compression_algorithm
Then do a small write test writing zeros:
dd if=/dev/zero of=data.bin bs=100M count=1
rados put object1 data.bin --pool=POOL_NAME
then check with ceph df detail
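If both look correct, it can also help to dump the rest of the pool's compression-related options in one pass. A rough sketch (POOL_NAME is a placeholder; options that were never set explicitly will simply report as not set on the pool, which is harmless):
# Dump all compression-related pool options at once.
for opt in compression_mode compression_algorithm \
           compression_required_ratio compression_min_blob_size \
           compression_max_blob_size; do
    ceph osd pool get POOL_NAME "$opt"
done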
Syscon
23 Posts
January 10, 2020, 10:30 am
The results:
root@NODE01_AM5-2:~# ceph osd pool get rbd compression_mode
compression_mode: force
root@NODE01_AM5-2:~# ceph osd pool get rbd compression_algorithm
compression_algorithm: zstd
root@NODE01_AM5-2:~# dd if=/dev/zero of=data.bin bs=100M count=1
1+0 records in
1+0 records out
104857600 bytes (105 MB, 100 MiB) copied, 0.22728 s, 461 MB/s
root@NODE01_AM5-2:~# rados put object1 data.bin --pool=rbd
root@NODE01_AM5-2:~# ceph df detail
RAW STORAGE:
CLASS SIZE AVAIL USED RAW USED %RAW USED
hdd 42 TiB 26 TiB 16 TiB 16 TiB 39.06
TOTAL 42 TiB 26 TiB 16 TiB 16 TiB 39.06
POOLS:
POOL ID STORED OBJECTS USED %USED MAX AVAIL QUOTA OBJECTS QUOTA BYTES DIRTY USED COMPR UNDER COMPR
rbd 1 8.2 TiB 2.14M 16 TiB 44.89 10 TiB N/A N/A 2.14M 0 B 0 B
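For scripting this check, the same compression counters can be pulled out of the JSON form of the command. A sketch (the compress_bytes_used / compress_under_bytes field names are how Nautilus-era releases expose them and may differ in other Ceph versions):
# Extract only the per-pool compression counters from the JSON output.
ceph df detail --format json-pretty | grep -E '"compress_(bytes_used|under_bytes)"'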
admin
2,930 Posts
January 10, 2020, 11:04 am
Hard to say, it is working here... Can you change the algorithm to snappy from the UI, under the pool settings, and try a similar test?
Also, have you done any CLI commands yourself relating to compression, or changes to the conf file?
Last edited on January 10, 2020, 11:10 am by admin · #24
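For reference, roughly the same toggle can be done from the command line; a sketch against the pool from this thread (this is the plain Ceph CLI, not the PetaSAN UI path, so the UI may apply additional settings of its own on top of these):
# Switch the pool's compressor to snappy, re-run the dd/rados test,
# then switch back to zstd. Plain Ceph CLI; the PetaSAN UI may set more.
ceph osd pool set rbd compression_algorithm snappy
ceph osd pool set rbd compression_mode force
# ...repeat the dd / rados put / ceph df detail test here...
ceph osd pool set rbd compression_algorithm zstd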
Syscon
23 Posts
January 10, 2020, 11:56 am
Ok, I will give it a try.
Will it have an impact on the running VMs? Is it dangerous to change (any) compression settings now that we have PetaSAN in production?
Is snappy better/worse than zstd (compression- and performance-wise)?
I haven't done any CLI commands relating to compression. Compression on our second storage, which we're testing, does work too! (zstd)
admin
2,930 Posts
January 10, 2020, 12:15 pm
There is no issue with turning this off and on during production. I suggested it just in case there were some incorrect manual settings that the UI will overwrite, and in case there was something wrong with the zstd library or plugin. zstd gives better compression than snappy, with a slightly higher performance impact.
Good to know it works on your other setup.
Syscon
23 Posts
January 10, 2020, 12:48 pm
Changed to snappy and back (zstd); now I can see the values changing (USED COMPR and UNDER COMPR), so it seems to work now! Thanks for your help!
To have the existing VMs/data also compressed, I guess they have to be removed and placed back on the storage again, right?
admin
2,930 Posts
January 10, 2020, 1:06 pm
Yes, turning compression on/off affects newly written data; if you need to compress existing data, it has to be re-written.
Last edited on January 10, 2020, 1:07 pm by admin · #28
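One possible way to force that rewrite, sketched below, is to copy each RBD image so its data passes through the compression path again. The image and pool names are hypothetical, the image must not be in use while copying, and disks managed by PetaSAN's iSCSI layer may need to be handled through its own tooling instead; at the VMware level, a Storage vMotion to another datastore and back achieves the same rewrite.
# Rewrite an image by copying it; the copy is written with the current
# compression settings. Hypothetical names; do NOT do this on an image in use.
rbd cp rbd/vm-disk-001 rbd/vm-disk-001.rewritten
rbd rm rbd/vm-disk-001
rbd rename rbd/vm-disk-001.rewritten rbd/vm-disk-001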
Syscon
23 Posts
January 13, 2020, 9:51 am
Hi Admin,
While taking VMs off the storage I realized that NO disk space is 'reclaimed'. The storage shows the same amount of used space after removing (100 GB of) data as before, both in the web interface and when using the command "ceph df detail".
Is there an (easy) way to reclaim this space?
Thanks in advance!
admin
2,930 Posts
January 13, 2020, 4:14 pm
Can you check "Delete Status" via
esxcli storage core device vaai status get
It should list "Supported"; then invoke unmap via
esxcli storage vmfs unmap -l
You can also use async unmap:
esxcli storage vmfs reclaim config set --volume-label XXX --reclaim-priority low
esxcli storage vmfs reclaim config get --volume-label XXX
Note that even if you do not reclaim space, VMFS will re-use it for its future allocations.
Last edited on January 13, 2020, 4:55 pm by admin · #30
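A sketch of how these steps fit together against a single datastore (the label DATASTORE01 is a placeholder; run the unmap during a quiet period, and compare ceph df detail before and after to confirm the space comes back on the Ceph side):
# Placeholder datastore label "DATASTORE01".
# 1) Confirm the device supports UNMAP (look for "Delete Status: supported"):
esxcli storage core device vaai status get
# 2) Manually reclaim free space on the datastore:
esxcli storage vmfs unmap -l DATASTORE01
# 3) On a PetaSAN node, check that pool usage drops:
ceph df detail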