
PetaSAN in our 2 datacenters for production


Yes, this seems to give the info I need, thanks!

My output is:

ceph df detail

RAW STORAGE:
CLASS     SIZE       AVAIL      USED       RAW USED     %RAW USED
hdd       42 TiB     26 TiB     16 TiB     16 TiB           38.43
TOTAL     42 TiB     26 TiB     16 TiB     16 TiB           38.43

POOLS:
POOL     ID     STORED      OBJECTS     USED       %USED     MAX AVAIL     QUOTA OBJECTS     QUOTA BYTES     DIRTY     USED COMPR     UNDER COMPR
rbd       1     8.0 TiB       2.11M     16 TiB     44.10        10 TiB               N/A             N/A     2.11M            0 B             0 B

 

At the OSD level:

root@NODE01_AM5-2:~# for osd in `seq 0 10`; do echo osd.$osd; sudo ceph daemon osd.$osd perf dump | grep 'bluestore_compressed'; done

osd.0
admin_socket: exception getting command descriptions: [Errno 2] No such file or directory
osd.1
admin_socket: exception getting command descriptions: [Errno 2] No such file or directory
osd.2
admin_socket: exception getting command descriptions: [Errno 2] No such file or directory
osd.3
admin_socket: exception getting command descriptions: [Errno 2] No such file or directory
osd.4
admin_socket: exception getting command descriptions: [Errno 2] No such file or directory
osd.5
admin_socket: exception getting command descriptions: [Errno 2] No such file or directory
osd.6
admin_socket: exception getting command descriptions: [Errno 2] No such file or directory
osd.7
admin_socket: exception getting command descriptions: [Errno 2] No such file or directory
osd.8
    "bluestore_compressed": 0,
    "bluestore_compressed_allocated": 0,
    "bluestore_compressed_original": 0,
osd.9
    "bluestore_compressed": 0,
    "bluestore_compressed_allocated": 0,
    "bluestore_compressed_original": 0,
osd.10
    "bluestore_compressed": 0,
    "bluestore_compressed_allocated": 0,
    "bluestore_compressed_original": 0,

 

So can I conclude that NO compression took place? If so, what could be the reason?

 

Can you double-check that the pool has compression enabled:

ceph osd pool get POOL_NAME compression_mode
ceph osd pool get POOL_NAME compression_algorithm

Then do a small write test writing zeros:

dd if=/dev/zero of=data.bin bs=100M count=1

rados put object1 data.bin  --pool=POOL_NAME

Then check again with ceph df detail.
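
If either of those comes back unset, a hedged example of forcing compression from the CLI (POOL_NAME is a placeholder; normally the PetaSAN UI manages these settings for you):

# hedged example: enable compression on a pool from the CLI
ceph osd pool set POOL_NAME compression_mode force
ceph osd pool set POOL_NAME compression_algorithm zstd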

 

The results:

 

root@NODE01_AM5-2:~# ceph osd pool get rbd compression_mode
compression_mode: force
root@NODE01_AM5-2:~# ceph osd pool get rbd compression_algorithm
compression_algorithm: zstd
root@NODE01_AM5-2:~# dd if=/dev/zero of=data.bin bs=100M count=1
1+0 records in
1+0 records out
104857600 bytes (105 MB, 100 MiB) copied, 0.22728 s, 461 MB/s
root@NODE01_AM5-2:~# rados put object1 data.bin --pool=rbd
root@NODE01_AM5-2:~# ceph df detail

RAW STORAGE:
CLASS     SIZE       AVAIL      USED       RAW USED     %RAW USED
hdd       42 TiB     26 TiB     16 TiB     16 TiB           39.06
TOTAL     42 TiB     26 TiB     16 TiB     16 TiB           39.06

POOLS:
POOL     ID     STORED      OBJECTS     USED       %USED     MAX AVAIL     QUOTA OBJECTS     QUOTA BYTES     DIRTY     USED COMPR     UNDER COMPR
rbd       1     8.2 TiB       2.14M     16 TiB     44.89        10 TiB               N/A             N/A     2.14M            0 B             0 B

Hard to say, it is working here... Can you change the algorithm to snappy from the UI under the pool settings and try a similar test?

Also, have you run any CLI commands yourself relating to compression, or made any changes to the conf file?
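
(A quick, hedged way to spot any compression settings that might have been applied outside the UI, assuming a Ceph release that has the central config database:)

# hedged check for compression overrides outside the UI
ceph config dump | grep -i compress
grep -i compress /etc/ceph/ceph.conf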

Ok, I will give it a try.

Will it have an impact on the running VMs? Is it dangerous to change (any) compression settings now that we have PetaSAN in production?

Is snappy better or worse than zstd, compression- and performance-wise?

 

I haven't run any CLI commands relating to compression. Compression on the second storage setup we're testing does work, too (zstd)!

There is no issue with turning this on and off during production. I suggested it just in case there were some incorrect manual settings that the UI would overwrite, and just in case there was something wrong with the zstd library or plugin. zstd gives better compression than snappy, with a slightly higher performance impact.
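
(If you want to compare the two algorithms yourself, the savings can be read from the same counters behind the USED COMPR / UNDER COMPR columns; a hedged sketch, where the JSON field names are an assumption that may differ between Ceph releases:)

# hedged sketch: per-pool compression counters from the json form of ceph df
# (field names such as "compress_under_bytes" are an assumption for this release)
ceph df detail -f json-pretty | grep -E 'compress_(bytes_used|under_bytes)'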

Good to know it works on your other setup.

I changed to snappy and back to zstd, and now I can see the values changing (USED COMPR and UNDER COMPR), so it seems to work now! Thanks for your help!

To have the existing VMs/data compressed as well, I guess they have to be removed and placed back on the storage again, right?

 

 

Yes, turning compression on/off only affects newly written data; if you need to compress existing data, it has to be re-written.
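
(As one purely hypothetical sketch of re-writing at the RBD level: copy the image and swap the names, which pushes all the data through the compression path again. The image name below is an example only, and the image must not be exported over iSCSI or attached to a running VM while you do this. Within ESXi, a Storage vMotion to another datastore and back achieves a similar re-write at the VM level.)

# hypothetical sketch: re-write an image by copying it (names are examples only)
rbd cp rbd/vm-disk-example rbd/vm-disk-example.new
rbd rename rbd/vm-disk-example rbd/vm-disk-example.old
rbd rename rbd/vm-disk-example.new rbd/vm-disk-example
# remove the old image only after verifying the copy
rbd rm rbd/vm-disk-example.old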

Hi Admin,

While taking VMs off the storage I realized that NO disk space is 'reclaimed'. The storage shows the same amount of used space after removing (100 GBs of) data as it did before, both in the web interface and when using the command "ceph df detail".

Is there an (easy) way to reclaim this space?

 

Thanks in advance!

Can you check "Delete Status" via

esxcli storage core device vaai status get

It should list "Supported", then invoke unmap via

esxcli storage vmfs unmap -l XXX

You can also use async unmap

esxcli storage vmfs reclaim config set --volume-label XXX --reclaim-priority low
esxcli storage vmfs reclaim config get --volume-label XXX

Note that even if you do not reclaim the space, VMFS will re-use it for its future allocations.
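
(To confirm from the Ceph side that the unmap actually freed space, a hedged sketch; IMAGE_NAME is a placeholder for the PetaSAN iSCSI disk image, and freed space can take a little while to show up:)

# hedged sketch: compare pool and per-image usage before and after the unmap
ceph df detail
rbd du rbd/IMAGE_NAME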

 
