
OSD(s) have broken BlueStore compression

Hi,

I recently enabled snappy compression on one of our lightly used PetaSAN clusters to see the impact on the VM side.
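
For reference, I enabled it at the pool level with commands along these lines (the resulting settings show up in the pool detail further down; pool name is rbd in our case):

# set the compression algorithm and mode on the rbd pool
ceph osd pool set rbd compression_algorithm snappy
ceph osd pool set rbd compression_mode force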

The system is a 4-node cluster with SSD disks and 43 OSDs.

Some time after enabling compression, I started seeing this warning from Ceph health: "30 OSD(s) have broken BlueStore compression".

  1. What is the impact of this warning? Does it mean only that the data is not being compressed on the OSDs, or is it something more worrying?
  2. How can I fix the issue and actually start using compression?
  3. For a mainly VM workload, which compression algorithm is preferable in terms of performance: snappy, zlib, or zstd?

ceph osd pool ls detail
pool 1 '.mgr' replicated size 3 min_size 2 crush_rule 0 object_hash rjenkins pg_num 1 pgp_num 1 autoscale_mode on last_change 8946 flags hashpspool stripe_width 0 pg_num_min 1 application mgr,mgr_devicehealth
pool 2 'rbd' replicated size 3 min_size 2 crush_rule 0 object_hash rjenkins pg_num 1024 pgp_num 1024 autoscale_mode on last_change 27950 lfor 0/8300/10469 flags hashpspool,selfmanaged_snaps stripe_width 0 compression_algorithm snappy compression_mode force target_size_ratio 0.5 application rbd
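
In case it is relevant, these are the kinds of checks I have been running to see whether compression actually takes effect (osd.0 is just an example OSD id):

# check which compression settings an individual OSD picked up
ceph tell osd.0 config get bluestore_compression_algorithm
ceph tell osd.0 config get bluestore_compression_mode

# per-pool usage including compression stats (USED COMPR / UNDER COMPR columns)
ceph df detail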

Ceph health

ceph -s
  cluster:
    id:     e1678f7f-08a7-47d7-a293-a00cf7ca3288
    health: HEALTH_WARN
            30 OSD(s) have broken BlueStore compression

  services:
    mon: 3 daemons, quorum xxx-03,xxx-02,xxx-01 (age 7w)
    mgr: xxx-03(active, since 7w), standbys: xxx-01, xxx-02
    osd: 43 osds: 43 up (since 7w), 43 in (since 7w)

  data:
    pools:   2 pools, 1025 pgs
    objects: 332.19k objects, 1.3 TiB
    usage:   3.5 TiB used, 34 TiB / 38 TiB avail
    pgs:     1025 active+clean

  io:
    client: 79 KiB/s rd, 2.8 MiB/s wr, 19 op/s rd, 100 op/s wr

Thank you