
BlueFS_Spillover

Our cluster shows the warning "1 OSD(s) experiencing BlueFS spillover". I took the OSD id from the cluster; ceph health shows "osd.18 spilled over 66 MiB metadata from 'db' device (31 GiB used of 60 GiB) to slow device".

How can we fix this issue?

Look at the following script to migrate the db to a larger partition:
/opt/petasan/scripts/util/migrate_db.sh
We recommend testing the script in a test setup first.
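
For background, upstream Ceph migrates a db volume with ceph-bluestore-tool bluefs-bdev-migrate; the PetaSAN script is the supported way and also handles the surrounding cleanup, but the core manual steps look roughly like this sketch (OSD id 18 from this thread, target partition is a placeholder):

# sketch only - stop the OSD, migrate BlueFS db to the new partition, restart
systemctl stop ceph-osd@18
ceph-bluestore-tool bluefs-bdev-migrate \
    --path /var/lib/ceph/osd/ceph-18 \
    --devs-source /var/lib/ceph/osd/ceph-18/block.db \
    --dev-target /dev/NEW_DB_PARTITION   # placeholder device name
systemctl start ceph-osd@18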

For future OSD creation:
Increase bluestore_block_db_size to 300 GB
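
If you manage this via ceph.conf, the option takes a size in bytes under the [osd] section; a minimal sketch (it only affects OSDs created after the change):

[osd]
bluestore_block_db_size = 322122547200   # 300 GiB in bytes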

Please find the following details:

In our PetaSAN cluster, the spillover issue affects osd.18.

##ceph health detail

[WRN] BLUEFS_SPILLOVER: 1 OSD(s) experiencing BlueFS spillover
osd.18 spilled over 329 MiB metadata from 'db' device (31 GiB used of 60 GiB) to slow device

##ceph daemon osd.18 perf dump | grep bluefs

"bluefs": {
"bluefs_bytes": 189912,
"bluefs_items": 4174,
"bluefs_file_reader_bytes": 20461568,
"bluefs_file_reader_items": 1002,
"bluefs_file_writer_bytes": 672,
"bluefs_file_writer_items": 3,

 

##lsblk

sda 8:0 0 447.1G 0 disk
├─sda1 8:1 0 50G 0 part
│ ├─ps--6d6ddb00--f307--4fa6--9e85--f2b777cec22d--wc--osd.9-cache_cvol 254:15 0 50G 0 lvm
│ └─ps--6d6ddb00--f307--4fa6--9e85--f2b777cec22d--wc--osd.9-main 254:17 0 14.6T 0 lvm
├─sda2 8:2 0 50G 0 part
│ ├─ps--6d6ddb00--f307--4fa6--9e85--f2b777cec22d--wc--osd.10-cache_cvol 254:0 0 50G 0 lvm
│ └─ps--6d6ddb00--f307--4fa6--9e85--f2b777cec22d--wc--osd.10-main 254:2 0 14.6T 0 lvm
├─sda3 8:3 0 50G 0 part
│ ├─ps--6d6ddb00--f307--4fa6--9e85--f2b777cec22d--wc--osd.11-ca

##df -Th

Filesystem  Type      Size  Used Avail Use% Mounted on
udev        devtmpfs   63G     0   63G   0% /dev
tmpfs       tmpfs      13G  2.9M   13G   1% /run
/dev/sdc3   ext4       15G   11G  4.7G  69% /
tmpfs       tmpfs      63G   84K   63G   1% /dev/shm
tmpfs       tmpfs     5.0M     0  5.0M   0% /run/lock
tmpfs       tmpfs      63G     0   63G   0% /sys/fs/cgroup
/dev/sdc4   ext4       30G  196M   30G   1% /var/lib/ceph
/dev/sdc5   ext4      1.6T  284M  1.6T   1% /opt/petasan/config
/dev/sdc2   vfat      127M  260K  126M   1% /boot/efi
tmpfs       tmpfs      63G   28K   63G   1% /var/lib/ceph/osd/ceph-11
tmpfs       tmpfs      63G   28K   63G   1% /var/lib/ceph/osd/ceph-10
tmpfs       tmpfs      63G   28K   63G   1% /var/lib/ceph/osd/ceph-19
tmpfs       tmpfs      63G   28K   63G   1% /var/lib/ceph/osd/ceph-20
tmpfs       tmpfs      63G   28K   63G   1% /var/lib/ceph/osd/ceph-9
tmpfs       tmpfs      63G   28K   63G   1% /var/lib/ceph/osd/ceph

 

##ceph-volume lvm list

====== osd.18 ======

[block] /dev/ps-6d6ddb00-f307-4fa6-9e85-f2b777cec22d-wc-osd.18/main

block device /dev/ps-6d6ddb00-f307-4fa6-9e85-f2b777cec22d-wc-osd.18/main
block uuid FQuGMh-AoO8-QsRn-i73N-gd0c-UdSM-E5YkXS
cephx lockbox secret
cluster fsid 6d6ddb00-f307-4fa6-9e85-f2b777cec22d
cluster name ceph
crush device class None
db device /dev/sdb2
db uuid 8fbfc14b-dc64-48c3-9713-924ca28572a9
encrypted 0
osd fsid d1993e38-1176-489f-8143-38ccb3d228aa
osd id 18
osdspec affinity
type block
vdo 0

[db] /dev/sdb2
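
To confirm the current size of that db partition, standard block-device tooling works, for example:

blockdev --getsize64 /dev/sdb2   # size in bytes
lsblk -o NAME,SIZE /dev/sdb2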

##lvdisplay

--- Logical volume ---
LV Path /dev/ps-6d6ddb00-f307-4fa6-9e85-f2b777cec22d-wc-osd.18/main
LV Name main
VG Name ps-6d6ddb00-f307-4fa6-9e85-f2b777cec22d-wc-osd.18
LV UUID FQuGMh-AoO8-QsRn-i73N-gd0c-UdSM-E5YkXS
LV Write Access read/write
LV Creation host, time petasan-node1, 2023-06-14 16:53:57 +0530
LV Status available
# open 24
LV Size 14.55 TiB
Current LE 3814911
Segments 1
Allocation inherit
Read ahead sectors auto
- currently set to 4096
Block device 254:8

##vgdisplay

--- Volume group ---
VG Name ps-6d6ddb00-f307-4fa6-9e85-f2b777cec22d-wc-osd.18
System ID
Format lvm2
Metadata Areas 2
Metadata Sequence No 5
VG Access read/write
VG Status resizable
MAX LV 0
Cur LV 1
Open LV 1
Max PV 0
Cur PV 2
Act PV 2
VG Size 14.60 TiB
PE Size 4.00 MiB
Total PE 3827710
Alloc PE / Size 3827710 / 14.60 TiB
Free PE / Size 0 / 0
VG UUID ROrhN2-d6Wu-4NGE-dSiD-Jp9s-EcDz-dJ9oBp
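
Note the "Free PE / Size 0 / 0" line above: with no free extents, the LV cannot be extended within this volume group. A quick per-VG check of free space:

vgs -o vg_name,vg_size,vg_free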

##fdisk /dev/sda

Unpartitioned space /dev/sda: 47.13 GiB, 50606185984 bytes, 98840207 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes

Is there any possibility to extend the osd.18 LVM size from the given details? Or is there any option for reducing the metadata size on the affected OSD?

You need to provide a 300 GB partition; the ~47 GiB of unpartitioned space shown on sda is not enough.
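
For illustration only, on a disk that actually has 300 GB of free space, a new partition could be carved with something like sgdisk and then handed to the migrate script mentioned earlier; sdX is a placeholder:

sgdisk -n 0:0:+300G /dev/sdX   # next free partition number, 300 GiB, in the largest free block

Then run /opt/petasan/scripts/util/migrate_db.sh against it as described above.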

Metadata could be reduced if you reduce/delete existing data.
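
Since the db device itself still has headroom (31 GiB used of 60 GiB), a manual RocksDB compaction on recent Ceph releases can sometimes move spilled-over data back to the fast device, though it may not clear the warning permanently:

ceph tell osd.18 compact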