
Journal sizing

Hello Admin,

In the past we had an issue when OSDs were being added to the cluster, and after reviewing the logs you stated that each OSD creates a 60GB partition on the journal and the space there is limited.

Can we reduce the size to 40GB or 30GB?

Thanks

It is a performance recommendation. If you really have no space, you can change it to 40 GB.

Thanks Admin,

Can you please post the steps for us, so we avoid any issues or breaking any configs.

Thanks

Just reduce the value of bluestore_block_db_size in the cluster config on all nodes. There is no need to reboot; the value is read when you add a new OSD.
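
A minimal sketch of what that change could look like, assuming the cluster config file is at /etc/ceph/CLUSTER_NAME.conf (substitute your actual cluster name and path); the value is given in bytes:

[global]
# 40 GB = 40 x 1024^3 bytes; applies only to OSDs created after this change
bluestore_block_db_size = 42949672960

Make the same edit on every node. Existing journal partitions keep their current size.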

Hey Admin,

I really need the commands and other details to perform these tasks, since the GUI does not have options to remove and re-add the disk.

Thanks

See my previous post on how to lower the journal partition size; again, it is not recommended. It is much better to use a larger journal disk or add a new disk. The ratio of OSDs to journals is 4:1, so a 60GB journal partition only requires a 240 GB disk.
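
For reference, dropping to 40 GB as you asked works out at the same 4:1 ratio to:

4 OSDs x 40 GB = 160 GB minimum journal disk

so you would only save 80 GB per journal disk compared to the 60GB default.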

As for deleting and re-adding OSDs, this is highly discouraged given how little you stand to save. If you do need it, follow the steps below (a command sketch follows the list):

1-Make sure your pool size is 3 for replicated pools, or m=2 for EC pools
2-Make sure the cluster state is OK and all PGs are active/clean
3-Mark the OSD as out so Ceph can move data off it before replacing:
4-ceph osd out OSD_ID --cluster CLUSTER_NAME
5-Observe the PG status from the dashboard; once all PGs are active and clean, stop the OSD:
6-systemctl stop ceph-osd@OSD_ID
7-From the admin web application, delete the OSD
8-Repeat steps 2 to 7 until all OSDs attached to the journal have been deleted
9-Delete all partitions on the journal (you can use ceph-disk zap)
10-From the admin web application, re-add the journal
11-From the admin web application, re-add the OSDs
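
Here is a minimal command sketch for steps 2 to 6 and 9, with CLUSTER_NAME and OSD_ID as placeholders as above; /dev/sdX is a hypothetical journal device, so double-check which device your journal actually is before zapping:

# step 2: confirm the cluster is healthy before touching anything
ceph status --cluster CLUSTER_NAME

# steps 3-4: mark the OSD out so Ceph rebalances its data off it
ceph osd out OSD_ID --cluster CLUSTER_NAME

# step 5: check until all PGs report active+clean
ceph pg stat --cluster CLUSTER_NAME

# step 6: stop the OSD daemon (run on the node hosting the OSD)
systemctl stop ceph-osd@OSD_ID

# step 9: wipe all partitions on the journal disk (destructive!)
ceph-disk zap /dev/sdX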