Journal sizing.
msalem
87 Posts
December 19, 2018, 5:58 am
Hello Admin,
In the past we had an issue when OSDs were being added to the cluster, and after reviewing the logs you stated that each OSD creates a 60GB partition on the journal, and the space there is limited.
Can we reduce the size to 40GB or 30GB?
Thanks
admin
2,930 Posts
December 19, 2018, 3:26 pm
It is a performance recommendation. If you really have no space, you can change it to 40 GB.
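For context, a rough sizing sketch assuming the 4:1 OSD-to-journal ratio the admin cites later in this thread (the figures are illustrative only):
4 OSDs per journal x 60 GB per partition = 240 GB of journal capacity
4 OSDs per journal x 40 GB per partition = 160 GB of journal capacity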
msalem
87 Posts
December 19, 2018, 5:03 pm
Thanks Admin,
Can you please list the steps for us, so we avoid any issues or breaking any configs?
Thanks
admin
2,930 Posts
December 19, 2018, 9:11 pm
Just reduce the value of bluestore_block_db_size in the cluster config on all nodes. There is no need to reboot; the value is read when you add a new OSD.
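For reference, a minimal sketch of what that edit might look like, assuming the cluster config is the standard Ceph config file at /etc/ceph/CLUSTER_NAME.conf on each node (the path is an assumption; bluestore_block_db_size takes a value in bytes):

# /etc/ceph/CLUSTER_NAME.conf -- edit on every node
[global]
bluestore_block_db_size = 42949672960   ; 40 GB = 40 * 1024^3 bytes

As noted above, the new value only takes effect for OSDs added after the change.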
msalem
87 Posts
December 28, 2018, 12:04 am
Hey Admin,
I really do need the commands and other details to perform these tasks, since the GUI does not have options to remove and re-add the disk.
Thanks
admin
2,930 Posts
December 30, 2018, 10:18 am
See my previous post on how to lower the journal partition size; again, it is not recommended. It is much better to use a larger journal disk or add a new disk. The ratio of OSDs to journals is 4:1, so with 60GB partitions a journal disk only needs to be 240 GB.
As for deleting and re-adding OSDs, this is highly not recommended given what you are trying to save. If you do need it (a command sketch follows the steps):
1- Make sure your pool size is 3 for replicated or m=2 for EC
2- Make sure the cluster state is OK, active/clean
3- Mark the OSD as out so Ceph can move data off it before replacing:
4- ceph osd out OSD_ID --cluster CLUSTER_NAME
5- Observe the PG status progress from the dashboard; once all PGs are active and clean, stop the OSD
6- systemctl stop ceph-osd@OSD_ID
7- From the admin web application, delete the OSD
8- Repeat steps 2 to 7 until all OSDs attached to the journal have been deleted
9- Delete all partitions on the journal (you can use ceph-disk zap)
10- From the admin web application, re-add the journal
11- From the admin web application, re-add the OSDs
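For illustration, a rough command sketch of steps 3 to 9 for one OSD; the OSD id 5, cluster name "ceph" and journal device /dev/sdf below are made-up placeholders, and the delete / re-add steps are done from the web application as described above:

ceph osd out 5 --cluster ceph        # steps 3-4: mark the OSD out so data is moved off it
ceph status --cluster ceph           # step 5: watch until all PGs are active+clean
systemctl stop ceph-osd@5            # step 6: stop the OSD daemon
(repeat the above, and delete each OSD in the web application, for every OSD on this journal)
ceph-disk zap /dev/sdf               # step 9: wipe all partitions on the journal disk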
Last edited on December 30, 2018, 10:19 am by admin · #6