PG count with multiple pools
brent
11 Posts
May 5, 2020, 4:14 pm
Hi, when completing the installation we found that some pools for CephFS had already been created and cannot be removed through the web UI. This raises the question: do we need to account for those pools when calculating how many placement groups to select? What calculation should we use, given that we are using CRUSH to split SSDs and HDDs into separate pools as shown below?
admin
2,930 Posts
May 5, 2020, 6:04 pm
You can delete them. You get an error message when you try to delete those pools because a filesystem is using them, so you should delete the filesystem first. You can delete the filesystem from the Filesystems page.
For your other question: you should aim for each OSD to carry a total of about 100 PGs, which is the ideal value. An OSD can be part of several pools, and each pool places more than one replica / EC chunk per PG, but the total across all pools per OSD should ideally be around 100.
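As a rough illustration of the arithmetic only (the OSD counts, pool split and replica sizes below are made-up assumptions, not read from your cluster), the usual rule of thumb is pg_num per pool ≈ (target PGs per OSD × number of OSDs the pool can use × that pool's share of the data) / replica size, then picking a nearby power of two:

# Rough PG-count sketch, assuming a hypothetical cluster:
# 12 HDD OSDs and 6 SSD OSDs split into separate pools by CRUSH device class,
# 3x replication, and a target of ~100 PGs per OSD.

def suggested_pg_num(target_pgs_per_osd, osd_count, data_share, replica_size):
    """Return a pg_num suggestion rounded down to the nearest power of two.

    target_pgs_per_osd: desired PG copies per OSD (~100 per the advice above)
    osd_count:          OSDs that the pool's CRUSH rule can place data on
    data_share:         fraction of those OSDs' data this pool will hold (0..1)
    replica_size:       pool size (replicas), or k+m for erasure-coded pools
    """
    raw = target_pgs_per_osd * osd_count * data_share / replica_size
    # Round down to a power of two, since pg_num should be a power of two.
    power = 1
    while power * 2 <= raw:
        power *= 2
    return power

# Hypothetical example: one pool per device class, each pool holding all of
# the data on its own OSDs (data_share = 1.0).
hdd_pool_pgs = suggested_pg_num(100, osd_count=12, data_share=1.0, replica_size=3)
ssd_pool_pgs = suggested_pg_num(100, osd_count=6, data_share=1.0, replica_size=3)
print(hdd_pool_pgs, ssd_pool_pgs)  # 256 and 128 with these assumptions

Rounding down keeps each OSD at or below the ~100 PG target; some calculators round up to the next power of two instead, which overshoots it slightly. If a set of OSDs serves several pools (for example CephFS data plus metadata), split the data_share between them so the per-OSD total still works out to roughly 100.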