1 pools have many more objects per pg than average
wluke
66 Posts
November 9, 2022, 9:35 pm
Hi there,
We're seeing the following warning: "1 pools have many more objects per pg than average"
root@gl-san-02d:~# ceph health detail
HEALTH_WARN 1 pools have many more objects per pg than average
[WRN] MANY_OBJECTS_PER_PG: 1 pools have many more objects per pg than average
pool cephfs_metadata objects per pg (260207) is more than 12.9883 times cluster average (20034)
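For what it's worth, the numbers in that message line up with the default object-skew threshold: MANY_OBJECTS_PER_PG fires when a pool's objects-per-PG exceed the cluster average by more than mon_pg_warn_max_object_skew (default 10), and 260207 / 20034 is roughly 13. If the threshold in effect needs checking, something like this should show it (assuming the option is read via the mgr on this release):
ceph config get mgr mon_pg_warn_max_object_skew   # object-skew threshold, defaults to 10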
root@gl-san-02d:~# ceph osd pool autoscale-status
POOL  SIZE  RATE  RAW CAPACITY  RATIO  BIAS  PG_NUM  AUTOSCALE   (TARGET SIZE, TARGET RATIO, EFFECTIVE RATIO and NEW PG_NUM are blank for every pool and omitted from this header so it matches the rows below)
device_health_metrics 75409k 3.0 114.6T 0.0000 1.0 1 on
cephfs_data 2674M 3.0 114.6T 0.0001 1.0 32 on
cephfs_metadata 15516M 4.0 20794G 0.0029 4.0 16 on
rbd 9535G 4.0 20794G 1.8342 1.0 256 on
cephfs_data_callrecordings 0 1.6666666269302368 114.6T 0.0000 1.0 64 on
cephfs_data_logs 87172M 1.6666666269302368 114.6T 0.0012 1.0 64 on
cephfs_data_certificatestore 6896 4.0 20794G 0.0000 1.0 64 on
cephfs_data_vault 112.9M 4.0 20794G 0.0000 1.0 64 on
cephfs_data_webfarmsharedconfig 3659k 4.0 20794G 0.0000 1.0 64 on
cephfs_data_webstore 2613G 1.6666666269302368 114.6T 0.0371 1.0 64 on
rbd_hdd 4169G 3.0 114.6T 0.1066 1.0 64 on
rbd_esxi_production_ssd01 973.0G 4.0 20794G 0.1872 1.0 64 on
test-ssd-hybrid 0 4.0 20794G 0.0000 1.0 64 on
cephfs_data_general 24793M 1.75 114.6T 0.0004 1.0 64 on
cephfs_data_webfarmsharedconfig22 0 4.0 20794G 0.0000 1.0 64 on
It does seem that this pool should have more PGs, as the warning suggests. Why is the autoscaler not adjusting this? Is there anything we can do to force the autoscaler to adjust?
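For reference, the autoscale-status output above lists no NEW PG_NUM for cephfs_metadata, i.e. the autoscaler is not proposing any change at the moment. The pool's current settings can be double-checked with standard pool queries, along these lines:
ceph osd pool get cephfs_metadata pg_num              # current PG count (16 above)
ceph osd pool get cephfs_metadata pg_autoscale_mode   # confirms the pool is managed by the autoscaler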
Thanks,
Will
admin
2,930 Posts
November 10, 2022, 4:28 pm
Try this:
ceph osd pool set cephfs_metadata target_size_ratio .5
ceph osd pool set cephfs_metadata pg_num 64
ceph osd pool set cephfs_metadata pg_num_min 64
I think that since you have a lot of pools and maybe not too many OSDs, the autoscaler is not able to optimize this on its own.
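For context: target_size_ratio gives the autoscaler an expected share of the cluster to budget PGs for, pg_num raises the PG count directly, and pg_num_min stops the autoscaler from shrinking the pool below 64 again. After running the commands, a check along these lines should show the pool converging on 64 PGs, with the warning clearing once the split and backfill finish:
ceph osd pool get cephfs_metadata pg_num   # climbs to 64 as the PG split proceeds
ceph osd pool autoscale-status             # PG_NUM for cephfs_metadata should now read 64
ceph health detail                         # MANY_OBJECTS_PER_PG clears once objects spread over the new PGs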
wluke
66 Posts
November 11, 2022, 6:33 pm
This worked perfectly, thanks!