
Ceph Health: Too many PGs per OSD


The commands are safe. Whether they will fix the values coming back is not certain, but they are safe to try.
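
If it helps, the general shape of what I had in mind is below. This is a sketch, not necessarily the exact commands from earlier in the thread, and 400 is only an example value; run it from a node with the admin keyring.

# set mon_max_pg_per_osd in the cluster's central config (400 is an example value)
ceph config set global mon_max_pg_per_osd 400

# confirm what the cluster has stored, and under which section it was set
ceph config dump | grep mon_max_pg_per_osd

# check whether the health warning clears
ceph health detail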

OK, now the setting is:

mon_max_pg_per_osd  = 400

Still have the warning "too many PGs per OSD (357 > max 300)".

Also noticed the number of PGs is now "2024" instead of the usual "1024", even though

osd_pool_default_pg_num  = 1024

How do we get rid of the warning?

Thanks

 

Under what section did you add mon_max_pg_per_osd = 400?

Where do you see that the PGs are now 2024? Did you add a new pool? Does the 2024 show up in the pools view list for one pool? In the PG Status chart, does the number jump from 1024 to 2024 suddenly? When?
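
If the command line is easier than the charts, something like this should show where the extra PGs come from (output details vary between Ceph releases):

# per-pool pg_num / pgp_num, to see which pool added PGs
ceph osd pool ls detail

# total PG count for the cluster
ceph pg stat

# PGs per OSD (the PGS column), which is the number the warning is about
ceph osd df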

The mon_max_pg_per_osd setting was added in the MGR section, and it's showing up in the "all" listing as well as the "MGR" listing.

Ah, I added an NFS pool, OK.
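
(If I follow the math, that would explain the jump: the per-OSD figure is roughly the total PG count times the replica size divided by the number of OSDs. The replica size and OSD count below are invented just to illustrate.)

# rough estimate only -- size 3 and 17 OSDs are made-up numbers for illustration
#   PGs per OSD ~ (sum of pg_num over all pools) x replica_size / number_of_OSDs
#   e.g. 2024 x 3 / 17 ~ 357, the kind of figure a "357 > max 300" warning reports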

 

Hello,

Going back to the "too many PGs" warning: I changed the value from 400 to 380 to see if it changes in the file, and it did, but the health warning still shows "too many PGs per OSD (357 > max 300)", even though ceph config shows:

mon_max_pg_per_osd = 380

Can you check that the /etc/ceph/ceph.conf files on all nodes do not have this value set?
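
Something along these lines would do it on each node (plain grep, adjust for however you reach the nodes):

# on every node, check whether the local file sets the option at all
grep -i mon_max_pg_per_osd /etc/ceph/ceph.conf

# then compare with what the monitors' central config database holds
ceph config dump | grep -i mon_max_pg_per_osd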

Here's the output for the file; it doesn't look like it used to. This is the same on all of them:

cat ceph.conf
# minimal ceph.conf for ce1e9009-8fd6-4c75-aeb6-14b280383aa3
[global]
fsid = ce1e9009-8fd6-4c75-aeb6-14b280383aa3
mon_host = [v2:10.20.4.55:3300/0,v1:10.20.4.55:6789/0] [v2:10.20.4.56:3300/0,v1:10.20.4.56:6789/0] [v2:10.20.4.57:3300/0,v1:10.20.4.57:6789/0]
auth_client_required = cephx
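
(For what it's worth, the file looks like the output of ceph config generate-minimal-conf, so I'm assuming everything else now lives in the monitors' central config database rather than in ceph.conf.)

# the minimal file appears to match what this produces; the remaining settings
# should live in the config database (visible with "ceph config dump")
ceph config generate-minimal-conf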

Hello, still having the issue of

"too many PGs per OSD (357 > max 300)"

When the file has

"mon_max_pg_per_osd  =380"

Question: when a change is made to the config, how long does it take for the change to take effect?

Any way we can clean this up?
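
(In case it helps, this is roughly how I'm comparing the stored value with what a running daemon actually uses; mon.node1 is a placeholder for one of my daemon names.)

# value stored in the cluster's central config database
ceph config dump | grep -i mon_max_pg_per_osd

# value a running daemon is using right now
ceph config show mon.node1 mon_max_pg_per_osd

# same check via the admin socket, run locally on that node
ceph daemon mon.node1 config get mon_max_pg_per_osd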

Thanks,
