Ceph Health: Too many PGs per OSD
admin
2,930 Posts
October 9, 2020, 7:05 pm
The commands are safe. Whether they will stop the values from coming back is not certain, but they are safe to try.
khopkins
96 Posts
October 13, 2020, 12:55 pm
OK, now the setting reads
mon_max_pg_per_osd = 400
but we still have the warning "too many PGs per OSD (357 > max 300)".
Also noticed that the number of PGs is now 2024 instead of the usual 1024, even though
osd_pool_default_pg_num = 1024
How do we get rid of the warning?
Thanks
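A quick way to confirm where the extra PGs come from and what each OSD is carrying is sketched below; it assumes the commands are run on a node with an admin keyring, and the OSD figure in the comment is only an estimate derived from the numbers above (2024 PGs, assuming size 3 pools).
# Per-pool pg_num values; the total across pools is what grew from 1024 to 2024
ceph osd pool ls detail
# PG count per OSD (PGS column) and the exact health message
ceph osd df
ceph health detail
# The warning figure is roughly (sum of pg_num x replica size) / number of OSDs,
# e.g. 2024 x 3 / 357 ~= 17 OSDs in this cluster (estimate only)
ceph osd ls | wc -l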
admin
2,930 Posts
October 13, 2020, 1:10 pm
Under what section did you add the mon_max_pg_per_osd = 400?
Where do you see that the PGs are now 2024? Did you add a new pool? Does the 2024 show in the pools view list for one pool? In the PG Status chart, does the number jump from 1024 to 2024 suddenly? When?
Last edited on October 13, 2020, 1:12 pm by admin · #13
khopkins
96 Posts
October 13, 2020, 5:35 pm
The mon_max_pg_per_osd value was added in the MGR section, and it shows up in the "all" listing as well as the "MGR" listing.
Ah, I did add an NFS pool, OK.
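Since the value was added under the MGR section, it may help to check which section it actually landed in within the cluster's configuration database and what each daemon type resolves, as in this sketch (it assumes the centralized config database is in use):
# Every stored occurrence of the option, with the section it was set under
ceph config dump | grep mon_max_pg_per_osd
# The value each daemon type would resolve
ceph config get mon mon_max_pg_per_osd
ceph config get mgr mon_max_pg_per_osd
The threshold quoted in the warning comes from whichever daemon raises the health check, so a value set under only one section may not be the one the warning reflects.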
khopkins
96 Posts
October 26, 2020, 2:56 pm
Hello,
Going back to the warning about too many PGs: I changed the value from 400 to 380 to see whether it changes in the file. It did, but the warning still shows "too many PGs per OSD (357 > max 300)", even though ceph config shows
mon_max_pg_per_osd = 380
admin
2,930 Posts
October 27, 2020, 1:44 pm
Can you check that the /etc/ceph/ceph.conf files on all nodes do not have this value set?
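A loop like the following could confirm that no node still carries the option in its local file (node1, node2 and node3 are placeholder hostnames):
for h in node1 node2 node3; do
    echo "== $h =="
    ssh $h 'grep mon_max_pg_per_osd /etc/ceph/ceph.conf || echo "not set in ceph.conf"'
done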
khopkins
96 Posts
October 27, 2020, 2:09 pm
Here's the output of the file; it doesn't look like it used to. This is what it shows on all of the nodes:
cat ceph.conf
# minimal ceph.conf for ce1e9009-8fd6-4c75-aeb6-14b280383aa3
[global]
fsid = ce1e9009-8fd6-4c75-aeb6-14b280383aa3
mon_host = [v2:10.20.4.55:3300/0,v1:10.20.4.55:6789/0] [v2:10.20.4.56:3300/0,v1:10.20.4.56:6789/0] [v2:10.20.4.57:3300/0,v1:10.20.4.57:6789/0]
auth_client_required = cephx
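That is the minimal ceph.conf newer releases generate, so with only fsid, mon_host and auth settings in the file, any mon_max_pg_per_osd value has to come from the monitors' configuration database or a runtime override. One way to see what a running monitor is actually using, assuming the monitor name matches the short hostname (typical, but not guaranteed):
# Run on a monitor node, via the daemon's admin socket
ceph daemon mon.$(hostname -s) config get mon_max_pg_per_osd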
khopkins
96 Posts
November 4, 2020, 2:55 pm
Hello, we are still having the issue of
"too many PGs per OSD (357 > max 300)"
even though the config has
"mon_max_pg_per_osd = 380"
Question: when a change is made to the config, how long does it take for the change to take effect?
Is there any way we can clean this up?
Thanks,
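Two approaches could help here, both offered only as suggestions since they depend on the Ceph release and on how the cluster is managed: a value stored with ceph config set normally reaches running daemons almost immediately (for options that can change at runtime), whereas a value that only lives in a local file is read when a daemon starts; and since Nautilus the pg_num of a pool can be reduced, which removes the warning at its source rather than raising the threshold. A rough sketch, where mypool and 512 are placeholders, and shrinking pg_num triggers data movement, so it should be done deliberately:
# Store the threshold in the cluster configuration database instead of a file
ceph config set global mon_max_pg_per_osd 400
ceph config dump | grep mon_max_pg_per_osd
ceph health detail
# Alternatively, shrink an oversized pool (supported since Nautilus; causes PG merging/backfill)
ceph osd pool set mypool pg_num 512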