too many PGs per OSD (341 > max 300)
R3LZX
50 Posts
May 10, 2020, 1:26 am
I have this warning because I chose too many PGs at creation, but I am planning on adding enough OSDs for the count to line up correctly (though not for a month or so).
What's the best course of action in the interim? I am going to be adding more, but not yet.
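(For context, the per-OSD figure is roughly the sum of pg_num × replica size across pools, divided by the number of OSDs, so adding OSDs does bring it down. With assumed numbers, not necessarily this cluster's: one replicated pool of 1024 PGs at size 3 on 9 OSDs gives 1024 × 3 / 9 ≈ 341 PGs per OSD; growing to 12 OSDs would drop that to about 256, back under the default 300 limit. The PGS column of ceph osd df shows the actual per-OSD count.)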
admin
2,930 Posts
May 10, 2020, 2:35 am
From the Ceph Configuration web page, increase the value of mon_max_pg_per_osd from 300 to 350.
Note this is a temporary setting so you do not see the warning; it is not a fix.
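(For reference, a sketch of the equivalent change from the Ceph CLI, assuming a release with the centralized config store, Mimic or later; the Configuration page mentioned above remains the supported route here:
ceph config set global mon_max_pg_per_osd 350   # raise the warning threshold
ceph config get mon mon_max_pg_per_osd          # confirm the new value
ceph health detail                              # the too-many-PGs warning should clear
)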
R3LZX
50 Posts
May 10, 2020, 6:11 am
Thank you! Is there any risk running it this way temporarily?
admin
2,930 Posts
May 10, 2020, 3:14 pm
There is a risk: if you have a node down, its PGs will be redistributed across the remaining OSDs, so the per-OSD count will rise further; how much depends on node count and replication count.
Too many PGs puts load on the OSDs; depending on hardware, it may be too much load. Before production, test that your hardware can handle such failure cases, with recovery running alongside the maximum expected client I/O load.
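(Rough numbers, using assumed figures rather than this cluster's: on a 4-node cluster averaging 341 PGs per OSD, with replication 3 so recovery can still re-replicate after a failure, losing one node spreads its PG copies over the remaining three nodes' OSDs, pushing the average to roughly 341 × 4 / 3 ≈ 455 PGs per OSD while recovery runs. The fewer the nodes, the bigger that jump.)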