
Balancing inefficient and resizing PGs


Thanks for the answer 🙂

I've set the target_size_ratio to 1.0. Now it looks like this:

POOL                    SIZE    TARGET SIZE  RATE  RAW CAPACITY  RATIO   TARGET RATIO  EFFECTIVE RATIO  BIAS  PG_NUM  NEW PG_NUM  AUTOSCALE
Pool-700C               50946G               3.0   185.1T        0.8063  1.0000        1.0000           1.0   900                 on
device_health_metrics   142.1M               3.0   185.1T        0.0000                                 1.0   1                   on
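
(For reference, this is roughly how the ratio was set and how the table above is produced:)

ceph osd pool set Pool-700C target_size_ratio 1.0
ceph osd pool autoscale-status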

Could it be that the PGs are not automatically increased because the autoscaler does not consider it necessary?

How should I proceed if I want to adjust the OSD CRUSH weights? Do I simply increase the CRUSH weight of the OSD with the most free space slowly, or do I also have to adjust the value of the rather full OSDs at the same time?

 

Hello

Can I somehow make my pool bigger? I have a raw size of 185 TiB and 34 TiB available, but my pool shows only 3 TiB. That should be more. My pool is slowly getting full and I am not sure how to change this. Adding the last 7 TiB OSD did not change the size of the pool either.

Thanks for any reply

If you have an imbalance, some OSDs will fill up and you will not be able to write to the pool anymore. If you balance the usage, the available space will increase.
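
As a rough rule of thumb (an approximation, not the exact formula Ceph uses), the free space a replicated pool reports is driven by the fullest OSD, not by the total raw free space:

pool free space ≈ raw capacity x (full_ratio - %USE of the fullest OSD) / replica size

For example, with the fullest OSD around 90% used and the default full_ratio of 0.95, that is roughly 0.05 x 185 TiB ≈ 9 TiB of raw headroom, or about 3 TiB at 3x replication, which is in the range you are seeing. Evening out the usage therefore directly raises the pool's available space.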

But that's the thing. The balancer says there are no more optimizations, and the autoscaler does not increase the PGs. And my question about how exactly to proceed with the CRUSH weights was not answered. The CRUSH weight specifies the effective storage space for each OSD; if I increase it, that means Ceph will try to store more there than is physically possible, doesn't it?

 

ceph balancer status
{
    "active": true,
    "last_optimize_duration": "0:00:00.008760",
    "last_optimize_started": "Mon Nov 21 12:54:39 2022",
    "mode": "upmap",
    "optimize_result": "Unable to find further optimization, or pool(s) pg_num is decreasing, or distribution is already perfect",
    "plans": []
}

ceph osd df
ID CLASS WEIGHT REWEIGHT SIZE RAW USE DATA OMAP META AVAIL %USE VAR PGS STATUS
0 ssd 3.49300 1.00000 3.5 TiB 2.8 TiB 2.8 TiB 52 KiB 6.8 GiB 754 GiB 78.92 0.97 51 up
1 ssd 3.49300 1.00000 3.5 TiB 2.9 TiB 2.8 TiB 121 MiB 6.8 GiB 658 GiB 81.60 1.00 52 up
2 ssd 3.49300 1.00000 3.5 TiB 3.0 TiB 2.9 TiB 1.3 MiB 7.0 GiB 555 GiB 84.49 1.04 51 up
9 ssd 3.49300 1.00000 3.5 TiB 2.8 TiB 2.8 TiB 3.1 MiB 6.9 GiB 707 GiB 80.24 0.98 51 up
12 ssd 3.49300 1.00000 3.5 TiB 3.0 TiB 3.0 TiB 0 B 6.9 GiB 458 GiB 87.18 1.07 51 up
23 ssd 3.49300 1.00000 3.5 TiB 2.9 TiB 2.8 TiB 916 KiB 7.3 GiB 652 GiB 81.78 1.00 52 up
26 ssd 3.49300 1.00000 3.5 TiB 2.9 TiB 2.8 TiB 81 KiB 6.7 GiB 657 GiB 81.62 1.00 51 up
28 ssd 3.49199 1.00000 3.5 TiB 2.9 TiB 2.9 TiB 3.0 MiB 7.2 GiB 558 GiB 84.40 1.03 50 up
29 ssd 3.49199 1.00000 3.5 TiB 2.9 TiB 2.8 TiB 3.7 MiB 6.3 GiB 655 GiB 81.69 1.00 53 up
30 ssd 3.49199 1.00000 3.5 TiB 3.0 TiB 2.9 TiB 2.0 MiB 6.6 GiB 554 GiB 84.52 1.04 51 up
37 ssd 3.49199 1.00000 3.5 TiB 3.0 TiB 3.0 TiB 1.3 MiB 6.6 GiB 548 GiB 84.68 1.04 52 up
41 ssd 6.98599 1.00000 7.0 TiB 5.6 TiB 5.5 TiB 210 KiB 11 GiB 1.4 TiB 79.51 0.97 101 up
3 ssd 3.49300 1.00000 3.5 TiB 2.8 TiB 2.8 TiB 0 B 7.2 GiB 659 GiB 81.59 1.00 51 up
4 ssd 3.49300 1.00000 3.5 TiB 2.9 TiB 2.8 TiB 4.1 MiB 6.8 GiB 657 GiB 81.64 1.00 51 up
5 ssd 3.49300 1.00000 3.5 TiB 2.9 TiB 2.9 TiB 3.6 MiB 7.3 GiB 559 GiB 84.37 1.03 53 up
10 ssd 3.49300 1.00000 3.5 TiB 2.9 TiB 2.8 TiB 3.2 MiB 6.8 GiB 652 GiB 81.78 1.00 51 up
13 ssd 3.49300 1.00000 3.5 TiB 2.8 TiB 2.8 TiB 3.9 MiB 6.7 GiB 706 GiB 80.26 0.98 51 up
24 ssd 3.49300 1.00000 3.5 TiB 2.8 TiB 2.8 TiB 3.1 MiB 6.9 GiB 750 GiB 79.04 0.97 52 up
27 ssd 3.49300 1.00000 3.5 TiB 2.8 TiB 2.8 TiB 2.7 MiB 6.9 GiB 700 GiB 80.43 0.99 52 up
31 ssd 3.49199 1.00000 3.5 TiB 2.9 TiB 2.9 TiB 4.0 MiB 6.4 GiB 605 GiB 83.08 1.02 51 up
32 ssd 3.49199 1.00000 3.5 TiB 2.9 TiB 2.8 TiB 1.3 MiB 6.5 GiB 652 GiB 81.76 1.00 50 up
33 ssd 3.49199 1.00000 3.5 TiB 2.8 TiB 2.8 TiB 116 MiB 6.6 GiB 700 GiB 80.42 0.99 53 up
38 ssd 3.49199 1.00000 3.5 TiB 2.8 TiB 2.7 TiB 3.5 MiB 6.2 GiB 758 GiB 78.81 0.97 52 up
42 ssd 6.98599 1.00000 7.0 TiB 5.8 TiB 5.8 TiB 1.6 MiB 12 GiB 1.2 TiB 82.99 1.02 102 up
6 ssd 3.49300 1.00000 3.5 TiB 3.0 TiB 3.0 TiB 2.0 MiB 7.5 GiB 459 GiB 87.18 1.07 53 up
7 ssd 3.49300 1.00000 3.5 TiB 3.0 TiB 3.0 TiB 117 MiB 7.1 GiB 501 GiB 86.00 1.05 52 up
8 ssd 3.49300 1.00000 3.5 TiB 2.9 TiB 2.9 TiB 0 B 6.9 GiB 652 GiB 81.78 1.00 52 up
11 ssd 3.49300 1.00000 3.5 TiB 2.9 TiB 2.8 TiB 0 B 7.0 GiB 655 GiB 81.69 1.00 52 up
14 ssd 3.49300 1.00000 3.5 TiB 3.1 TiB 3.1 TiB 2.9 MiB 7.1 GiB 352 GiB 90.15 1.10 53 up
25 ssd 3.49300 1.00000 3.5 TiB 2.7 TiB 2.7 TiB 334 KiB 6.6 GiB 805 GiB 77.50 0.95 51 up
34 ssd 3.49199 1.00000 3.5 TiB 2.8 TiB 2.8 TiB 4.1 MiB 6.3 GiB 712 GiB 80.10 0.98 51 up
35 ssd 3.49199 1.00000 3.5 TiB 2.7 TiB 2.6 TiB 4.8 MiB 5.9 GiB 858 GiB 76.00 0.93 53 up
36 ssd 3.49199 1.00000 3.5 TiB 2.9 TiB 2.9 TiB 47 KiB 6.7 GiB 610 GiB 82.94 1.02 53 up
39 ssd 3.49199 1.00000 3.5 TiB 3.1 TiB 3.1 TiB 2.9 MiB 6.8 GiB 401 GiB 88.78 1.09 52 up
40 ssd 3.49300 1.00000 3.5 TiB 3.2 TiB 3.1 TiB 3.0 MiB 7.2 GiB 351 GiB 90.19 1.11 52 up
43 ssd 6.98599 1.00000 7.0 TiB 5.9 TiB 5.9 TiB 81 MiB 12 GiB 1.1 TiB 84.50 1.04 103 up
15 ssd 6.98630 1.00000 7.0 TiB 5.3 TiB 5.3 TiB 5.6 MiB 9.7 GiB 1.7 TiB 75.98 0.93 100 up
44 ssd 6.98599 1.00000 7.0 TiB 5.4 TiB 5.4 TiB 2.4 MiB 11 GiB 1.6 TiB 77.33 0.95 97 up
45 ssd 6.98599 1.00000 7.0 TiB 5.3 TiB 5.3 TiB 3.7 MiB 11 GiB 1.7 TiB 75.94 0.93 97 up
46 ssd 6.98599 1.00000 7.0 TiB 5.7 TiB 5.6 TiB 259 KiB 12 GiB 1.3 TiB 80.97 0.99 99 up
47 ssd 6.98599 1.00000 7.0 TiB 5.5 TiB 5.4 TiB 2.2 MiB 11 GiB 1.5 TiB 78.06 0.96 97 up
48 ssd 6.98599 1.00000 7.0 TiB 5.7 TiB 5.7 TiB 5.4 MiB 12 GiB 1.3 TiB 81.63 1.00 101 up
49 ssd 6.98599 1.00000 7.0 TiB 5.8 TiB 5.7 TiB 1.5 MiB 12 GiB 1.2 TiB 82.39 1.01 100 up
TOTAL 185 TiB 151 TiB 151 TiB 522 MiB 339 GiB 34 TiB 81.61
MIN/MAX VAR: 0.93/1.11 STDDEV: 3.46

 

Per the previous answer:

You can also adjust the OSD CRUSH weights from the UI, under the Maintenance menu.

Per the previous question:

How should I proceed if I want to adjust the OSD CRUSH weights? Do I simply increase the CRUSH weight of the OSD with the most free space slowly, or do I also have to adjust the value of the rather full OSDs at the same time?

 

Decrease the value of the rather full OSDs; change the values, which are given in TB.
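
A sketch of what that looks like on the CLI, using the fullest OSD IDs from your "ceph osd df" output above and a slightly lower weight as an example value:

ceph osd crush reweight osd.40 3.4
ceph osd crush reweight osd.14 3.4

Lower in small steps and check the recovery with "ceph -s" before reducing further.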

OK, I have now reduced the CRUSH weight of the 6 fullest OSDs from 3.492 to 3.399. Now I also have misplaced objects, so something is being done.
How far do I have to balance this manually until the pool becomes larger?

I'd recommend you do this until you get an acceptable balance.

Then I'd check whether the auto-balancer can start optimizing by itself. I'd test switching between crush-compat and upmap mode and see if you get a better result. If you have unused CRUSH rules, I recommend you delete them. The same applies to unused pools; deleting them may help the autobalancer.
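
Roughly, the commands for that would be along these lines:

ceph balancer eval                # score of the current distribution, lower is better
ceph balancer mode crush-compat   # or: ceph balancer mode upmap
ceph osd crush rule ls            # list CRUSH rules; drop unused ones with "ceph osd crush rule rm <rule>"
ceph osd pool ls detail           # check for pools that are no longer needed

One thing to keep in mind: in upmap mode the balancer evens out the number of PGs per OSD, and in your "ceph osd df" the PG counts are already almost exactly proportional to the weights, which is probably why it reports nothing left to optimize even though %USE still ranges from about 76% to 90%.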

OK, I'll try that and will get back to you if I get stuck.
Thanks a lot
