Balancing inefficient and resize PG's
killerodin
33 Posts
November 8, 2022, 10:50 am
We have a pool with 43 OSDs (10x 7 TB and 33x 3.5 TB, if that matters). We started this pool with 900 PGs because it had only 9 OSDs at the beginning.
When we added the latest OSD (also a 7 TB disk), the pool's capacity did not increase.
I also have the feeling that the balancer is not working optimally.
ceph osd df shows this
ID CLASS WEIGHT REWEIGHT SIZE RAW USE DATA OMAP META AVAIL %USE VAR PGS STATUS
0 ssd 3.49300 1.00000 3.5 TiB 2.7 TiB 2.7 TiB 4.8 MiB 7.2 GiB 794 GiB 77.79 0.97 51 up
1 ssd 3.49300 1.00000 3.5 TiB 2.9 TiB 2.8 TiB 116 MiB 7.3 GiB 652 GiB 81.78 1.02 53 up
2 ssd 3.49300 1.00000 3.5 TiB 3.0 TiB 3.0 TiB 2.4 MiB 7.3 GiB 500 GiB 86.02 1.07 53 up
9 ssd 3.49300 1.00000 3.5 TiB 2.8 TiB 2.8 TiB 2.9 MiB 6.9 GiB 700 GiB 80.43 1.00 52 up
12 ssd 3.49300 1.00000 3.5 TiB 3.0 TiB 2.9 TiB 0 B 6.7 GiB 553 GiB 84.55 1.05 50 up
23 ssd 3.49300 1.00000 3.5 TiB 2.8 TiB 2.8 TiB 916 KiB 6.8 GiB 695 GiB 80.57 1.00 52 up
26 ssd 3.49300 1.00000 3.5 TiB 2.8 TiB 2.8 TiB 81 KiB 7.0 GiB 701 GiB 80.42 1.00 51 up
28 ssd 3.49199 1.00000 3.5 TiB 2.9 TiB 2.8 TiB 2.4 MiB 6.5 GiB 653 GiB 81.75 1.02 49 up
29 ssd 3.49199 1.00000 3.5 TiB 2.8 TiB 2.8 TiB 3.8 MiB 6.5 GiB 698 GiB 80.48 1.00 53 up
30 ssd 3.49199 1.00000 3.5 TiB 2.9 TiB 2.9 TiB 2.1 MiB 6.6 GiB 598 GiB 83.28 1.04 51 up
37 ssd 3.49199 1.00000 3.5 TiB 2.9 TiB 2.9 TiB 4.7 MiB 6.4 GiB 593 GiB 83.43 1.04 52 up
41 ssd 6.98599 1.00000 7.0 TiB 5.4 TiB 5.4 TiB 210 KiB 11 GiB 1.6 TiB 77.67 0.97 100 up
3 ssd 3.49300 1.00000 3.5 TiB 2.8 TiB 2.8 TiB 0 B 7.5 GiB 750 GiB 79.02 0.98 50 up
4 ssd 3.49300 1.00000 3.5 TiB 2.8 TiB 2.8 TiB 4.1 MiB 6.9 GiB 700 GiB 80.43 1.00 51 up
5 ssd 3.49300 1.00000 3.5 TiB 2.9 TiB 2.9 TiB 4.0 MiB 7.1 GiB 603 GiB 83.13 1.03 53 up
10 ssd 3.49300 1.00000 3.5 TiB 2.9 TiB 2.9 TiB 3.2 MiB 6.9 GiB 645 GiB 81.97 1.02 52 up
13 ssd 3.49300 1.00000 3.5 TiB 2.8 TiB 2.8 TiB 5.7 MiB 6.8 GiB 698 GiB 80.48 1.00 52 up
24 ssd 3.49300 1.00000 3.5 TiB 2.7 TiB 2.7 TiB 3.1 MiB 6.8 GiB 792 GiB 77.87 0.97 52 up
27 ssd 3.49300 1.00000 3.5 TiB 2.8 TiB 2.8 TiB 4.8 MiB 7.1 GiB 742 GiB 79.26 0.99 52 up
31 ssd 3.49199 1.00000 3.5 TiB 2.8 TiB 2.8 TiB 2.8 MiB 6.4 GiB 698 GiB 80.47 1.00 50 up
32 ssd 3.49199 1.00000 3.5 TiB 2.8 TiB 2.8 TiB 4.2 MiB 6.7 GiB 694 GiB 80.59 1.00 50 up
33 ssd 3.49199 1.00000 3.5 TiB 2.8 TiB 2.8 TiB 115 MiB 6.3 GiB 742 GiB 79.24 0.99 53 up
38 ssd 3.49199 1.00000 3.5 TiB 2.7 TiB 2.7 TiB 3.2 MiB 6.2 GiB 799 GiB 77.65 0.97 52 up
42 ssd 6.98599 1.00000 7.0 TiB 5.7 TiB 5.7 TiB 2.1 MiB 11 GiB 1.3 TiB 81.76 1.02 102 up
6 ssd 3.49300 1.00000 3.5 TiB 3.0 TiB 3.0 TiB 2.0 MiB 7.3 GiB 455 GiB 87.29 1.09 54 up
7 ssd 3.49300 1.00000 3.5 TiB 3.0 TiB 3.0 TiB 108 MiB 6.9 GiB 546 GiB 84.73 1.05 52 up
8 ssd 3.49300 1.00000 3.5 TiB 2.8 TiB 2.8 TiB 0 B 6.9 GiB 695 GiB 80.57 1.00 52 up
11 ssd 3.49300 1.00000 3.5 TiB 2.8 TiB 2.8 TiB 0 B 6.7 GiB 698 GiB 80.49 1.00 52 up
14 ssd 3.49300 1.00000 3.5 TiB 3.1 TiB 3.1 TiB 3.1 MiB 7.3 GiB 399 GiB 88.85 1.10 53 up
25 ssd 3.49300 1.00000 3.5 TiB 2.7 TiB 2.7 TiB 334 KiB 6.7 GiB 846 GiB 76.35 0.95 51 up
34 ssd 3.49199 1.00000 3.5 TiB 2.6 TiB 2.6 TiB 562 KiB 6.4 GiB 902 GiB 74.76 0.93 48 up
35 ssd 3.49199 1.00000 3.5 TiB 2.6 TiB 2.6 TiB 2.8 MiB 6.0 GiB 898 GiB 74.90 0.93 53 up
36 ssd 3.49199 1.00000 3.5 TiB 2.9 TiB 2.9 TiB 4.8 MiB 6.3 GiB 604 GiB 83.10 1.03 54 up
39 ssd 3.49199 1.00000 3.5 TiB 3.1 TiB 3.0 TiB 678 KiB 6.7 GiB 448 GiB 87.47 1.09 52 up
40 ssd 3.49300 1.00000 3.5 TiB 3.1 TiB 3.1 TiB 1.6 MiB 7.3 GiB 398 GiB 88.88 1.11 52 up
43 ssd 6.98599 1.00000 7.0 TiB 5.9 TiB 5.9 TiB 80 MiB 12 GiB 1.1 TiB 84.61 1.05 105 up
15 ssd 6.98630 1.00000 7.0 TiB 5.2 TiB 5.2 TiB 5.3 MiB 8.9 GiB 1.8 TiB 74.85 0.93 100 up
44 ssd 6.98599 1.00000 7.0 TiB 5.2 TiB 5.2 TiB 867 KiB 11 GiB 1.8 TiB 74.80 0.93 95 up
45 ssd 6.98599 1.00000 7.0 TiB 5.2 TiB 5.2 TiB 3.7 MiB 11 GiB 1.8 TiB 74.85 0.93 97 up
46 ssd 6.98599 1.00000 7.0 TiB 5.6 TiB 5.6 TiB 3.9 MiB 12 GiB 1.4 TiB 79.79 0.99 99 up
47 ssd 6.98599 1.00000 7.0 TiB 5.4 TiB 5.4 TiB 2.2 MiB 11 GiB 1.6 TiB 76.90 0.96 97 up
48 ssd 6.98599 1.00000 7.0 TiB 5.6 TiB 5.6 TiB 5.4 MiB 12 GiB 1.4 TiB 80.43 1.00 101 up
49 ssd 6.98599 1.00000 7.0 TiB 5.7 TiB 5.7 TiB 1.5 MiB 12 GiB 1.3 TiB 81.19 1.01 100 up
TOTAL 185 TiB 149 TiB 149 TiB 518 MiB 336 GiB 36 TiB 80.41
MIN/MAX VAR: 0.93/1.11 STDDEV: 3.68
ceph balancer status shows this
{
"active": true,
"last_optimize_duration": "0:00:00.739720",
"last_optimize_started": "Tue Nov 8 11:16:52 2022",
"mode": "crush-compat",
"optimize_result": "Unable to find further optimization, change balancer mode and retry might help",
"plans": []
}
ceph df detail shows this
--- RAW STORAGE ---
CLASS SIZE AVAIL USED RAW USED %RAW USED
ssd 185 TiB 36 TiB 149 TiB 149 TiB 80.41
TOTAL 185 TiB 36 TiB 149 TiB 149 TiB 80.41
--- POOLS ---
POOL ID PGS STORED (DATA) (OMAP) OBJECTS USED (DATA) (OMAP) %USED MAX AVAIL QUOTA OBJECTS QUOTA BYTES DIRTY USED COMPR UNDER COMPR
Pool-700C 7 900 50 TiB 50 TiB 16 MiB 14.23M 149 TiB 149 TiB 49 MiB 92.92 3.8 TiB N/A N/A 14.23M 0 B 0 B
device_health_metrics 9 1 139 MiB 0 B 139 MiB 56 417 MiB 0 B 417 MiB 0 3.8 TiB N/A N/A 56 0 B 0 B
If I try to change the balancer mode to upmap, I get the error "Error, min_compat_client "luminous" is required for pg-upmap."
ceph features shows this
{
"mon": [
{
"features": "0x3f01cfb8ffedffff",
"release": "luminous",
"num": 3
}
],
"mds": [
{
"features": "0x3f01cfb8ffedffff",
"release": "luminous",
"num": 3
}
],
"osd": [
{
"features": "0x3f01cfb8ffedffff",
"release": "luminous",
"num": 43
}
],
"client": [
{
"features": "0x2f018fb86aa42ada",
"release": "luminous",
"num": 3
},
{
"features": "0x3f01cfb8ffedffff",
"release": "luminous",
"num": 3
}
],
"mgr": [
{
"features": "0x3f01cfb8ffedffff",
"release": "luminous",
"num": 3
}
]
}
Do we need to resize the number of PGs? If yes, can I change this on the production system during operation, or is there something else we should change?
admin
2,930 Posts
November 8, 2022, 8:26 pm
What version of PetaSAN are you using? Was it upgraded? If so, what was the initial installation version?
killerodin
33 Posts
November 9, 2022, 7:44 am
We're now using version 3.1.0.
Yes, we've upgraded it several times, but I don't know which version we installed initially. It was around October 2020, so I think it was 2.6.0 or a minor release of it.
admin
2,930 Posts
November 9, 2022, 1:52 pm
Can you try:
ceph osd set-require-min-compat-client luminous
ceph balancer mode upmap
Do you get an error?
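You can also verify what the cluster currently requires before and after the first command; this is just a quick check with the standard Ceph CLI, nothing PetaSAN specific:
ceph osd dump | grep require_min_compat_client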
Is PG autoscale enabled on your pool? If not, you can enable it via the command line. The current PG count is on the low end, which can affect how accurately data is distributed.
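For example, something along these lines should turn the autoscaler on for the pool shown in your ceph df detail output and then report what it plans to do (Pool-700C is the name from your output, adjust if it differs):
ceph osd pool set Pool-700C pg_autoscale_mode on
ceph osd pool autoscale-status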
You can also adjust the OSD crush weights from the UI under the Maintenance menu.
killerodin
33 Posts
November 10, 2022, 7:45 am
Can I make all these changes in production, or do I have to announce a maintenance window because the pool would go offline for a short time?
PG autoscale is off
admin
2,930 Posts
November 10, 2022, 4:32 pm
You can make the changes in production. One recommendation, since changing PG counts will cause re-balance load: lower the backfill speed on the maintenance page to slow, then monitor things for an hour and look at the disk % util dashboard charts as well as CPU and network. If they are not too stressed, you can increase the speed to average.
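If you end up increasing the PG count by hand rather than through the autoscaler, it would look roughly like the following. The value 2048 is only an illustrative choice (a common rule of thumb is about 100 PGs per OSD divided by the replica size, rounded to a power of two), and the pool name is taken from your output above:
ceph osd pool set Pool-700C pg_num 2048
ceph osd pool set Pool-700C pgp_num 2048
On recent Ceph releases pgp_num should follow pg_num on its own, so the second command may not be needed.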
killerodin
33 Posts
November 11, 2022, 12:58 pm
OK, changing the balancer mode to upmap was successful. Now ceph balancer status shows
{
"active": true,
"last_optimize_duration": "0:00:00.005992",
"last_optimize_started": "Fri Nov 11 13:45:07 2022",
"mode": "upmap",
"optimize_result": "Unable to find further optimization, or pool(s) pg_num is decreasing, or distribution is already perfect",
"plans": []
}
I was also able to enable autoscale on the pool, but could it take some time until the PGs are actually adjusted? So far nothing has changed, and I activated autoscale about an hour ago.
killerodin
33 Posts
November 11, 2022, 1:01 pm
ceph osd pool autoscale-status shows
POOL SIZE TARGET SIZE RATE RAW CAPACITY RATIO TARGET RATIO EFFECTIVE RATIO BIAS PG_NUM NEW PG_NUM AUTOSCALE
Pool-700C 50789G 3.0 185.1T 0.8038 1.0 900 on
device_health_metrics 141.0M 3.0 185.1T 0.0000 1.0 1 on
killerodin
33 Posts
November 15, 2022, 4:31 pm
Sorry for asking again, but is there anything else I need to do so that the autoscaler starts doing something?
admin
2,930 Posts
November 15, 2022, 7:24 pm
Can you try increasing target_size_ratio:
ceph osd pool set XX target_size_ratio 1.0
or
ceph osd pool set XX target_size_bytes YYY
Also, one sure way to control the balance is to adjust the OSD crush weights under the Maintenance menu.
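For the pool in this thread, a concrete version of the above would be something like this, using the pool name from your earlier output:
ceph osd pool set Pool-700C target_size_ratio 1.0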