change (increase) PG_COUNT
tb
9 Posts
November 2, 2017, 9:33 am
Hi.
disks -> PG_COUNT
3-15 -> 256
15-50 -> 1024
50-200 -> 4096
More than 200 -> 8192
I have a PetaSAN 1.4.0 cluster running (a test setup under virt-manager). It works perfectly. Its name is kin.
Number of Replicas : 2
I currently have 3 nodes with 5 physical disks per node. That's a total of 15 physical disks. My Ceph Health is green on the management interface.
I want to add a node with 5 physical disks. The total number of physical disks will then increase to 20, so I need to increase PG_COUNT from 256 to 1024.
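For reference, my understanding (an assumption on my part, not something from the table above) is that the usual Ceph rule of thumb is pg_count ≈ (number of OSDs × 100) / number of replicas, rounded up to the next power of two. With 20 OSDs and 2 replicas that gives 20 × 100 / 2 = 1000, which rounds up to 1024 and matches the table.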
How do I do this on the command line?
Can I do this without a break in service?
I tried the following.
Get the pool list:
ceph --cluster kin osd lspools
Response:
1, rbd,
ceph --cluster kin osd pool get rbd pg_num
ceph --cluster kin osd pool get rbd pgp_num
Response:
pg_num: 256
pgp_num: 256
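Before changing anything I also check the overall cluster state and the pool replication size from the command line (just my own habit; the --cluster kin flag matches the custom cluster name used in the commands above):
ceph --cluster kin status
ceph --cluster kin health detail
ceph --cluster kin osd pool get rbd size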
Are these commands, run on node 1, 2, or 3, the correct way to increase PG_COUNT?
ceph --cluster kin osd pool set rbd pg_num 1024
ceph --cluster kin osd pool set rbd pgp_num 1024
And without a break in service?
Thanks
Last edited on November 2, 2017, 10:58 am by tb · #1
admin
2,930 Posts
November 2, 2017, 12:12 pm
Generally, changing the PG count in Ceph is not a pleasant thing, as it involves a data rebalance. So depending on how much data you have, it could take a long time to complete moving it. It is always best to try to set this up correctly early.
Also, Ceph puts a limit on how much you can increase it at a time, so you may need to do this in steps. It will not allow a 4x increase in a single step, so try:
ceph osd pool set rbd pg_num 512 --cluster kin
ceph osd pool set rbd pgp_num 512 --cluster kin
OR
ceph osd pool set rbd pg_num 384 --cluster kin
ceph osd pool set rbd pgp_num 384 --cluster kin
Then, after the rebalance completes, you can perform another increase with higher values. You can view the progress by watching the PG Status chart on the dashboard.
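As a sketch of that follow-up step (assuming the first rebalance has finished and 1024 is your final target), the second round would be:
ceph osd pool set rbd pg_num 1024 --cluster kin
ceph osd pool set rbd pgp_num 1024 --cluster kin
If you prefer the command line over the dashboard, you can also watch the rebalance with:
ceph --cluster kin -s
ceph --cluster kin pg stat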
Try to do it at night or with minimal client load. If you have a lot of data it may take days to move, but unless you have very weak hardware it should go well and client I/O will not be affected too much.
Another option is to keep the 256 PGs as is. The drawback is that having few PGs per disk can lead to uneven storage utilization between the disks. In this case you should add the following line:
mon_pg_warn_min_per_osd = 20
to /etc/ceph/kin.conf on all machines, just below the [global] line at the top.
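For illustration only (the rest of the file is whatever your existing kin.conf already contains; only the warning-threshold line is new), the top of /etc/ceph/kin.conf would then look something like:
[global]
mon_pg_warn_min_per_osd = 20
# ... your existing global settings below, unchanged ...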
Last edited on November 2, 2017, 12:17 pm by admin · #2
tb
9 Posts
November 2, 2017, 1:09 pm
Well understood.
Thank you for your reply.
Best regards.
tb
9 Posts
January 14, 2018, 11:39 am
I have successfully updated PetaSAN 1.4.0 to 1.5.0 without any break in service. So, thanks for the good job 🙂