
change (increase) PG_COUNT

Hi.

disks -> PG_COUNT
3-15 -> 256
15-50 -> 1024
50-200 -> 4096
More than 200 -> 8192

I have a PetaSAN 1.4.0 cluster running (a test setup with virt-manager). It works perfectly. Its name is kin.
Number of Replicas : 2

I currently have 3 nodes with 5 physical disks per node. That's a total of 15 physical disks. My Ceph Health is green on the management interface.

I want to add a node with 5 physical disks. The total number of physical disks will increase to 20, so I need to increase PG_COUNT from 256 to 1024.
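As a cross-check, the common Ceph rule of thumb (total PGs ≈ number of OSDs × 100 / replicas, rounded up to the next power of two) gives the same number: 20 × 100 / 2 = 1000, and the next power of two is 1024.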

How do I do this on the command line?
Can I do it without a break in service?

I tried:

Get the pools list:

ceph --cluster kin osd lspools

Response:
1, rbd,

ceph --cluster kin osd pool get rbd pg_num
ceph --cluster kin osd pool get rbd pgp_num

Response:
pg_num: 256
pgp_num: 256
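For reference, the pg_num and pgp_num of every pool can also be read in one shot from ceph osd dump (this is generic Ceph, and the exact output format can vary a little between releases):

ceph --cluster kin osd dump | grep pool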

Are the following commands, run on node 1, 2, or 3, correct for increasing PG_COUNT?

ceph --cluster kin osd pool set rbd pg_num 1024
ceph --cluster kin osd pool set rbd pgp_num 1024

Without a break in service?

Thanks


Generally, changing the PG count in Ceph is not a pleasant thing, as it involves a data re-balance. So depending on how much data you have, it could take a long time to move it all. It is always best to try to set this up correctly early on.

Also, Ceph puts a limit on how much you can increase it at a time, so you may need to do this in steps. It will not allow a 4x increase in a single step, so try:

ceph osd pool set rbd pg_num 512 --cluster kin
ceph osd pool set rbd pgp_num 512 --cluster kin

OR
ceph osd pool set rbd pg_num 384 --cluster kin
ceph osd pool set rbd pgp_num 384 --cluster kin

Then, after the re-balance completes, you can perform another increase with higher values. You can view the progress by observing the PG Status chart on the dashboard.
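If you prefer the command line, the same progress can also be followed with the standard Ceph status commands (generic Ceph, not PetaSAN-specific):

ceph --cluster kin -s
ceph --cluster kin -w

The first gives a one-shot status including PG states and recovery progress; the second keeps following it until you press Ctrl-C.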

Try to do it at night / with minimal client load. If you have a lot of data it may take days to move. Unless you have very weak hardware, it should go well and client I/O will not be affected too much.

Another option is to keep the 256 PGs as they are. The drawback is that having few PGs per disk can lead to uneven storage utilization between the disks. In this case you should add the following line:

mon_pg_warn_min_per_osd = 20

to /etc/ceph/kin.conf on all machines, just below the [global] line at the top.
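As a sketch, the top of /etc/ceph/kin.conf would then look like this (keep whatever your file already contains; only the mon_pg_warn_min_per_osd line is new):

[global]
mon_pg_warn_min_per_osd = 20
... your existing [global] settings ...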

Well understood.
Thank you for your reply.

Best regards.

I have successfully updated PetaSAN 1.4.0 to 1.5.0 without any break in service. So, thanks for the good job 🙂

Thanks !