
Configuration option 'bluestore_block_db_size' may not be modified at runtime


Hi team,

I am building an image synchronization script between 2 clusters, the production and DR sites.

As an example, I see this when I run: rbd snap ls ...

"7f66dd2f0700 -1 set_mon_vals failed to set bluestore_block_db_size = 64424509440: Configuration option 'bluestore_block_db_size' may not be modified at runtime"

Any hint?

Thank you,

Charlie

What version are you using?

Do you get this on all commands? All rbd commands? Or only rbd snap commands?

Did you add/modify any config keys yourself, either via the UI or CLI?

Do you have any of these keys in the global section of the central Ceph config (check via the UI, or with the command at the end of this post)?

cluster_network
public_network

Did you modify any log levels yourself?

Are you aware we support built-in replication via snapshots 🙂?
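If the CLI is handier than the UI for that check, something like this on any node should show them:

ceph config dump | grep -E 'cluster_network|public_network'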

Hi Admin,

Thank you for your reply.

I've got 2 clusters, one on-prem and the second on the DR site.

Both were version 2.4 (maybe 2.3) and were gradually upgraded to 2.5.1.

Yes, I see this message on all rbd commands. I haven't added or modified any config keys.

I tried to add public_network, cluster_network and mon_crush_min_required_version (they were not present), as is advised in multiple places.

But even after rebooting one of the 4 PetaSAN nodes (the one from which I run my script), it didn't help, so I removed those keys.

Those 2 clusters provide iSCSI LUNs, and they also provide Ceph RBD storage for a few Proxmox clusters. What I did was create one dedicated DR pool in each PetaSAN cluster and share them with 2 Proxmox clusters. All on-prem Proxmox VMs stored in the "petasan/proxmox01" pool are replicated and kept in sync every hour to the DR-site PetaSAN pool "replicat01".

I've no choice but to do it with a bash script, because each targeted VM must be frozen before the RBD image snapshot is created, and a lot of other tasks are needed before and after the sync process. Roughly, each cycle does something like the sketch below.
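A simplified sketch of one cycle for a single image (not the exact script; the image, host and snapshot names are placeholders, and it assumes the QEMU guest agent is running in the VM):

#!/bin/bash
# one sync cycle for one image (simplified; all names are placeholders)
VMID=100
SRC=proxmox01/vm-100-disk-0      # image in the on-prem pool
DST=replicat01/vm-100-disk-0     # matching image in the DR pool
PREV=sync-prev                   # snapshot left by the previous run
SNAP=sync-$(date +%Y%m%d%H%M)    # name for this run's snapshot

qm guest cmd "$VMID" fsfreeze-freeze    # freeze guest filesystems via the agent
rbd snap create "$SRC@$SNAP"            # consistent point-in-time snapshot
qm guest cmd "$VMID" fsfreeze-thaw      # thaw right away

# ship only the delta since the previous snapshot to the DR cluster
rbd export-diff --from-snap "$PREV" "$SRC@$SNAP" - | ssh dr-node "rbd import-diff - $DST"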

I look forward to the new PetaSAN version with NFS exports.

Thank you again for your great work.

Charlie

1) We do not see such an error in our environment; maybe it is a key you added in the past. Can you email the following to contact-us at petasan.org:

ceph config dump
the content of /etc/ceph/ceph.conf on the node with the error

2) In the past we had a similar issue with 2 config keys:
cluster_network
public_network
as per bug https://tracker.ceph.com/issues/40649

PetaSAN does handle this bug as per the fix mentioned, but I asked because the error looks so similar.

3) I am not sure why in your case you see the bluestore_block_db_size key as offending, but a solution to try may be to remove it from the monitor database (either in global or osd) and add it to /etc/ceph/ceph.conf on all nodes. Not ideal, but it could tell us whether this is indeed the issue.
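For example (a sketch; adjust the section if the key sits under osd rather than global):

ceph config rm global bluestore_block_db_size

then on every node, add to /etc/ceph/ceph.conf:

[osd]
bluestore_block_db_size = 64424509440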

4)

I've no choice but to do it with a bash script, because each targeted VM must be frozen before the RBD image snapshot is created, and a lot of other tasks are needed before and after the sync process

PetaSAN replication does allow integrating custom scripts in the advanced options; the following is from the guide:
In case you need to run custom scripts during a replication job, you can define external URLs to be called at specific stages of replication, such as prior to performing disk snapshots or after job completion. These could be used in more advanced setups to flush files, lock database tables or send email on job completion.
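For instance, the pre-snapshot URL could point at a small endpoint on the Proxmox side that triggers something like the following (purely illustrative; the script name and VM IDs are made up, and how the URL is served is up to your setup):

#!/bin/bash
# freeze-hook.sh (illustrative name): run when the replication job calls the pre-snapshot URL
for vmid in 100 101 102; do                 # hypothetical VM IDs
    qm guest cmd "$vmid" fsfreeze-freeze    # freeze guest filesystems via the QEMU guest agent
done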

5) Are you using Bluestore or Filestore?

6)

I look forward to the new PetaSAN version with NFS exports.
Thank you again for your great work.

NFS is looking awesome 🙂

Hi,

I've just sent you the information you requested.

I'm using Bluestore.

Thank you,

Charlie

Delete the bluestore_block_db_size key from the UI under the global section,

then use the CLI to add it under the osd section:

ceph config set osd bluestore_block_db_size 64424509440

We use the CLI since this setting is a "developer" setting, which we do not allow adding via the UI (we only allow basic/advanced settings there).

It seems the key causes an issue if put under the global section, which it should not. It was probably imported from an older conf file; the more recent PetaSAN releases put this key under the osd section rather than global, but again, global should have worked.

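Once moved, you can confirm where the key ended up, for example:

ceph config dump | grep bluestore_block_db_size

It should now show under osd rather than global.
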
Hi admin, I've deleted this key from the GUI but it's back after a few seconds.

Thank you,

Charlie

Try rebooting all nodes, then do the same.

Hi, I updated each PetaSAN node from 2.5.1 to 2.5.2 and rebooted them one at a time.

I removed bluestore_block_db_size from the GUI, but it re-appeared after a few seconds.

Is it something propagated through Consul? If so, what would be the correct way to fix it?

Thank you again.

Charlie

No, there is no Consul sync.

How long does it take for the key to re-appear?

Can you try these individually:
systemctl stop petasan-config-upload
ceph config rm global bluestore_block_db_size
ceph config-key rm config/global/bluestore_block_db_size
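then check whether the key stays gone, for example:

ceph config dump | grep bluestore_block_db_size

and if it does, start the upload service again (assuming it normally runs):

systemctl start petasan-config-upload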
