Configuration option 'bluestore_block_db_size' may not be modified at runtime
charlie_mtl
9 Posts
April 20, 2020, 1:32 pm
Hi team,
I am building an image synchronization script between 2 clusters, production and DR sites.
As an example, I see this when I run: rbd snap ls ...
"7f66dd2f0700 -1 set_mon_vals failed to set bluestore_block_db_size = 64424509440: Configuration option 'bluestore_block_db_size' may not be modified at runtime"
Any hint?
Thank you,
Charlie
admin
2,930 Posts
April 20, 2020, 11:30 pm
What version are you using?
Do you get this on all commands? All rbd commands? Or only rbd snap commands?
Did you add/modify any config keys yourself, either via the ui or cli ?
Do you have any of these keys in the global section of the central Ceph config? (Check via the UI, or from the CLI as sketched after this list.)
cluster_network
public_network
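As an aside, assuming the ceph CLI is available on a node, the same check can be done from the shell:
ceph config dump | grep -E 'cluster_network|public_network'
An empty result means neither key is set in the central config.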
Did you modify any log levels yourself?
Are you aware we support replication via snapshots built-in 🙂?
Last edited on April 20, 2020, 11:31 pm by admin · #2
charlie_mtl
9 Posts
April 21, 2020, 12:18 pm
Hi Admin,
Thank you for your reply.
I have 2 clusters, one on-prem and the second at the DR site.
Both were version 2.4 (maybe 2.3), gradually upgraded to 2.5.1.
Yes, I see this message on all rbd commands, and I haven't added or modified any config keys.
I tried to add public_network, cluster_network and mon_crush_min_required_version (they were not present), as advised in multiple places.
But even after rebooting one of the 4 PetaSAN nodes (the one I run my script from), it didn't help, so I removed those keys.
These 2 clusters provide iSCSI LUNs and also Ceph RBD storage for a few Proxmox clusters. What I did was create one dedicated DR pool in each PetaSAN cluster and share them with 2 Proxmox clusters. All on-prem Proxmox VMs stored in the "petasan/proxmox01" pool are replicated and kept in sync every hour to the DR-site PetaSAN pool "replicat01".
I have no choice but to do it with a bash script, because each targeted VM must be frozen before the RBD image snapshot is created, and a lot of other tasks are needed before and after the sync process.
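For illustration, the core of each sync iteration looks roughly like this (a simplified sketch; the VM id, image names and DR host are examples, and error handling plus the thaw-on-failure path are omitted):
SRC=petasan/proxmox01/vm-101-disk-0                              # example source image
DST=replicat01/vm-101-disk-0                                     # example DR-side image
PREV=$(rbd snap ls "$SRC" | awk 'NR>1 {print $2}' | tail -1)     # last synced snapshot
SNAP=sync-$(date +%Y%m%d%H%M)
qm guest cmd 101 fsfreeze-freeze                                 # freeze the guest filesystem
rbd snap create "$SRC@$SNAP"
qm guest cmd 101 fsfreeze-thaw                                   # thaw right after the snapshot
rbd export-diff --from-snap "$PREV" "$SRC@$SNAP" - | ssh dr-node rbd import-diff - "$DST"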
I look forward to the new PetaSAN version with NFS exports.
Thank you again for your great work.
Charlie
admin
2,930 Posts
April 21, 2020, 1:57 pm
1) We do not see such an error in our environment; maybe it is a key you added in the past. Can you email the following to contact-us at petasan.org:
ceph config dump
content of /etc/ceph/ceph.conf on the node with the error
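For example, assuming shell access on the affected node, something like:
ceph config dump | grep bluestore_block_db_size     # where is the key held centrally?
grep bluestore_block_db_size /etc/ceph/ceph.conf    # is it also in the local conf file?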
2) In the past we had a similar issue with 2 config keys:
cluster_network
public_network
as per bug https://tracker.ceph.com/issues/40649
PetaSAN does handle this bug as per the fix mentioned, but I asked because the error looks so similar.
3) I am not sure why in your case bluestore_block_db_size is the offending key, but a solution to try is to remove it from the monitor database (either in global or osd) and add it to /etc/ceph/ceph.conf on all nodes. Not ideal, but it could tell us if this is indeed the issue.
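A sketch of what that could look like (assuming the key currently sits in the global section):
ceph config rm global bluestore_block_db_size       # drop it from the monitor database
Then add it under the [osd] section of /etc/ceph/ceph.conf on every node:
[osd]
bluestore_block_db_size = 64424509440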
4)
I have no choice but to do it with a bash script, because each targeted VM must be frozen before the RBD image snapshot is created, and a lot of other tasks are needed before and after the sync process.
PetaSAN replication does allow integrating custom scripts in the advanced options, the following is from the guide:
In case you need to run custom scripts during a replication job, you can define external URLs to be called at specific stages of replication, such as prior to performing disk snapshots or after job completion. These could be used in more advanced setups to flush files, lock database tables or send email on job completion
5) Are you using Bluestore or Filestore?
6)
I look forward to the new Petasan version with NFS exports.
Thank you again for your great work.
NFS is looking awesome 🙂
Last edited on April 21, 2020, 1:58 pm by admin · #4
charlie_mtl
9 Posts
April 21, 2020, 2:24 pm
Hi,
Just sent you the information you've requested.
I'm using Bluestore.
Thank you,
Charlie
admin
2,930 Posts
April 21, 2020, 4:47 pm
Delete the osd bluestore_block_db_size key from the UI under the global section,
then use the CLI to add it under the osd section:
ceph config set osd bluestore_block_db_size 64424509440
We use the CLI since this setting is a "developer" setting, which we do not allow adding via the UI (we allow basic/advanced).
It seems the key causes an issue if put under the global section; it should not. It was probably imported from an older conf file; more recent PetaSAN releases put this key under the osd section rather than global, but again, global should have worked.
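To confirm where the key ended up (a quick check with the standard ceph CLI):
ceph config dump | grep bluestore_block_db_size
The key should now be listed under osd rather than global.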
charlie_mtl
9 Posts
April 21, 2020, 5:11 pm
Hi admin, I've deleted this key from the GUI, but it's back after a few seconds.
Thank you,
Charlie
admin
2,930 Posts
April 21, 2020, 5:15 pm
Try rebooting all nodes, then do the same.
charlie_mtl
9 Posts
April 22, 2020, 1:27 pm
Hi, I updated each PetaSAN node from 2.5.1 to 2.5.2 and rebooted them one at a time.
I removed bluestore_block_db_size from the GUI, but it re-appeared after a few seconds.
Is it something propagated through Consul? If so, what would be the correct way to fix it?
Thank you again.
Charlie
admin
2,930 Posts
April 22, 2020, 2:05 pm
No, there is no Consul sync.
How long does it take for the key to re-appear?
Can you try these individually:
systemctl stop petasan-config-upload
ceph config rm global bluestore_block_db_size
ceph config-key rm config/global/bluestore_block_db_size
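If the key then stays gone, presumably the upload service should be started again and the removal verified (this last part is an assumption; the thread does not spell it out):
ceph config dump | grep bluestore_block_db_size     # should return nothing
systemctl start petasan-config-upload               # assumed follow-up once fixed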