
Bluestore_Block_DB_Size falls back to default value 60 GB


Can you confirm: your cluster health is OK, the value is not defined in any conf file, you set it to 300 GB from the UI Ceph Configuration page, it works, but after a few hours it reverts to 60 GB by itself. Can you double check? If so, we will try to reproduce it.
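
In case it helps with the double check, here is a sketch of where to look (assuming a recent Ceph release with the centralized config database; the path and the byte value are illustrative):

  # Confirm the option is not set in a local conf file (assumes the default /etc/ceph/ path)
  grep -R bluestore_block_db_size /etc/ceph/

  # Value currently stored in the cluster configuration database, in bytes (300 GiB = 322122547200)
  ceph config get osd bluestore_block_db_size

  # Show which section, if any, the option is set under
  ceph config dump | grep bluestore_block_db_size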

For live migration, see our VMware guide.

Quote:
Can you confirm: your cluster health is OK, the value is not defined in any conf file, you set it to 300 GB from the UI Ceph Configuration page, it works, but after a few hours it reverts to 60 GB by itself. Can you double check? If so, we will try to reproduce it.

Exactly.

Quote:
For live migration, see our VMware guide.

I can't find anything about this problem in the guide.

I have 2 Ceph clusters with 4 nodes each. When I want to move a VM from cluster A to B, it doesn't work; it aborts. But as soon as I move a VM offline, it works without problems, albeit slowly (20 MB/s).

I don't want to move a VM between 2 iSCSI disks, but from cluster A to cluster B.


Can you please retry the DB size issue one more time and confirm it is reproducible?

For the live migration issue: live migration requires good performance, as data is being changed live on the source. Within the same cluster, PetaSAN supports VAAI acceleration, which speeds things up, but this does not work across different clusters, where the migration is slower, may not keep up with changes, and may time out. Your best bet is to try to lower the live traffic on the source, or migrate to a temporary fast SSD pool first, or use Veeam, which does live migration faster across different clusters.
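
If you want to confirm whether VAAI offload is actually in play on the source datastore, a quick check on an ESXi host would look something like this (a sketch; naa.xxxx is just a placeholder for your PetaSAN iSCSI LUN identifier):

  # Per-primitive VAAI status (ATS, Clone, Zero, Delete) for a given device
  esxcli storage core device vaai status get -d naa.xxxx

  # Overall VAAI status as reported in the device listing
  esxcli storage core device list -d naa.xxxx | grep "VAAI Status"

Within one cluster these should show the primitives as supported; across two separate clusters the copy cannot be offloaded, which is why the cross-cluster migration falls back to slower host-based copying.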
