
Upgrade 2.0 to 2.1

Hi,

What is the procedure to upgrade? Do we need to place the cluster in maintenance mode?

Regards

Raf

Start from management node 1 and proceed in sequence:

  1. Make sure the cluster is active/clean (a quick command-line check is sketched just below).
  2. Move the paths off the node to be upgraded. This is not essential, but it switches I/O immediately rather than waiting for the ~20s path failover.
  3. Upgrade the node and reboot it.
  4. Go back to step 1 for the next node.

Do not place the cluster in maintenance mode.
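
For the active/clean check in step 1, a quick way from the shell (standard Ceph commands, assuming console access to one of the nodes) is:

  # overall health; wait for HEALTH_OK before moving to the next node
  ceph health
  # full status; all placement groups should report active+clean
  ceph -s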

Hello Admin,

  • In case I need to upgrade my host (adding new disks) or remove a failed node for maintenance, what do I need to do?
  • If I really need to put the host in maintenance mode, what are the steps to do it?

Thanks

To add/remove disks: shut down the node, add/remove the physical disks, bring the node back up, then add/delete the OSDs from the UI. When deleting, do not delete disks unless the cluster is in an active/clean state, so that it still has enough copies of each object, and do not delete disks on more than one node at a time, for the same reason.
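
If you want to double-check from the shell before deleting anything, the standard Ceph commands below show the placement-group state and the OSD layout (assuming console access to a node):

  # placement-group summary; everything should be active+clean before removing a disk
  ceph pg stat
  # OSD tree per host; confirms that added/removed OSDs show up as expected
  ceph osd tree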

To add a node: join the node from the deployment wizard.

To delete a node (for nodes beyond the first 3, i.e. storage nodes): shut the node down, then delete it from the node list; this will also delete its OSDs. The node must be shut down first, as you cannot delete a running node. Make sure the cluster is in an active/clean state before deletion, and do not delete more than one node at a time.

For nodes 1-3 (management nodes): you cannot delete them from the cluster. If a management node experiences a non-repairable hardware failure, you replace it with a new node via "Replace Management Node" in the deployment wizard.

Maintenance mode effectively tells Ceph not to flag OSDs as down and/or not to start its built-in recovery. Many Ceph users switch to this mode when bringing a node down for a short period during maintenance; others leave Ceph in normal operating mode, even though it will start recovery for a short while before the node comes back up. We recommend the latter: it is safer, at the expense of incurring more I/O movement.
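
For reference, on plain Ceph this kind of maintenance mode is typically implemented by setting cluster flags before the work and clearing them afterwards. The commands below are standard Ceph CLI, shown only to illustrate what the mode does, not as a recommendation to set them by hand:

  # keep OSDs from being marked out (which would trigger recovery) while the node is down
  ceph osd set noout
  # optionally also pause recovery and rebalancing during the maintenance window
  ceph osd set norecover
  ceph osd set norebalance

  # ... perform the maintenance and bring the node back up ...

  # clear the flags to return the cluster to normal operation
  ceph osd unset norebalance
  ceph osd unset norecover
  ceph osd unset noout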

Thanks for the reply,

I will test this and let you know.