Upgrade 2.0 to 2.1
RafS
32 Posts
September 30, 2018, 8:52 am
Hi,
What is the procedure to upgrade? Do we need to place the cluster in maintenance mode?
Regards
Raf
admin
2,930 Posts
September 30, 2018, 10:41 am
Start from management node 1 and go through the nodes in sequence:
- Make sure the cluster is active/clean (a quick check sketch follows these steps)
- Move the paths off the node being upgraded; this is not essential, but it switches io immediately rather than waiting for the roughly 20s path failover
- Upgrade the node, then reboot it
- Go back to step 1 for the next node
Do not place the cluster in maintenance mode.
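For the active/clean check in step 1, here is a minimal sketch that queries Ceph directly and exits non-zero if any placement group is not active+clean. It uses the stock ceph CLI in JSON mode rather than anything PetaSAN-specific, so treat it as an illustration and adapt it to your setup:

#!/usr/bin/env python3
# Minimal sketch: check that every placement group is active+clean before upgrading a node.
# Assumes the standard 'ceph' CLI is installed on the node; this is not a PetaSAN-specific API.
import json
import subprocess
import sys

def all_pgs_active_clean():
    # 'ceph status --format json' is stock Ceph; the pgmap section lists PG counts per state.
    status = json.loads(subprocess.check_output(["ceph", "status", "--format", "json"]))
    states = status["pgmap"]["pgs_by_state"]
    return all(entry["state_name"] == "active+clean" for entry in states)

if __name__ == "__main__":
    if all_pgs_active_clean():
        print("Cluster is active/clean, safe to move to the next node.")
        sys.exit(0)
    print("Cluster is not active/clean yet, wait before upgrading the next node.")
    sys.exit(1)

Run it between node upgrades and only continue when it passes.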
msalem
87 Posts
October 1, 2018, 1:10 pm
Hello Admin
- In case I need to upgrade my host (adding new disks) or remove a failed node for maintenance, what do I need to do?
- If I really need to put the host in maintenance mode, what are the steps to do it?
Thanks
admin
2,930 Posts
October 1, 2018, 2:30 pm
To add/remove disks: shut down the node, add/remove the physical disks, bring the node back up, then add/remove the OSDs from the ui. In case of deletion, do not delete disks unless the cluster is in an active/clean state so that it has enough copies per object; for the same reason, do not delete disks on more than 1 node at a time.
To add a node: join the node from the deployment wizard.
To delete a node, for nodes beyond the first 3 (storage nodes): shut the node down, then delete it from the node list; this will also delete its OSDs. Nodes must be shut down, you cannot delete a running node. Make sure the cluster is in an active/clean state before deletion and do not delete more than 1 node at a time.
For nodes 1-3 (management nodes): you cannot delete them from the cluster. If a management node suffers a non-repairable hardware failure, you replace it with a new node via "Replace Management Node" in the deployment wizard.
Maintenance mode effectively tells Ceph not to flag OSDs as down and/or not to start its built-in recovery. Many Ceph users switch to this mode when bringing a node down for a short period during maintenance; others leave Ceph in normal operating mode even though it will start recovery for a short period before the node comes back up. We recommend the latter, it is safer at the expense of incurring more io movement.
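If you do put the cluster in maintenance mode for a short window, what it boils down to in plain Ceph is setting the cluster flags that stop OSDs being marked out and suppress recovery, then clearing them afterwards. Below is a hedged sketch of that idea using the standard ceph osd set/unset flags; the PetaSAN maintenance switch may manage a different combination, so take it as an illustration only:

#!/usr/bin/env python3
# Sketch: wrap a short maintenance window with standard Ceph cluster flags.
# 'noout' keeps OSDs from being marked out, 'norecover' and 'nobackfill' suppress recovery io.
# These are stock 'ceph osd set/unset' flags; the PetaSAN UI may set a different combination.
import subprocess
from contextlib import contextmanager

FLAGS = ["noout", "norecover", "nobackfill"]

@contextmanager
def maintenance_window():
    for flag in FLAGS:
        subprocess.check_call(["ceph", "osd", "set", flag])
    try:
        yield
    finally:
        # Always clear the flags so normal recovery behaviour resumes after the work is done.
        for flag in FLAGS:
            subprocess.check_call(["ceph", "osd", "unset", flag])

if __name__ == "__main__":
    with maintenance_window():
        input("Flags set. Do the hardware work, then press Enter to clear them...")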
Last edited on October 1, 2018, 2:31 pm by admin · #4
msalem
87 Posts
October 1, 2018, 3:09 pm
Quote from admin on October 1, 2018, 2:30 pm
To add/remove disks: shut down the node, add/remove the physical disks, bring the node back up, then add/remove the OSDs from the ui. In case of deletion, do not delete disks unless the cluster is in an active/clean state so that it has enough copies per object; for the same reason, do not delete disks on more than 1 node at a time.
To add a node: join the node from the deployment wizard.
To delete a node, for nodes beyond the first 3 (storage nodes): shut the node down, then delete it from the node list; this will also delete its OSDs. Nodes must be shut down, you cannot delete a running node. Make sure the cluster is in an active/clean state before deletion and do not delete more than 1 node at a time.
For nodes 1-3 (management nodes): you cannot delete them from the cluster. If a management node suffers a non-repairable hardware failure, you replace it with a new node via "Replace Management Node" in the deployment wizard.
Maintenance mode effectively tells Ceph not to flag OSDs as down and/or not to start its built-in recovery. Many Ceph users switch to this mode when bringing a node down for a short period during maintenance; others leave Ceph in normal operating mode even though it will start recovery for a short period before the node comes back up. We recommend the latter, it is safer at the expense of incurring more io movement.
Thanks for the reply,
I will test this and let you know.