Number of management nodes
Yipkaiwing
18 Posts
August 25, 2018, 1:31 am
Hi admin,
In some cases when 2 management nodes are down, the whole cluster stops working.
Can PetaSAN allow a setup with 5 management nodes?
Thanks
admin
2,930 Posts
August 25, 2018, 12:06 pm
In PetaSAN the management nodes are assigned a role to act as Ceph monitors, Ceph managers, and Consul servers, as well as other functions such as statistics gathering, sending email alarms, NTP servers, etc.
Generally, 2 nodes failing out of 3 is unlikely, and 3 nodes should be good for the vast majority of cases. If you are concerned, I would first recommend that you place the 3 nodes in different failure domains, i.e. different racks, rooms, etc.
If this is still not enough, you can easily use CLI commands to manually add Ceph monitors to 2 extra nodes. One of the design goals in PetaSAN is that you can use manual CLI commands outside the UI if you wish. You will also need to manually add Consul servers to those 2 nodes, since Consul is required for the iSCSI layer. This should be very easy to do.
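For readers looking for the kind of manual CLI steps this refers to, a rough sketch follows. This is not a PetaSAN-specific procedure, just the generic upstream Ceph and Consul commands; the hostname node4, the IP addresses, and the paths are placeholders, and PetaSAN's own service wrappers may differ.

```shell
# --- On the new node: add a Ceph monitor manually (generic Ceph procedure) ---
mkdir -p /var/lib/ceph/mon/ceph-node4          # mon data directory (placeholder name)
ceph auth get mon. -o /tmp/mon.keyring          # fetch the monitor keyring
ceph mon getmap -o /tmp/monmap                  # fetch the current monitor map
ceph-mon --mkfs -i node4 --monmap /tmp/monmap --keyring /tmp/mon.keyring
systemctl start ceph-mon@node4                  # start the new monitor daemon

# --- Run a Consul server on the same node, joining the existing servers ---
consul agent -server -bootstrap-expect=5 \
  -retry-join 10.0.1.11 -data-dir /opt/consul   # 10.0.1.11 = any existing server
```

The existing 3 Consul servers would also need their expected server count raised to 5 so quorum is computed over all 5 voters.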
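As background on why going from 3 to 5 helps: Ceph monitors and Consul servers both use majority quorum, so a cluster of n voters needs floor(n/2) + 1 alive and therefore tolerates n - (floor(n/2) + 1) failures. A quick arithmetic check:

```shell
# Majority quorum: n voters tolerate n - (n/2 + 1) failures (integer division)
for n in 3 5; do
  echo "$n nodes tolerate $((n - (n / 2 + 1))) failure(s)"
done
```

So 3 management nodes survive 1 failure, while 5 survive 2, which is exactly the scenario asked about above.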
Last edited on August 25, 2018, 12:15 pm by admin · #2
Yipkaiwing
18 Posts
August 25, 2018, 5:22 pm
Thanks for your reply.
yangsm
31 Posts
March 12, 2019, 3:04 am
Can you share the CLI commands for adding more than 3 management nodes? You mentioned earlier manually adding Ceph monitors to the two additional nodes using CLI commands, and also manually adding Consul servers to those nodes. In addition, does the Gluster cluster need to be extended as well?
Also, what is the default root password for SSH login after the PetaSAN operating system is installed and before the cluster is created?
the.only.chaos.lucifer
31 Posts
March 10, 2024, 7:16 pm
@Admin OK, I think I was able to add the 4th node, but the change is not reflected in the GUI, and I also have trouble updating the changes in the GUI. Is there a file that pulls data from Ceph into the PetaSAN GUI? It seems some of the data is not being pulled after I successfully added the 4th node.
Any guidance on adding a 4th (or further) management node with MON, MGR, and MDS is greatly appreciated.
Last edited on March 11, 2024, 8:33 am by the.only.chaos.lucifer · #5