Change monitor hosts
Ste
125 Posts
September 17, 2020, 9:28 am
Hello, I have a couple of questions regarding how to choose the right configuration for monitor nodes.
- The Ceph documentation recommends, for production, running monitors on dedicated servers. Do you confirm it is that critical not to have monitor and storage on the same node in PetaSAN as well? What could the issues be?
- Once I have a minimal 3-node cluster up and running, is it possible at a later time to add 3 new nodes only for monitoring and move the monitoring role off the 3 original nodes, leaving them with only the storage role?
I hope I've been clear enough. Bye.
admin
2,930 Posts
September 17, 2020, 11:16 am
- It is better to have specific roles for each node, but it is not critical. If you intend to build a large cluster with many nodes, it is better.
- You can, but you would need to create the Ceph monitors yourself manually. Note that the PetaSAN management role does more than just Ceph monitors; it also includes Consul, the Gluster shared filesystem, statistics, and other services.
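For reference, the standard Ceph procedure for adding a monitor manually looks roughly like the sketch below. This is run on the new node; the monitor ID "node4" and the data directory layout are illustrative assumptions, and PetaSAN's own layout or service names may differ:

```shell
# Illustrative monitor ID for the new node (assumption, not from PetaSAN).
MON_ID=node4
MON_DIR=/var/lib/ceph/mon/ceph-${MON_ID}

mkdir -p "${MON_DIR}"

# Fetch the current monitor map and the mon. keyring from the running cluster.
ceph mon getmap -o /tmp/monmap
ceph auth get mon. -o /tmp/mon.keyring

# Initialise the new monitor's data directory from the map and keyring.
ceph-mon --mkfs -i "${MON_ID}" --monmap /tmp/monmap --keyring /tmp/mon.keyring

# Start the daemon; it should join the existing quorum.
systemctl start ceph-mon@${MON_ID}

# Confirm the new monitor is in quorum.
ceph quorum_status --format json-pretty
```

These are the generic upstream Ceph commands for manual monitor deployment; on a PetaSAN node you would want to check how its services are managed before relying on them.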
Ste
125 Posts
September 17, 2020, 1:33 pm
Quote from admin on September 17, 2020, 11:16 am
You can, you would need to create the ceph monitors yourself manually.
So, let's say the procedure would be:
- install PetaSAN on the new 4th node
- add it to the cluster as a management-only node
- once I have the 4th management node, I should be able to unflag the "Management and Monitoring Services" role on node #1 and leave the others flagged (I can't do it now, I guess, because I'm at the minimum number of management nodes: 3)
Is this correct? Thanks, S.
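If a migration along these lines were carried out, the resulting monitor membership could be checked afterwards with the standard Ceph status commands (output depends entirely on the actual cluster):

```shell
# Summary of monitors and current quorum.
ceph mon stat

# Full monitor map, including addresses of each mon.
ceph mon dump

# Overall cluster health, which also reports mon quorum problems.
ceph -s
```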
Last edited on September 17, 2020, 1:35 pm by Ste · #3
admin
2,930 Posts
September 17, 2020, 2:08 pm
No, in PetaSAN the first 3 nodes are the management nodes; they are required to build the cluster. I understood your earlier post to be about the Ceph monitors, which you can add to any node yourself via the ceph commands.
You could replace a management node with new hardware, but it must keep the same hostname and IPs. You could remove other roles, and you can physically move the OSDs to a new node, but mind you, this will cause most of the data on those OSDs to be rebalanced.
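Conversely, a monitor that was added via the ceph commands can be removed again the same way, and the rebalancing triggered by moving OSDs can be watched with the status commands. The monitor ID "node1" below is illustrative:

```shell
# Stop the monitor daemon on the node being demoted.
systemctl stop ceph-mon@node1

# Remove it from the monitor map so the quorum shrinks cleanly.
ceph mon remove node1

# After physically moving OSDs, watch recovery/rebalance progress.
ceph -s
ceph osd tree
```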
Last edited on September 17, 2020, 2:13 pm by admin · #4