Node interfaces do not match cluster interface settings.
6gvr7766
15 Posts
February 15, 2017, 9:30 am
The first node is a VM on VMware; the second node is a physical machine. The second node cannot join the cluster.
(Alert! Node interfaces do not match cluster interface settings.)
The cluster interface settings are those of the first node. Is such a judgment reasonable?
mmokhtar
11 Posts
February 15, 2017, 2:11 pm
This means the number of network interfaces on the node trying to join does not match the cluster setup.
In PetaSAN all nodes must have the same number of interfaces. The number of interfaces and their mapping to subnets is defined during cluster creation on the first node. Please look at the quick start guide; it goes through all of this and is easy to follow.
I also recommend you deploy either on physical machines or on virtual machines for testing, but do not mix the two.
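For reference, a quick way to confirm the interface count on a node before trying to join it. This is a minimal sketch using standard Linux commands, not a PetaSAN tool, and it assumes the usual ethX naming:

    # Count the ethX interfaces this node exposes; the count must match
    # what was defined when the cluster was created on the first node.
    ip -o link show | awk -F': ' '{print $2}' | grep -c '^eth'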
6gvr7766
15 Posts
February 16, 2017, 2:32 am
But different physical servers have different numbers of network interfaces. My second node (4 interfaces) cannot join the first node (6 interfaces, of which only two are used). I have three different servers but cannot build a cluster. Shouldn't this setting be more flexible?
admin
2,930 Posts
February 17, 2017, 12:01 pm
In the vast majority of cases you will use the same hardware configuration to build the cluster; only in rare cases will you need to mix different types of servers.
You are right that defining the network interfaces and their subnet mappings per individual server is more flexible, but it is also more complex for the user to set up, and the potential for configuration error is higher. In PetaSAN we try to make the system super easy to set up quickly out of the box, and this comes at the cost of offering the user fewer options. Ceph has a huge number of options exposed to the user; they give a lot of flexibility but also complexity, and we trade off some of this to keep the system easy.
Last edited on February 17, 2017, 12:02 pm · #4
thajdusl
1 Post
August 31, 2017, 9:25 am
Hi,
I have the same error message, strangely with physical servers and the same interface counts. With the cluster up with 3 members, I am not able to add the fourth. It has the same number of interfaces, but I get this error. (Truth be told, the cluster runs as active-backup with 8 interfaces; I have no idea why this happens.)
A bit later: OK, I probably found the reason. I had been silly and assigned the primary interface of the management node to a different ethX at the initial setup phase. It seems it must be done this way: you install everything using the default interface naming (the cluster's interfaces might be renamed later, and you might have separate switches linked to ports, etc.), so regardless of the fact that each of your nodes uses a different physical port for eth0 under boot-time name binding, when adding a node you always use eth0. Afterwards you have to rename.
This poses a problem: if you have the usual HPC multi-switch separated environment and your management node is not on the same core switch set, then you will temporarily have to link the eth0 physical port to the cluster core switch...
So it is solved now.
Last edited on August 31, 2017, 9:42 am by thajdusl · #5
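For anyone hitting the same naming mismatch: a quick way to check which physical port each ethX name is bound to is to list every interface with its MAC address on each node and compare the ordering. A minimal sketch using standard Linux commands (not PetaSAN-specific):

    # Show each interface with its state and MAC address, so the
    # eth0..ethN ordering can be compared across nodes.
    ip -br link show
    # Or query a single interface, e.g. eth0:
    cat /sys/class/net/eth0/address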
admin
2,930 Posts
August 31, 2017, 10:21 am
Happy you solved it 🙂
Just to emphasize: the reason we require all nodes to have the same number of NICs, and force the same mapping of NICs to subnets, is to make setup simpler, since this is defined once at cluster creation time and not per node. It also makes it much simpler to fix any errors during real operation, since there is no individual configuration per node to worry about. The only complexity is that if you have different hardware, the NIC ordering may differ and may require manual renaming, but this would need to be done in any case and is not specific to PetaSAN. Again, if you use the same hardware, the order of the NICs should be the same. We chose to put the re-ordering of NICs in the node console menu rather than in the installer so it can also be done if you replace NIC hardware later during operation.
erazmus
40 Posts
September 1, 2017, 5:41 pm
Quote from admin on August 31, 2017, 10:21 am
The only complexity is that if you have different hardware, the NIC ordering may differ and may require manual renaming, but this would need to be done in any case and is not specific to PetaSAN.
After installing, I open a shell and run 'ethtool -p eth0 60' (for each interface), which blinks the light on the NIC. I then move cables to match what PetaSAN is expecting. I don't bother renaming interfaces because it would just get mixed up in the future if I ever need to re-install the node.
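The same trick as a small loop, blinking one port at a time. The interface names and count are illustrative; 'ethtool -p' (--identify) is the standard option for blinking a port LED for a given number of seconds:

    # Blink each NIC's port LED for 60 seconds in turn, so the cables
    # can be moved to match the ordering PetaSAN expects.
    for i in 0 1 2 3; do
        echo "identifying eth$i"
        ethtool -p "eth$i" 60
    done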
RafS
32 Posts
September 13, 2017, 2:33 pm
I think you should be able to have different setups. The OSD nodes would have 2 x 10 Gb networks for replication and cluster traffic. The iSCSI gateway nodes would have 4 x 10 Gb networks: 2 for access to the OSD nodes and 2 for iSCSI to the clients. At the moment this is not possible. Also, if in one or two years I need to extend or upgrade the system, it would be nice to be able to add some additional network cards, i.e. start small (one 10 Gb link for all traffic) and upgrade when there is a need for it.
admin
2,930 Posts
September 13, 2017, 3:38 pm
I agree. Currently it is somewhat simplified to allow adding/changing node roles, which are combined by default. We should probably support 2 configurations as you suggest, but still avoid having an individual configuration per node.
As for upgrading the NICs of the cluster: though not supported via the UI, the configuration files where we store these definitions are quite simple and can be modified manually. We should probably document them, or at least their location, since they are easily readable. I tend to think automating something like this via the UI will not be easy and may require downtime.
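Purely as an illustration of what such a per-cluster mapping might contain — the actual file name, location, and keys are not documented in this thread, so treat everything below as hypothetical:

    {
        "eth_count": 4,
        "management_eth_name": "eth0",
        "backend_1_eth_name": "eth1",
        "iscsi_1_eth_name": "eth2",
        "iscsi_2_eth_name": "eth3"
    }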
erazmus
40 Posts
September 13, 2017, 4:01 pm
Talking of re-configuring manually...
I have a test cluster where I overlaid the two iSCSI interfaces on eth2. All of the hardware has an unused eth3, and I would now like to move the second iSCSI interface to eth3. Is this possible via manual editing of the config files? Downtime is okay; I'd just like to save the time of re-installing 10 nodes.