Servers with Two NICs.
msalem
87 Posts
May 22, 2018, 6:31 pm
Hey Support,
Hope everything is well. I have a few servers with 2x10GB NICs.
What is the best setup for these servers in terms of networking?
1 - Create VLANs with 5 networks:
NIC1 = management + iSCSI1 + backend1 (all on virtual NICs)
NIC2 = iSCSI2 + backend2 (all on virtual NICs)
2 - Create 3 VLANs with bonded NICs:
bond0 = management + backend + iSCSI
Thanks
admin
2,930 Posts
May 22, 2018, 7:43 pm
Probably creating a bond from the 2 NICs and mapping all 5 subnets on it.
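For illustration only, a minimal sketch of such a bond using the ip command; the interface names, bond mode, and addresses below are assumptions, not a tested PetaSAN recipe:

# Create an LACP bond from the two 10G NICs (names and mode assumed).
ip link add bond0 type bond mode 802.3ad miimon 100
ip link set eth0 down
ip link set eth0 master bond0
ip link set eth1 down
ip link set eth1 master bond0
ip link set bond0 up
# Map the five subnets onto the bond (example addresses only).
ip addr add 10.0.1.10/24 dev bond0   # management
ip addr add 10.0.2.10/24 dev bond0   # iSCSI 1
ip addr add 10.0.3.10/24 dev bond0   # iSCSI 2
ip addr add 10.0.4.10/24 dev bond0   # backend 1
ip addr add 10.0.5.10/24 dev bond0   # backend 2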
msalem
87 Posts
May 24, 2018, 6:58 pm
I have set up bonding on the server, but it seems the software is getting confused about which network to take.
I used the shell to set up all 5 VLANs, however none are accessible. Do you have any documentation or steps on how to set up the bonding?
Thanks
admin
2,930 Posts
May 24, 2018, 10:13 pm
Hi,
My apologies, but my suggestion to map all subnets onto 1 bond may be incorrect. It is not a case that we have tested, and it is possible the iSCSI MPIO will have issues if mapped to a single bond. We will schedule a test for this within the coming days.
I checked the docs; they state a minimum of 2 NICs but do not specify whether this applies to bonding. I will update this once we test it.
The bonding setup during cluster creation is pretty straightforward by itself. We have a deployment guide in the making which covers this, but it is not done yet; the UI is very easy to figure out, though.
msalem
87 Posts
May 25, 2018, 5:33 pm
Hey Admin,
Thanks for the reply. I have tried to edit /etc/network/interfaces, but nothing is showing. I tried bonding and even tried adding VLANs to the interface without bonding; it seems not to add any value to the network.
I think one downfall is the OS: Ubuntu is notorious for confusing networking; developers like it, but other admins hate it (a very flaky OS).
Even ifdown and ifup do not change anything on the server; I need to reboot to see changes.
Any help will be greatly appreciated.
admin
2,930 Posts
May 25, 2018, 6:19 pm
I believe the issue is MPIO getting confused when using a single NIC/bond; we have never tested PetaSAN with a single NIC, so it is not supported. I do not think it is OS related: we do not rely on /etc/network/interfaces and ifconfig. Instead we use the ip command to dynamically configure the network on startup, based on /opt/petasan/config/cluster_info and /opt/petasan/config/node_info, as well as to handle iSCSI virtual IP assignments. Maybe the issue is in our code getting confused by a single NIC, but I still think MPIO is the issue.
Re Ubuntu: most Ceph and OpenStack installations run on it, so it is pretty stable.
So for now I recommend you start with at least 4 NICs if you wish to use bonding: create 2 bonds and map the 5 subnets onto the 2 bonds, as recommended for 2 NICs.
Again, we will look into it, at least to confirm whether it is an MPIO issue.
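As a rough illustration of that last point (this is not PetaSAN's actual code; the interface and address are made up), assigning and releasing an iSCSI virtual IP with the ip command looks like this:

ip addr add 10.0.2.100/24 dev eth0   # claim a path's virtual IP on this node
ip addr del 10.0.2.100/24 dev eth0   # release it when the path moves to another node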
Last edited on May 25, 2018, 6:23 pm by admin · #6
msalem
87 Posts
May 25, 2018, 6:49 pm
Thanks for the reply.
With that setup it would be hard for us to implement the solution, so here is what we will have: we are getting a new 1GB NIC to host the management network.
1 - 1x1GB NIC = management
2 - First 10GB = iSCSI1 + backend1 (VLAN setup)
3 - Second 10GB = iSCSI2 + backend2 (VLAN setup)
Please let us know if this can be achieved with your solution.
Thanks
admin
2,930 Posts
May 25, 2018, 7:15 pm
Yes, this will work. You could even put your management network on the first 10G NIC and need only 2 NICs in total. Note I assume that by VLAN you mean subnet, with no VLAN tagging.
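A minimal sketch of that 2-NIC layout with the ip command; the interface names and addresses below are placeholders:

# First 10G NIC: management + iSCSI 1 + backend 1.
ip addr add 10.0.1.10/24 dev eth0   # management
ip addr add 10.0.2.10/24 dev eth0   # iSCSI 1
ip addr add 10.0.4.10/24 dev eth0   # backend 1
# Second 10G NIC: iSCSI 2 + backend 2.
ip addr add 10.0.3.10/24 dev eth1   # iSCSI 2
ip addr add 10.0.5.10/24 dev eth1   # backend 2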
msalem
87 Posts
May 25, 2018, 7:23 pm
I meant VLAN tagging; if I leave the network untagged in the config, I cannot see the node.
E.g., I have installed PetaSAN on a host and put the management IP on the first 10GB NIC, and the server cannot be reached from the network.
We have other servers on the same OS (Ubuntu 16) where we needed to tag the VLAN in the configuration to be able to see the server.
Example from another server:
# This file describes the network interfaces available on your system
# and how to activate them. For more information, see interfaces(5).
source /etc/network/interfaces.d/*

# The loopback network interface
auto lo
iface lo inet loopback

auto enp3s0f0
iface enp3s0f0 inet manual
    bond-master bond0

auto enp3s0f1
iface enp3s0f1 inet manual
    bond-master bond0

auto bond0
iface bond0 inet manual
    bond-mode 4
    bond-miimon 100
    bond-lacp-rate fast
    bond-slaves none

auto bond0.550
iface bond0.550 inet static
    address 10.228.32.102
    netmask 255.255.248.0
    post-up ip r add default via 10.228.32.1
    vlan-raw-device bond0

auto bond0.565
iface bond0.565 inet static
    address 10.228.56.102
    netmask 255.255.248.0
    vlan-raw-device bond0
So what we are trying to do is the following:
# This file describes the network interfaces available on your system
# and how to activate them. For more information, see interfaces(5).
source /etc/network/interfaces.d/*

# The loopback network interface
auto lo
iface lo inet loopback

auto eth0
iface eth0 inet manual

auto eth1
iface eth1 inet manual

auto eth0.550
iface eth0.550 inet static
    address 10.228.32.102
    netmask 255.255.248.0
    post-up ip r add default via 10.228.32.1
    vlan-raw-device eth0

auto eth0.565
iface eth0.565 inet static
    address 10.228.56.102
    netmask 255.255.248.0
    vlan-raw-device eth0

auto eth1.572
iface eth1.572 inet static
    address 10.228.72.102
    netmask 255.255.248.0
    vlan-raw-device eth1

auto eth1.580
iface eth1.580 inet static
    address 10.228.80.102
    netmask 255.255.248.0
    vlan-raw-device eth1
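For reference, the runtime equivalent of the first tagged interface above with the ip command (which PetaSAN uses instead of /etc/network/interfaces) would be something like the following; it is untested with PetaSAN, so treat it as a sketch:

ip link add link eth0 name eth0.550 type vlan id 550
ip addr add 10.228.32.102/21 dev eth0.550   # 255.255.248.0 = /21
ip link set eth0.550 up
ip route add default via 10.228.32.1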
admin
2,930 Posts
May 25, 2018, 7:39 pm
VLAN tagging is not currently supported via the PetaSAN UI. Did you actually set it up manually, or are you still trying to?
Last edited on May 25, 2018, 7:39 pm by admin · #10