Network redundancy
reto
6 Posts
February 16, 2018, 12:45 pm
Hi there,
first, it's a really nice project - thanks for your hard work 🙂
We plan to use two switches for redundancy reasons in our PetaSAN concept.
Our first plan was to connect every node to every switch, create an MLAG (bond), and use VLANs for the different networks (iscsi1/2, backend1/2, management).
But we found out that VLANs are currently not supported.
Therefore our plan is now the following:
- Create two VLANs
a) Management / iSCSI 1 / Backend 1
b) iSCSI 2 / Backend 2
- Connect each node (every node has only a 2-port 10 Gbit card) to switch 1 and switch 2 with one cable per switch (switchports in access mode)
- Map the iSCSI LUNs at the hypervisor level using IPs from the iSCSI 1/2 networks
Our question:
If one switch fails (for example switch 2), the LUNs will still be accessible over the iSCSI 1 network, but what happens if the Backend 2 network is no longer available? Can PetaSAN work with just the Backend 1 network?
Please have a look at our diagram: https://d.pr/FREE/cXLeZV
Many thanks!
Best regards
Reto
admin
2,930 Posts
February 16, 2018, 2:17 pm
Hi Reto,
Thanks very much. As you stated, we currently do not support VLAN/trunking (probably we should soon). You can still put the different subnets on a single switch and on a single interface; the drawback of not segregating them into different VLANs is that they all share the same broadcast domain, so any broadcast on one subnet will be received by all. In most cases this should not be an issue, unless perhaps you have external clients talking directly on the iSCSI subnets.
In iSCSI, network redundancy between the client initiator and the target is achieved with 2 subnets and MPIO, hence the iSCSI 1 and 2 subnets. For backend networks, or generally any kind of server network you wish to make redundant, you would bond interfaces and connect them to different switches.
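For illustration, on a Linux initiator the two paths would be established roughly like this (the target IQN and portal IPs below are placeholders, not values from this thread):

    # discover and log in to the same target over both iSCSI subnets
    iscsiadm -m discovery -t sendtargets -p 10.0.2.100
    iscsiadm -m node -T iqn.2018-02.example:disk-0001 -p 10.0.2.100 --login
    iscsiadm -m node -T iqn.2018-02.example:disk-0001 -p 10.0.3.100 --login
    # multipathd should now show two paths to the same LUN;
    # if one switch dies, I/O continues over the surviving path
    multipath -ll

If one of the iSCSI subnets goes away, MPIO fails I/O over to the remaining path, which is exactly the switch-failure scenario you describe.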
So in your case, since you only have 2 interfaces and need redundancy, I would create 1 bond from the 2 NICs, connect each NIC to a different switch, and map all subnets on this single bond. Disable VLAN/trunking on your switches and disable VLAN/trunking on your hypervisor interfaces that talk to PetaSAN.
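As a rough sketch only (PetaSAN generates the actual configuration from the cluster network definition; the interface names and addresses here are examples), such a bond on a Debian-style system would look like:

    # /etc/network/interfaces (illustrative only)
    auto bond0
    iface bond0 inet static
        address 10.0.1.10
        netmask 255.255.255.0
        bond-slaves eth0 eth1
        # active-backup works with two independent switches;
        # LACP (802.3ad) would require the switches to support MLAG
        bond-mode active-backup
        bond-miimon 100
        bond-primary eth0

Note that with two plain switches (no MLAG between them), active-backup is the safe bond mode; an LACP bond spanning independent switches will not work.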
Hatem
reto
6 Posts
February 16, 2018, 3:25 pm
Hi Hatem
Many thanks for your fast and clear answer!
In that case, we will put everything together in one network until VLAN support arrives 🙂
Best regards
Reto
yudorogov
31 Posts
February 17, 2018, 1:26 pm
Hi! PetaSAN is a great product! I recommend using different networks (ideally different switches for the cluster and production networks). We use 2 NVRAM disks and 36 spinning HDDs in each storage node (over 1.5 PB raw overall), and 2 Juniper EX4550 switches for the cluster and storage networks (with iSCSI tuning in the switch configuration). We get ~9,000 IOPS for write and ~30,000 IOPS for read (write throughput is 2,500 MByte/s thanks to the NVRAM disks, 800 MByte/s for read; measured with the internal PetaSAN benchmark). We use LACP for Backend 1 & Backend 2, and place the management network on a dedicated 1 Gbps port in each server.
reto
6 Posts
February 20, 2018, 1:13 pm
Hi there
@yudorogov:
Thanks for sharing the interesting information!
@admin:
I just noticed that during the setup process there is no option to create a bond.
Is that correct? Is it only available later in the web interface?
Best regards
Reto
admin
2,930 Posts
February 20, 2018, 3:15 pm
The network definition for the entire cluster (which includes the bonding configuration) is done only once, during cluster creation, which happens if you choose "Create New Cluster" rather than "Join Existing Cluster". This definition includes the interface count, jumbo frames, NIC bonding, the subnets' base IPs, and the mapping of subnets to NICs/bonds.
So it is done once; any new node joining the cluster will have this configuration applied to it. You do not enter it for every node. This avoids configuration mistakes and keeps the setup consistent across nodes.
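Purely to illustrate what that one-time definition covers (this is a made-up summary, not the literal wizard fields):

    # entered once via "Create New Cluster", then inherited by joining nodes
    interfaces:   eth0, eth1         (jumbo frames on/off per interface)
    bond:         bond0 = eth0 + eth1
    management:   10.0.1.0/24  -> bond0
    iscsi 1:      10.0.2.0/24  -> bond0
    iscsi 2:      10.0.3.0/24  -> bond0
    backend 1:    10.0.4.0/24  -> bond0
    backend 2:    10.0.5.0/24  -> bond0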
reto
6 Posts
February 20, 2018, 3:25 pm
Hi admin,
thanks for your fast reply 🙂
OK, then we need a separate management network (NIC), because we cannot reach the cluster configuration web page if no bond is configured during the initial setup.
Best regards
Reto
dragonred
1 Post
November 7, 2018, 3:39 am
Quote from yudorogov on February 17, 2018, 1:26 pm:
Hi! PetaSAN is a great product! I recommend using different networks (ideally different switches for the cluster and production networks). We use 2 NVRAM disks and 36 spinning HDDs in each storage node (over 1.5 PB raw overall), and 2 Juniper EX4550 switches for the cluster and storage networks (with iSCSI tuning in the switch configuration). We get ~9,000 IOPS for write and ~30,000 IOPS for read (write throughput is 2,500 MByte/s thanks to the NVRAM disks, 800 MByte/s for read; measured with the internal PetaSAN benchmark). We use LACP for Backend 1 & Backend 2, and place the management network on a dedicated 1 Gbps port in each server.
Hi yudorogov, could you share the iSCSI tuning parameters of your switch configuration with me? Thanks, yudorogov!