interface layout on production cluster
icecoke
10 Posts
December 23, 2019, 3:12 pm
Hi,
is it a performance problem (and is it possible at all) to put the management network on the same physical NIC as the first iSCSI NIC?
Each node has two 10 Gbit/s interfaces for the iSCSI subnets and two 100 Gbit/s (Mellanox optical) interfaces for the backend subnets. The backend runs without a gateway on two separate switches.
The cluster will be used for a few thousand VMs with typical mixed (web, mail, database) traffic.
Many thanks in advance for your input!
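To illustrate, the per-node layout we have in mind would be roughly the following (interface names and addresses are examples only, not our real config):

# shared 10 Gbit/s NIC: management + first iSCSI subnet
ip addr add 192.168.10.11/24 dev eth0     # management (example address)
ip addr add 10.1.1.11/24 dev eth0         # iSCSI subnet 1
ip addr add 10.1.2.11/24 dev eth1         # iSCSI subnet 2 (second 10 Gbit/s NIC)
ip addr add 10.2.1.11/24 dev enp5s0f0     # backend 1 (100 Gbit/s, no gateway)
ip addr add 10.2.2.11/24 dev enp5s0f1     # backend 2 (100 Gbit/s, no gateway)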
admin
2,930 Posts
December 23, 2019, 5:05 pm
Not a problem.
It is better to separate them into VLANs to isolate the broadcast domains. You also need to consider whether your management subnet needs external access, for example for online upgrades and NTP time sync, while the storage network stays isolated.
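As a minimal sketch, splitting the shared NIC into tagged subinterfaces could look like this (VLAN IDs and addresses are examples; the switch port must trunk the same IDs):

# tagged subinterfaces on the shared NIC (hypothetical VLAN IDs)
ip link add link eth0 name eth0.10 type vlan id 10   # management VLAN
ip link add link eth0 name eth0.20 type vlan id 20   # iSCSI VLAN
ip addr add 192.168.10.11/24 dev eth0.10
ip addr add 10.1.1.11/24 dev eth0.20
ip link set eth0.10 up
ip link set eth0.20 up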
icecoke
10 Posts
December 23, 2019, 7:47 pm
Thank you for your input.
The iSCSI/storage network (10.0.0.0/8) is behind a NAT router/firewall, so the management traffic would be able to reach the outside for updates.
Is the amount of traffic on the management subnet high?
At the moment we are unable to put an additional network card into the nodes or to use VLANs.
That's why we wonder whether it can be used that way in production, or whether we would run into foreseeable problems, even though the storage traffic would stay well under the 10 Gbit/s capacity.
If you say separating the management traffic is a must, we will do it. Do we have to?
admin
2,930 Posts
December 23, 2019, 9:05 pm
No problem, there is very little traffic on the management network.
icecoke
10 Posts
December 23, 2019, 9:35 pm
Great to hear this!
Many thanks for your great product and support!
Merry Xmas to you and your family!!!
admin
2,930 Posts
December 24, 2019, 12:51 pm
Thanks! Same to you 🙂
Shiori
86 Posts
January 28, 2020, 6:48 pm
When we tested PetaSAN 1.6 we had all networks on a single NIC. This worked OK until we got five nodes on, and then the backend networks drowned out the management network completely. Most of the traffic was iSCSI traffic, with node-to-node traffic a close second (most likely due to the OSD data that was local to the iSCSI node).
You could bond your 10 Gbit/s ports and then VLAN off the management network, but that may not be what you want.
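A rough sketch of that idea with iproute2 (bond mode, interface names, and VLAN ID are examples; the switch needs a matching LACP/trunk config):

# bond the two 10 Gbit/s ports (LACP chosen as an example mode)
ip link add bond0 type bond mode 802.3ad
ip link set eth0 down
ip link set eth0 master bond0
ip link set eth1 down
ip link set eth1 master bond0
ip link set bond0 up
# keep iSCSI on the bond and move management to a tagged VLAN
ip link add link bond0 name bond0.10 type vlan id 10
ip addr add 192.168.10.11/24 dev bond0.10
ip link set bond0.10 up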
One of the backend networks is used for iSCSI traffic, the other for node-to-node traffic. Management and other services are usually on another network.
We use Mellanox ConnectX-2 and ConnectX-3 cards for our backend networks and a built-in 1 Gbit/s connection for management. If we need more management bandwidth in the future, we can also bond Ethernet ports to gain data stream separation. We haven't run into any bandwidth issues this way. Some of our clients use our SAN for boot-from-SAN LUNs, which means that if we ran your proposed setup, clients would have access to our NOC network with very little effort, which isn't desirable at the best of times.