Different network config per NODE
wailer
75 Posts
February 4, 2019, 6:47 pm
Right now, as far as I have seen, all nodes need to have exactly the same network hardware and config.
It would be really helpful if, for instance, you could use just the backend and management networks on nodes that are not going to use the iSCSI target role...
We are currently trying to deploy a 6-node cluster with 2 types of nodes:
3 x OSD nodes: 4 x 1 GbE (management + 2+2 bonded backend) + 2 x 10 GbE (iSCSI targets)
3 x MON nodes (virtual): 4 x 1 GbE interfaces (without iSCSI network)
Thanks for your work!
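Purely as an illustration of the request (a minimal Python sketch with made-up role and network names, not anything the product actually implements), a per-role network requirement table could let MON-only nodes skip the iSCSI subnets:

    # Hypothetical per-role network requirements; role and network names are illustrative.
    REQUIRED_NETWORKS = {
        "mon":   {"management", "backend"},
        "osd":   {"management", "backend"},
        "iscsi": {"management", "backend", "iscsi1", "iscsi2"},
    }

    def missing_networks(node_roles, node_networks):
        """Return the networks a node would still need for its assigned roles."""
        needed = set()
        for role in node_roles:
            needed |= REQUIRED_NETWORKS[role]
        return needed - set(node_networks)

    # A virtual MON node with only management + backend passes the check,
    # while an iSCSI target node without the iSCSI subnets gets flagged.
    print(missing_networks({"mon"}, {"management", "backend"}))           # set()
    print(missing_networks({"osd", "iscsi"}, {"management", "backend"}))  # iscsi1, iscsi2 missing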
yudorogov
31 Posts
February 8, 2019, 5:15 am
+100500, please add different network configs per node role (MON, OSD, iSCSI).
wailer
75 Posts
March 17, 2019, 11:13 pm
Also, I would love to see the ability to use more than 2 network cards for iSCSI traffic. This way you could get good performance from iSCSI MPIO with "cheap" 10 GbE network cards instead of moving to 40 GbE switching.
Shiori
86 Posts
June 6, 2019, 12:56 am
MON nodes must see the iSCSI network, even with the role disabled. They provide all of the command and control and cannot do that if they do not have access to that network.
You can already accomplish most of what you want with a little modification. Every node MUST have at least as many network connections as the management nodes.
E.g., I have management nodes with 4-port and 2-port 1 GbE NICs and a dual-port InfiniBand NIC. All my OSD nodes have varying numbers of 1 GbE NICs, but all have a dual-port InfiniBand NIC. These all work because the unused NICs are turned off and the backend and iSCSI networks are set to use the InfiniBand NICs.
If you do not have enough Ethernet ports, you can add some fake NICs before joining the cluster; just remember to bond them to real NICs to ensure you don't blackhole data connections. (I had a test cluster with very different servers for the nodes and it was not practical to add enough GbE ports, so I just faked them and pushed everything out the available ports. This worked fine with very little modification.)
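For anyone curious what that workaround looks like in practice, here is a rough sketch (Python driving standard iproute2 commands; it assumes root access, the dummy kernel module, and an existing bond0 that already has a real slave; interface names are placeholders and this is not an official procedure):

    import subprocess

    def run(cmd):
        """Echo and run one iproute2 command so each step is visible."""
        print("+", " ".join(cmd))
        subprocess.run(cmd, check=True)

    # Create a dummy ("fake") NIC so this node presents as many interfaces as the
    # management nodes, then enslave it to an existing bond that also contains a
    # real NIC, so data is never blackholed out the dummy alone.
    run(["ip", "link", "add", "dummy0", "type", "dummy"])
    run(["ip", "link", "set", "dummy0", "master", "bond0"])
    run(["ip", "link", "set", "dummy0", "up"])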
MPIO works by splitting iSCSI targets across nodes, not NICs. There are ways to make a single node do MPIO, but that's a hack that means a single box carries both of your iSCSI connections. If you need more iSCSI bandwidth, you can bond more NICs into your iSCSI network or just add more nodes with the iSCSI role (not quite the same, but the more nodes performing iSCSI hosting, the less load and bandwidth you need per node).
wailer
75 Posts
July 11, 2019, 10:47 am
Thanks for your hints, Shiori.
I still believe it would be nice not to need these kinds of hacks, like creating fake NICs and bonding them, to get nodes with different network hardware running in the same cluster...
Quote from Shiori: "MPIO works by splitting iSCSI targets across nodes, not NICs. There are ways to make a single node do MPIO, but that's a hack that means a single box carries both of your iSCSI connections. If you need more iSCSI bandwidth, you can bond more NICs into your iSCSI network or just add more nodes with the iSCSI role (not quite the same, but the more nodes performing iSCSI hosting, the less load and bandwidth you need per node)."
About this one, I am not sure I understood it correctly... We have 3 nodes with 2 NICs for iSCSI on each one, and we see iSCSI traffic on all of their NICs; we also see these paths active in vSphere. So it does look like you can do MPIO on a single node out of the box... As long as your cluster backend network can handle all this bandwidth, I believe this feature is still interesting for many setups...
Thanks!
admin
2,921 Posts
July 12, 2019, 11:50 am
We will try to support different NIC configurations per role in version 2.4.
We support active/active MPIO to different nodes, which gives both HA and load balancing; not too many SANs do this. We can also support MPIO to the same node: you can move the paths of a disk to a single node from the path assignment page, but by default, when you add a disk, we try to spread its paths across many nodes. Another thing to note: you have 2 subnets/NICs in MPIO, but we can have up to 8 paths per disk, so 4 paths on the same subnet/NIC. Even if you assign them to the same node, each path will be served by a separate CPU core, even when they are on the same subnet/NIC. However, increasing the number of paths beyond 8 could be problematic on the iSCSI client side; most clients are not tuned, performance-wise, to handle that many active/active paths.
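As a toy illustration only (not the actual assignment logic, just the round-robin spread described above, in Python with hypothetical node and disk names):

    from itertools import cycle

    def assign_paths(disk, nodes, subnets=("iscsi1", "iscsi2"), paths_per_disk=4):
        """Toy round-robin spread of one disk's paths across nodes and the 2 subnets."""
        node_cycle, subnet_cycle = cycle(nodes), cycle(subnets)
        return [
            {"disk": disk, "path": i, "node": next(node_cycle), "subnet": next(subnet_cycle)}
            for i in range(1, paths_per_disk + 1)
        ]

    # With 3 iSCSI nodes and 4 paths, the paths land on different node/subnet pairs;
    # moving them all to one node (as the path assignment page allows) is simply
    # nodes=["node1"], which still yields multiple paths on the same subnet/NIC.
    for p in assign_paths("disk-00001", ["node1", "node2", "node3"]):
        print(p)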
Last edited on July 12, 2019, 12:00 pm by admin