Dedicated iSCSI Node Networking
natnick
4 Posts
September 11, 2019, 12:54 am
During the cluster build I identify the network interfaces for the cluster. The cluster will have 3 dedicated iSCSI/MNG nodes and 5 storage nodes (12 OSDs + NVMe Bluestore each), with the option to expand to 10 storage nodes. I wanted to know if I could configure the cluster as follows.
iSCSI/MNG Nodes
- eth0+eth1 (1G Bonded) - MNG
- eth2 (10G) - Backend1
- eth3 (10G) - Backend2
- eth4 (10G) - iSCSI1
- eth5 (10G) - iSCSI2
Storage Nodes
- eth0+eth1 (1G Bonded) - MNG
- eth2 (10G) - Backend1
- eth3 (10G) - Backend2
- eth4+eth5 (10G) - (Not Present)
As I understand it, the storage nodes, without iSCSI, would use the Backend (Ceph public) network to communicate with the iSCSI/MNG nodes, so there would be no purpose in having iSCSI network interfaces on the storage nodes. Is this thinking correct? Do you see any issues with setting it up like this?
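Illustrative sketch only, with hypothetical addresses: the layout assumes a storage node only needs to reach the iSCSI/MNG nodes over the two backend subnets, which can be confirmed from a storage node like this.
# From a storage node, check reachability to an iSCSI/MNG node over each
# backend interface; the subnets/addresses below are hypothetical examples.
ping -c 3 -I eth2 10.0.1.11   # hypothetical Backend1 address of an iSCSI/MNG node
ping -c 3 -I eth3 10.0.2.11   # hypothetical Backend2 address of an iSCSI/MNG node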
Last edited on September 11, 2019, 9:02 am by natnick · #1
admin
2,930 Posts
September 11, 2019, 7:46 am
I understand what you want to do, but it is not supported out of the box. Currently we require all nodes to have the same interface count so we can configure all nodes the same way and allow roles to be added to nodes dynamically. Support for different pre-defined configurations and roles will most probably be added in the future. So in your case you would need to physically have eth4/eth5 on all nodes or remove them from all nodes; the latter case would map more than one subnet/VLAN per NIC.
Due to the above, we enforce an interface count check during deployment when a node joins the cluster. You can probably bypass this check: after installation and before you deploy the node, open a shell via the blue node console screen (or SSH if you have already set a password) and add dummy interfaces via:
ip link add eth4 type dummy
ip link add eth5 type dummy
then join the node. Note this is not a supported method and not something we have tested; if this is a production setup you may want to try it in a virtual or test setup first. If it does not work, we could supply you with the line of code that can be changed to omit the NIC count check.
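A slightly expanded sketch of the same commands, purely illustrative (dummy links created this way do not survive a reboot, so verify the behaviour in a test setup first):
# Create dummy NICs so the node matches the interface count of the other nodes.
ip link add eth4 type dummy
ip link add eth5 type dummy
ip link set eth4 up
ip link set eth5 up
ip link show eth4            # confirm the dummy interface exists
ip link show eth5
# To undo:
ip link delete eth4
ip link delete eth5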
Last edited on September 11, 2019, 7:48 am by admin · #2
natnick
4 Posts
September 11, 2019, 8:23 am
My storage nodes do physically have 6 NICs available; they are just 4x1G and 2x10G. So if I rename the interfaces as follows and simply do not patch in eth4 and eth5, it should pass the interface check. I just need to make sure never to install the iSCSI service on the storage nodes once in production.
eth0+eth1 (1G) - MNG
eth2 (10G) - Backend1
eth3 (10G) - Backend2
eth4 (1G) - iSCSI1 (disconnected)
eth5 (1G) - iSCSI2 (disconnected)
We are planning to purchase support on all 8 nodes once we are fully deployed, so I want to make sure I can call/email if/when we have issues.
Last edited on September 11, 2019, 8:26 am by natnick · #3
admin
2,930 Posts
September 11, 2019, 12:22 pm
In this case you probably do not need to do anything, since the interface count is the same. If you are using different hardware, you may need to check that the interface names/order are correct, or else use the blue menu to re-order them; but if you are using the same hardware for your hosts, they will be ordered the same.
If you do not assign the iSCSI target role to a node, it will not be serving iSCSI.
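A quick, illustrative way to sanity-check the name-to-speed mapping before deployment, assuming ethtool is available on the node (the blue console menu remains the way to re-order interfaces if needed):
# Print each interface's reported speed and link state; disconnected ports
# typically show "Unknown!" or "Link detected: no", which is expected for eth4/eth5.
for nic in eth0 eth1 eth2 eth3 eth4 eth5; do
    echo "== $nic =="
    ethtool "$nic" | grep -E 'Speed|Link detected'
done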
Last edited on September 11, 2019, 12:22 pm by admin · #4
natnick
4 Posts
September 11, 2019, 3:36 pm
Perfect. Thank you so much for your time reviewing the logic. I will report back once I have the cluster up, for the benefit of the community.
Shiori
86 Posts
September 18, 2019, 4:51 am
Just as a helpful bit:
This works; we are doing it in a production environment without issue.
As above, make sure your interface order is correct. I would still patch in the other two NICs, because your storage nodes still use those networks for inter-node communication and still take part in iSCSI transactions; it is only the nodes with the iSCSI role activated that manage the iSCSI targets and MPIO controls, but all nodes work together. Will it hurt not to connect these ports? No, not really, but it will not hurt to connect them either.
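If you are curious whether the backend ports on a storage node are actually carrying that inter-node traffic, the interface counters give a rough, illustrative view:
ip -s link show eth2   # RX/TX byte and packet counters for Backend1
ip -s link show eth3   # RX/TX byte and packet counters for Backend2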