
Dedicated iSCSI Node Networking

During the cluster build I need to identify the network interfaces for the cluster. The cluster will have 3 dedicated iSCSI/MNG nodes and 5 storage nodes (12 OSDs + NVMe Bluestore each), with the option to expand to 10 storage nodes. I wanted to know if I could configure the cluster as follows.

iSCSI/MNG Nodes

  • eth0+eth1 (1G Bonded) - MNG
  • eth2 (10G) - Backend1
  • eth3 (10G) - Backend2
  • eth4 (10G) - iSCSI1
  • eth5 (10G) - iSCSI2

Storage Nodes

  • eth0+eth1 (1G Bonded) - MNG
  • eth2 (10G) - Backend1
  • eth3 (10G) - Backend2
  • eth4+eth5 (10G) - (Not Present)

As I understand it, the storage nodes, having no iSCSI role, would use the Backend (Ceph public network) to communicate with the iSCSI/MNG nodes, so there would be no purpose in having iSCSI network interfaces on the storage nodes. Is this thinking correct? Do you see any issues with setting it up like this?
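Side note: once the cluster is up, my plan to sanity-check which subnets Ceph itself maps to the backend networks (assuming the default /etc/ceph/ceph.conf location) is simply:

# show the subnets Ceph uses for public (client/OSD) and cluster (replication) traffic
grep -E 'public_network|cluster_network' /etc/ceph/ceph.conf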


I understand what you want to do, but it is not supported out of the box. Currently we require all nodes to have the same interface count so we can configure all nodes the same way and allow roles to be added to nodes dynamically. However, it will most probably be supported in the future to have different pre-fixed configurations and roles. So in your case you would need to either physically have eth4/eth5 on all nodes or remove them from all nodes; the latter case will map more than 1 subnet/VLAN per NIC.

Due to the above we enforce an interface count check during deployment when a node joins a cluster. You can probably bypass this check: after installation and before you deploy the node, open a shell via the blue node console screen (or SSH if you have already set a password) and add dummy interfaces via:

ip link add eth4 type dummy
ip link add eth5 type dummy

then join the node. Note this is not a supported method and not something we have tested; if this is a production setup you may want to try/test it in a virtual or test setup first. If this does not work we could supply you with the code line that can be changed to omit the NIC count check.
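A minimal sanity check if you go that route (standard iproute2 commands, nothing PetaSAN specific) is to confirm the dummy links exist before joining, and to remove them again if you change your mind:

# list only dummy interfaces in brief form to confirm eth4/eth5 were created
ip -br link show type dummy

# remove the dummy interfaces again if you decide not to use them
ip link delete eth4 type dummy
ip link delete eth5 type dummy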


My storage nodes do physically have 6 NICs available; they are just 4x1G and 2x10G. So if I rename the interfaces as follows and just don't patch in eth4 & eth5, it should pass the interface check. I just need to make sure to never install the iSCSI service on the storage nodes once in production.

  • eth0+eth1 (1G) - MNG
  • eth2 (10G) - Backend1
  • eth3 (10G) - Backend2
  • eth4 (1G) - iSCSI1 (disconnected)
  • eth5 (1G) - iSCSI2 (disconnected)
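To make sure the names actually line up with the right physical ports before joining, I plan to just read the reported link speed per interface (assuming ethtool is available on the node), for example:

# print the negotiated speed for each interface so the 10G ports can be matched to eth2/eth3
for i in eth0 eth1 eth2 eth3 eth4 eth5; do echo -n "$i: "; ethtool $i | grep -i speed; done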


We are planning to purchase support for all 8 nodes once we are fully deployed, so I want to make sure I can call/email if/when we have issues.

In this case you probably do not need to do anything, since the interface count is the same. If you are using different hardware you may need to check that the interface names/order are correct, else use the blue menu to re-order them; but if you are using the same hardware for your hosts, they will be ordered the same.

If you do not assign the iSCSI target role to a node, then it will not be serving iSCSI.
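If you ever want to double-check that a node is not serving iSCSI, a simple generic check (assuming the standard iSCSI port 3260) is to see whether anything is listening on it:

# show any listening TCP socket on the default iSCSI target port (no output means nothing is serving iSCSI)
ss -tlnp | grep ':3260'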

Perfect.  Thank you so much for your time reviewing the logic.  I will report back once I have the cluster up for the benefit of the community.

Just as a helpful bit: this works, we are doing it in a production environment without issue.

As above, make sure you have your order correct, and I would still patch in the other two NICs, because your storage nodes still use those networks for inter-node communication and still partake in iSCSI transactions. It is only the nodes with the iSCSI role activated that manage the iSCSI targets and MPIO controls, but all nodes work together. Will it hurt to not connect these ports? No, not really, but it also won't hurt to connect them either.