
Missing NIC interfaces: Possible Bug or limitation?

Is this a possible bug or just a limitation in version 3.1.0? My server has 12 network interfaces. When setting up the first cluster server with the setup wizard, the NIC INTERFACES list in the CLUSTER NIC SETTINGS section shows eth1 to eth9, and the NEW BOND section shows eth0 to eth9. I want to bond eth8/eth9 as bond0 and eth10/eth11 as bond1. We'll add them manually, but you might want to know.

In CLUSTER NETWORK SETTINGS, under the backend interfaces, the pull-down list shows eth0 to eth11.

At the  iSC

Current Node Interfaces:
Name MAC Address PCI Model
eth0 94:18:82:82:53:1c 02:00.0 Broadcom Inc. and subsidiaries NetXtreme BCM5719 Gigabit Ethernet PCIe (Ethernet 1Gb 4-port 331i Adapter)
eth1 94:18:82:82:53:1d 02:00.1 Broadcom Inc. and subsidiaries NetXtreme BCM5719 Gigabit Ethernet PCIe (Ethernet 1Gb 4-port 331i Adapter)
eth10 48:0f:cf:ef:8f:91 84:00.0 Mellanox Technologies MT27520 Family [ConnectX-3 Pro] (InfiniBand FDR/Ethernet 10Gb/40Gb 2-port 544+QSFP Adapter)
eth11 48:0f:cf:ef:8f:92 84:00.0 Mellanox Technologies MT27520 Family [ConnectX-3 Pro] (InfiniBand FDR/Ethernet 10Gb/40Gb 2-port 544+QSFP Adapter)
eth2 94:18:82:82:53:1e 02:00.2 Broadcom Inc. and subsidiaries NetXtreme BCM5719 Gigabit Ethernet PCIe (Ethernet 1Gb 4-port 331i Adapter)
eth3 94:18:82:82:53:1f 02:00.3 Broadcom Inc. and subsidiaries NetXtreme BCM5719 Gigabit Ethernet PCIe (Ethernet 1Gb 4-port 331i Adapter)
eth4 14:02:ec:93:a0:ac 08:00.0 Intel Corporation 82599ES 10-Gigabit SFI/SFP+ Network Connection (Ethernet 10Gb 2-port 560SFP+ Adapter)
eth5 14:02:ec:93:a0:ad 08:00.1 Intel Corporation 82599ES 10-Gigabit SFI/SFP+ Network Connection (Ethernet 10Gb 2-port 560SFP+ Adapter)
eth6 14:02:ec:93:97:a8 81:00.0 Intel Corporation 82599ES 10-Gigabit SFI/SFP+ Network Connection (Ethernet 10Gb 2-port 560SFP+ Adapter)
eth7 14:02:ec:93:97:a9 81:00.1 Intel Corporation 82599ES 10-Gigabit SFI/SFP+ Network Connection (Ethernet 10Gb 2-port 560SFP+ Adapter)
eth8 04:09:73:d0:dd:f1 04:00.0 Mellanox Technologies MT27520 Family [ConnectX-3 Pro] (InfiniBand FDR/Ethernet 10Gb/40Gb 2-port 544+FLR-QSFP Adapter)
eth9 04:09:73:d0:dd:f2 04:00.0 Mellanox Technologies MT27520 Family [ConnectX-3 Pro] (InfiniBand FDR/Ethernet 10Gb/40Gb 2-port 544+FLR-QSFP Adapter)

Thank you

Currently the combo box list is limited to 10 interfaces. This limit is hardcoded rather than read from the current node, because nodes can have a different number and configuration of network interfaces; the management node you connect to in order to define the interfaces used for a specific service may not have those interfaces itself. In PetaSAN, the network settings for a service apply to all nodes running that service (i.e. having the role): it is a cluster-wide setting, not a node-by-node one.

As for your case, some possible solutions:

If you can, remove or disable unused interfaces; there is no point keeping them unused in PetaSAN.

Another way, before deployment: go to the interface naming menu on the blue node console and rename the interfaces you want to use in PetaSAN so they fall within eth0 to eth9. For example, if you have an existing eth12 you want to use and an unused eth6, just swap their names.

Not sure if this will help, but note that you can define or change the service settings at any time after you build the cluster. You can also edit /opt/petasan/config/cluster_info.json manually to configure or add bonds and service interfaces after you deploy your cluster.
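
If you do edit it by hand, something like the following Python sketch keeps it safe: it takes a timestamped backup, loads the JSON, and writes it back. The bond entry shown in the comment is only a hypothetical illustration; the actual key names depend on your PetaSAN version, so inspect the file first and mirror the structure you find there.

#!/usr/bin/env python3
# Minimal sketch for hand-editing /opt/petasan/config/cluster_info.json.
# The bond entry in the comment below is hypothetical; mirror the structure
# already present in your file for your PetaSAN version.
import json
import shutil
import time

CONFIG = "/opt/petasan/config/cluster_info.json"

# Keep a timestamped backup before touching anything.
backup = "{}.{}.bak".format(CONFIG, time.strftime("%Y%m%d-%H%M%S"))
shutil.copy2(CONFIG, backup)

with open(CONFIG) as f:
    info = json.load(f)

# See what the current schema actually looks like before editing.
print(sorted(info.keys()))

# Hypothetical edit -- adjust to whatever structure you saw above, e.g.:
# info["bonds"].append({"name": "bond1", "interfaces": "eth10,eth11"})

with open(CONFIG, "w") as f:
    json.dump(info, f, indent=2)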

Let me know if you still see the 10 interfaces as a limitation and what you think the number should be.

We have three HPE DL360G9 servers with the same configuration. We cannot disable NIC ports in our testing, but we would like to bond two of the 1GbE ports on the embedded NIC for Management. The HPE DL380G9 server has four embedded 1GbE ports, two dual-port 544+QSFP 10/40 adapters (Mellanox MT27520), and two dual-port 560SFP+ adapters (Intel). We are bonding one port on each 544+QSFP for Backend; the other two ports are iSCSI1/iSCSI2. We bond one port on each 560SFP+ for CIFS, and the final bond on the remaining 560SFP+ ports is for S3. We will have to modify the cluster_info.json file.
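
To keep the three nodes consistent after hand-editing, we plan to run a small sanity check like the one below. It is nothing PetaSAN-specific, just our own workflow assumption: it confirms the file still parses as JSON and prints the interface/bond-related keys so the nodes can be diffed.

#!/usr/bin/env python3
# Quick sanity check after hand-editing cluster_info.json on a node.
# Not a PetaSAN tool -- it only verifies the file is valid JSON and prints
# keys that look interface/bond related so the nodes can be compared.
import json
import sys

path = sys.argv[1] if len(sys.argv) > 1 else "/opt/petasan/config/cluster_info.json"

with open(path) as f:
    info = json.load(f)  # raises an error if the edit broke the JSON

for key in sorted(info):
    if any(s in key.lower() for s in ("eth", "bond", "iscsi", "cifs", "s3", "backend")):
        print(key, "=", json.dumps(info[key]))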

Our goal is to test native InfiniBand vs. Ethernet and recommend PetaSAN to our customers, most of whom have large amounts of video files (250TB to 20PB+). We are documenting the installation and making recommendations as an alternative to using cloud storage.

Thank you for responding