4 Node Setup

nocstaff@urbancom.net
9 Posts
February 20, 2025, 12:29 pm
Hello all,
I am setting up a 3-node Proxmox cluster along with a 4-node PetaSAN cluster. The PetaSAN cluster will be used for raw backup storage. In the past, I have set this up by bonding the management interface to the 2 switches, bonding the backend to the 2 switches, and using one adapter for iSCSI 1 to switch 1 and one adapter for iSCSI 2 to switch 2. Is this the right way to set up the network adapters? I ask because from the very beginning, the last setup (connected to VMware, by the way) seemed slower than I anticipated. Below is a more detailed layout of how the adapters are connected. I am wondering whether I am using too many adapters for the backend (currently four).
Current node interfaces (both switches are 10 Gb switches):
Name | Connection | PCI | Model
eth0 | Mgmt bond to switch 1 | 60:00.0 | Intel Corporation Ethernet Connection X722 for 1GbE
eth1 | Mgmt bond to switch 2 | 60:00.1 | Intel Corporation Ethernet Connection X722 for 1GbE
eth2 | iSCSI 1 to switch 1 | 18:00.0 | Intel Corporation Ethernet Controller 10-Gigabit X540-AT2 (Ethernet 10G 2P X540-t Adapter)
eth3 | iSCSI 2 to switch 2 | 18:00.1 | Intel Corporation Ethernet Controller 10-Gigabit X540-AT2 (Ethernet 10G 2P X540-t Adapter)
eth4 | Backend bond to switch 1 | 61:00.0 | Intel Corporation Ethernet Controller 10-Gigabit X540-AT2 (Ethernet Converged Network Adapter X540-T2)
eth5 | Backend bond to switch 1 | 61:00.1 | Intel Corporation Ethernet Controller 10-Gigabit X540-AT2 (Ethernet Converged Network Adapter X540-T2)
eth6 | Backend bond to switch 2 | 62:00.0 | Intel Corporation Ethernet Controller 10-Gigabit X540-AT2 (Ethernet Converged Network Adapter X540-T2)
eth7 | Backend bond to switch 2 | 62:00.1 | Intel Corporation Ethernet Controller 10-Gigabit X540-AT2 (Ethernet Converged Network Adapter X540-T2)
Cluster Network Settings
Jumbo Frames: None
Bonds:
mgmt: eth0,eth1 (LACP)
backend: eth4,eth5,eth6,eth7 (LACP)
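For reference, on a plain Linux box the layout above would look roughly like the ifupdown-style sketch below. This is only an illustration, not PetaSAN's generated configuration (PetaSAN builds its network config from the deployment wizard); the addresses, subnets, and the layer3+4 hash policy are placeholders, and running an LACP bond across two switches assumes the switches are stacked or support MLAG.

# Illustrative ifupdown sketch only -- not PetaSAN's actual config files.
# Assumes MLAG/stacked switches for the cross-switch 802.3ad bonds; addresses are placeholders.
auto bond_mgmt
iface bond_mgmt inet static
    address 10.0.1.11/24
    bond-slaves eth0 eth1
    bond-mode 802.3ad
    bond-miimon 100
    bond-xmit-hash-policy layer3+4

auto bond_backend
iface bond_backend inet static
    address 10.0.2.11/24
    bond-slaves eth4 eth5 eth6 eth7
    bond-mode 802.3ad
    bond-miimon 100
    bond-xmit-hash-policy layer3+4

# iSCSI paths stay unbonded, one subnet per switch, so client-side MPIO provides the redundancy
auto eth2
iface eth2 inet static
    address 10.0.3.11/24

auto eth3
iface eth3 inet static
    address 10.0.4.11/24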
Last edited on February 20, 2025, 1:45 pm by nocstaff@urbancom.net · #1

admin
2,934 Posts
February 20, 2025, 8:39 pm
I do not see anything wrong with your setup.
It is not clear what performance issues you are seeing, but I would try to measure Ceph RADOS performance (from the UI Benchmark) and see if it is good, then compare it to RBD and iSCSI performance.
Typically disk and CPU play a more dominant role than network in a Ceph setup.
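If you also want to check the same layers from the command line, the usual tools are rados bench for the cluster layer, rbd bench for the image layer, and fio on the client against the iSCSI-mapped device. The pool, image, and device names below are placeholders, and the fio write test overwrites whatever is on that device:

# RADOS layer: 60 s of 4 MB writes, keep the objects so a sequential-read pass can reuse them
rados bench -p testpool 60 write --no-cleanup
rados bench -p testpool 60 seq
rados -p testpool cleanup

# RBD layer: sequential 4 MB writes against a test image
rbd bench --io-type write --io-size 4M --io-threads 16 --io-total 10G testpool/testimage

# iSCSI layer, run on the client against the mapped LUN (destructive to the device)
fio --name=seqwrite --filename=/dev/sdX --rw=write --bs=4M --iodepth=16 --ioengine=libaio --direct=1 --runtime=60 --time_based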
Last edited on February 20, 2025, 8:40 pm by admin · #2

nocstaff@urbancom.net
9 Posts
February 21, 2025, 10:14 pm
Ok, I will do that.