
4 Node Setup

Hello all,

I am setting up a 3-node Proxmox cluster along with a 4-node PetaSAN cluster. The PetaSAN cluster will be used for raw backup storage. In the past I have set this up by bonding the management interface across the 2 switches, bonding the backend across the 2 switches, and running one adapter for iSCSI 1 to switch 1 and one adapter for iSCSI 2 to switch 2. Is this the right way to set up the network adapters? I ask because, from the very beginning, the last setup (connected to VMware, by the way) seemed slower than I anticipated. Below is a more detailed layout of how the adapters are connected. I am also wondering whether I am using too many adapters for the backend (currently 4).

Current node interfaces (both switches are 10Gb switches):

Name   Connection                  PCI       Model
eth0   Mgmt bond to switch 1       60:00.0   Intel Corporation Ethernet Connection X722 for 1GbE
eth1   Mgmt bond to switch 2       60:00.1   Intel Corporation Ethernet Connection X722 for 1GbE
eth2   iSCSI 1 to switch 1         18:00.0   Intel Corporation Ethernet Controller 10-Gigabit X540-AT2 (Ethernet 10G 2P X540-t Adapter)
eth3   iSCSI 2 to switch 2         18:00.1   Intel Corporation Ethernet Controller 10-Gigabit X540-AT2 (Ethernet 10G 2P X540-t Adapter)
eth4   Backend bond to switch 1    61:00.0   Intel Corporation Ethernet Controller 10-Gigabit X540-AT2 (Ethernet Converged Network Adapter X540-T2)
eth5   Backend bond to switch 1    61:00.1   Intel Corporation Ethernet Controller 10-Gigabit X540-AT2 (Ethernet Converged Network Adapter X540-T2)
eth6   Backend bond to switch 2    62:00.0   Intel Corporation Ethernet Controller 10-Gigabit X540-AT2 (Ethernet Converged Network Adapter X540-T2)
eth7   Backend bond to switch 2    62:00.1   Intel Corporation Ethernet Controller 10-Gigabit X540-AT2 (Ethernet Converged Network Adapter X540-T2)

Cluster Network Settings

Jumbo Frames: None

Bonds:
mgmt: eth0,eth1 (LACP)

backend: eth4,eth5,eth6,eth7 (LACP)
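
As a sanity check on the bonds themselves, a small script like the sketch below can confirm that each bond actually negotiated 802.3ad and that every member linked at the expected speed. The kernel bond device names here are an assumption and may differ from the UI names on your nodes:

#!/usr/bin/env python3
"""Print bonding mode, slave state and negotiated speed for each bond.
Minimal sketch: BONDS lists assumed kernel device names -- check
/proc/net/bonding/ or `ip link` for the real names on your nodes."""

import os

BONDS = ["mgmt", "backend"]   # assumed device names, adjust to match your nodes

for bond in BONDS:
    path = f"/proc/net/bonding/{bond}"
    if not os.path.exists(path):
        print(f"{bond}: no bonding info at {path}")
        continue
    with open(path) as f:
        text = f.read()
    print(f"== {bond} ==")
    for line in text.splitlines():
        # keep only the lines that matter: mode, slaves, link state, speed
        if line.strip().startswith(("Bonding Mode", "Slave Interface", "MII Status", "Speed")):
            print("  " + line.strip())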

 

I do not see anything wrong with your setup.

It is not clear what performance issues you are seeing, but I would try to measure Ceph RADOS performance (from the UI benchmark) and see if it is good, then compare it to RBD and iSCSI performance.
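
For example, something like the sketch below can drive both benchmarks back to back. The pool and image names are placeholders; point it at a test pool and an existing test RBD image, since the bench writes real data:

#!/usr/bin/env python3
"""Rough RADOS vs RBD write throughput comparison. Sketch only:
POOL and IMAGE are assumptions, and the rbd bench options assume a
reasonably recent Ceph release."""

import subprocess

POOL = "test-pool"       # placeholder: a pool you can safely write to
IMAGE = "bench-image"    # placeholder: an existing RBD image in that pool

def run(cmd):
    print("$", " ".join(cmd))
    subprocess.run(cmd, check=True)

# Cluster-level write benchmark (the same layer the UI benchmark exercises)
run(["rados", "bench", "-p", POOL, "30", "write", "--no-cleanup"])
run(["rados", "-p", POOL, "cleanup"])

# RBD-level write benchmark against the test image
run(["rbd", "bench", "--io-type", "write", "--io-size", "4M",
     "--io-total", "10G", f"{POOL}/{IMAGE}"])

The iSCSI side is best measured from a client against the mapped disk, for example with fio, so you can see at which layer the numbers drop.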

Typically, disk and CPU play a more dominant role than the network in a Ceph setup.
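
One rough way to check that on your hardware is to sample CPU and disk utilization on an OSD node while the benchmark is running, for example with a sketch like this (the disk name prefixes are assumptions; adjust them to match your OSD devices):

#!/usr/bin/env python3
"""Sample /proc/stat and /proc/diskstats twice and print CPU busy %
and per-device I/O time % over the interval. Sketch only: partitions
are listed alongside whole disks, and DISK_PREFIXES is an assumption."""

import time

DISK_PREFIXES = ("sd", "nvme")   # assumed OSD device name prefixes
INTERVAL = 5                     # seconds between the two samples

def cpu_times():
    with open("/proc/stat") as f:
        fields = [int(x) for x in f.readline().split()[1:]]
    idle = fields[3] + fields[4]           # idle + iowait
    return sum(fields), idle

def disk_io_ms():
    io = {}
    with open("/proc/diskstats") as f:
        for line in f:
            parts = line.split()
            if parts[2].startswith(DISK_PREFIXES):
                io[parts[2]] = int(parts[12])   # field 13: ms spent doing I/O
    return io

t1, i1 = cpu_times()
d1 = disk_io_ms()
time.sleep(INTERVAL)
t2, i2 = cpu_times()
d2 = disk_io_ms()

print(f"CPU busy: {100.0 * (1 - (i2 - i1) / (t2 - t1)):.1f}%")
for name in sorted(d1):
    util = 100.0 * (d2.get(name, d1[name]) - d1[name]) / (INTERVAL * 1000)
    print(f"{name}: {util:.1f}% I/O time")

If the disks sit near 100% I/O time (or the CPUs near 100% busy) while the network is nowhere near 10Gb, the bonds are not the bottleneck.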

OK, I will do that.