
Which networks should I set up?

Hello,

I am setting up a new PetaSAN cluster with 3 nodes that will be put into commercial production. Each node has 2x 1Gbps ports and 4x 10GigE ports. I was planning to use the 4x 10GigE ports for SAN I/O, then 1x 1Gbps for the public network and 1x 1Gbps shared between the backend + CIFS networks. Would you recommend a different approach?

Would it be better to combine the management + backend subnets on a single 1Gbps port, or the backend + CIFS, or should CIFS get one of the 10GigE ports? We will have 16x 2TB SSDs in each storage node and would like to get the best performance possible. Is CIFS I/O intensive?

First, the backend network carries the highest traffic. For example, when you write over the iSCSI subnets, the combined write bandwidth of both iSCSI subnets multiplied by the number of data replicas (default x3) will flow through the backend network.
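
As a rough illustration (made-up numbers, not a measurement): if clients push 5 Gbps of writes on each of the two iSCSI subnets, that is 10 Gbps of incoming writes, and with 3 replicas roughly 10 x 3 = 30 Gbps ends up crossing the backend network. This is why the backend usually gets the biggest pipe.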

The other point is that it is your workload that determines whether CIFS will generate a lot of traffic; you may have no need for it at all, or it may be the other way around.

Just a possibility:

bond 2x10G for backend

bond 2x10G for iSCSI 1, iSCSI 2, CIFS on different vlans

bond 2x1G for management
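
To make the VLAN part concrete, here is a rough sketch of how a 2x10G bond carrying three tagged VLANs could be expressed in classic ifupdown syntax. The interface names, VLAN IDs and addresses are invented for illustration only; PetaSAN normally handles bond creation during deployment, so treat this purely as a picture of the layout.

    # /etc/network/interfaces fragment (hypothetical names and IDs)
    # needs the ifenslave and vlan packages
    auto bond1
    iface bond1 inet manual
        bond-slaves eth2 eth3
        bond-mode 802.3ad          # LACP, requires matching switch config
        bond-miimon 100

    auto bond1.10                  # iSCSI 1 subnet on VLAN 10
    iface bond1.10 inet static
        address 10.10.10.11
        netmask 255.255.255.0

    auto bond1.20                  # iSCSI 2 subnet on VLAN 20
    iface bond1.20 inet static
        address 10.10.20.11
        netmask 255.255.255.0

    auto bond1.30                  # CIFS subnet on VLAN 30
    iface bond1.30 inet static
        address 10.10.30.11
        netmask 255.255.255.0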

Ok, when you refer to "backend", is this the actual disk I/O over the network? Or wouldn't that be iSCSI 1 and iSCSI 2? Can you explain exactly what the difference is between the traffic on the "backend" network vs iSCSI 1/2 and CIFS? I assume CIFS is the replication traffic?

PetaSAN uses several layers. The storage layer is served by the Ceph engine, which talks over the backend network (we also use the backend for other things). On top of this layer we have iSCSI/CIFS/NFS servers: they talk to clients on one side via the iSCSI/CIFS/NFS protocols on the public client networks, and on the other side they read/write data from the internal Ceph engine over the backend network. You can think of the iSCSI/CIFS/NFS servers as "gateways" between the clients and the backend.

Although they are different subnets, it is possible they share the same network interfaces.

CIFS/SMB is a protocol for Windows file sharing.

 

Thank you very much.   We do not intend to use CIFS or NFS.   Just iSCSI.  We intend to use PetaSAN for storage of KVM virtual machines from OpenNebula.    Of course speed is a major factor.     Do you still suggest this layout?

bond 2x10G for backend

bond 2x10G for iSCSI 1, iSCSI 2, CIFS on different vlans

bond 2x1G for management

or can we just forget about CIFS if we don't intend to use that protocol?    Based on the quick start guide, I thought it was required (https://gyazo.com/780a7fe234db0997dc45652e69115e2d)

I previously used OnApp and created my own custom storage servers based on FreeNAS with multiple hypervisors. In those cases we had 3 networks: 1) public, 2) backup/management, 3) SAN network. All disk I/O, whether it was NFS or iSCSI, went through the "SAN network", so we would give 4x 10GigE connections to the SAN and 2x 10GigE to each hypervisor, and then use LACP or bonding just for this iSCSI I/O traffic. In this case, would what we call the "SAN network" be what you are calling iSCSI 1 and iSCSI 2?

Thanks again for your prompt help.   We also reached out for your consulting services and intend to do at least continuing Basic Support from PetaSAN.   Great product!

Thanks a lot for the feedback 🙂 we will look forward to working with you.

You can go with the previous layout. But if you do not care about CIFS/NFS (+ S3 coming), then it could be better to use:

bond 2x10G for backend

2x10G for iSCSI 1, iSCSI 2 (for MPIO it is preferable to have these on separate interfaces rather than a bond; see the sketch below)

bond 2x1G for management

Yes, you can ignore the CIFS interface in version 2.5: you can define it but not use it. In version 2.6 you do not have to define it and can add it anytime if needed. You can upgrade to 2.6 online.

You can treat the iSCSI subnets as your SAN network. However, the analogy with your other system is not exact: that system probably served local/RAID disks over iSCSI. Since PetaSAN is scale-out, we have an additional "backend" network for the backend storage instead of local disks.
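
For the MPIO point above: conceptually, each iSCSI subnet sits on its own unbonded 10G port, so MPIO has two fully independent paths. A minimal sketch (interface names and addresses are invented, just to show the one-subnet-per-NIC idea):

    auto eth2                      # iSCSI 1 subnet
    iface eth2 inet static
        address 10.10.10.11
        netmask 255.255.255.0

    auto eth3                      # iSCSI 2 subnet
    iface eth3 inet static
        address 10.10.20.11
        netmask 255.255.255.0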

How do you suggest going about the bond setup for the management network? The installer/wizard does not seem to expect it. I tried setting up the bond interface (and the port settings on the switch for the channel-group) prior to running through the wizard (which I could only do by first installing on a single NIC port, downloading the ifenslave package from apt, and then setting up the bond), but the wizard did not recognize the bond0 interface and kept undoing my changes / setting it back to the single NIC that I defined during the install. I ended up leaving it all on the single NIC. Is there supposed to be a way to set up this bond in the GUI, or do I need to edit these settings manually somewhere? Otherwise, I was able to create the other bond via the GUI and everything else works as expected.

The Deployment Wizard does allow bond creation on the management interface. Maybe try a mode that does not require switch configuration, such as active-backup: since you are already connected, you do not need an active-active bond on management.
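
For reference, a minimal sketch of what an active-backup bond looks like in ifupdown syntax. The interface names and the address are hypothetical; the point is that this mode needs no channel-group / LACP configuration on the switch side, which is why it is the easiest choice for the management bond.

    # requires the ifenslave package
    auto bond0
    iface bond0 inet static
        address 192.168.10.11       # management IP (example)
        netmask 255.255.255.0
        bond-slaves eth0 eth1       # the two 1G ports
        bond-mode active-backup     # no switch-side bond/LACP config needed
        bond-miimon 100             # link monitoring interval in ms
        bond-primary eth0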