Which networks should I set up?
evankmd82
11 Posts
July 28, 2020, 10:57 pm
Hello,
I am setting up a new PetaSAN cluster across 3 nodes, and it will be put into commercial production. Each node has 2x 1Gbps ports and 4x 10GigE ports. I was planning to use the 4x 10GigE ports for SAN I/O, 1x 1Gbps for the public network, and 1x 1Gbps shared between the backend + CIFS networks. Would you recommend a different approach?
Would it be better to combine the management + backend subnets on a single 1Gbps port, or the backend + CIFS, or should CIFS get one of the 10GigE ports? We will have 16x 2TB SSDs in each storage node and would like to get the best performance possible. Is CIFS I/O intensive?
admin
2,930 Posts
July 29, 2020, 12:30 am
First, the backend network carries the highest traffic. For example, when you write via the iSCSI subnets, the combined bandwidth of both iSCSI subnets multiplied by the number of data replicas (default x3) flows through the backend network.
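As a rough illustration of that multiplier (the numbers are assumed purely for the example): if both 10G iSCSI subnets were saturated with writes, that is 2 x 10 Gbps = 20 Gbps of client writes, and with the default 3 replicas roughly 20 x 3 = 60 Gbps of aggregate write traffic would cross the backend network, spread across the cluster nodes.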
The other point is that your workload determines whether CIFS will generate a lot of traffic; you may have no need for it at all, or it may be the other way around.
Just a possibility:
bond 2x10G for backend
bond 2x10G for iSCSI 1, iSCSI 2, CIFS on different vlans
bond 2x1G for management
evankmd82
11 Posts
July 29, 2020, 4:47 pm
OK, when you refer to "backend", is this the actual disk I/O over the network? Or wouldn't that be "iSCSI 1 and iSCSI 2"? Can you explain exactly what the differences are between the traffic on the "backend" network versus iSCSI 1/2 and CIFS? I assume CIFS is the replication traffic?
admin
2,930 Posts
July 29, 2020, 5:48 pm
PetaSAN uses several layers. The storage layer is served by the Ceph engine, which talks over the backend network (we also use the backend for other things). On top of this layer we have iSCSI/CIFS/NFS servers: they talk to clients on one side via the iSCSI/CIFS/NFS protocols on the public client networks, and on the other side they read/write data from the internal Ceph engine over the backend network. You can think of the iSCSI/CIFS/NFS servers as "gateways" between the clients and the backend.
Although they are different subnets, it is possible they share the same network interfaces.
CIFS/SMB is a protocol for Windows file sharing.
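To put the layers in a minimal sketch of the data path (purely illustrative):
clients --(public iSCSI/CIFS/NFS subnets)--> iSCSI/CIFS/NFS gateway services on the PetaSAN nodes --(backend subnet)--> Ceph engine
Client traffic arrives on the public subnets; the gateways then generate the corresponding Ceph traffic, including replication, on the backend subnet.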
evankmd82
11 Posts
July 29, 2020, 9:49 pm
Thank you very much. We do not intend to use CIFS or NFS, just iSCSI. We intend to use PetaSAN for storage of KVM virtual machines from OpenNebula. Of course, speed is a major factor. Do you still suggest this layout?
bond 2x10G for backend
bond 2x10G for iSCSI 1, iSCSI 2, CIFS on different vlans
bond 2x1G for management
Or can we just forget about CIFS if we don't intend to use that protocol? Based on the quick start guide, I thought it was required (https://gyazo.com/780a7fe234db0997dc45652e69115e2d).
I previously used OnApp, and I've created my own custom storage servers based on FreeNAS with multiple hypervisors. In those cases we had 3 networks: 1) public, 2) backup/management, 3) SAN network. All disk I/O, whether NFS or iSCSI, went through the "SAN network", so we would give 4x 10GigE connections to the SAN and 2x 10GigE to each hypervisor, and would then use LACP or bonding just for this iSCSI I/O traffic. In this case, would what we call the "SAN network" be what you are calling iSCSI 1 and iSCSI 2?
Thanks again for your prompt help. We have also reached out about your consulting services and intend to take at least ongoing Basic Support from PetaSAN. Great product!
Last edited on July 29, 2020, 9:52 pm by evankmd82 · #5
admin
2,930 Posts
July 30, 2020, 12:50 am
Thanks a lot for the feedback 🙂 we look forward to working with you.
You can go with the previous layout. But if you also do not care about CIFS/NFS (and S3, which is coming), it could be better to use:
bond 2x10G for backend
2x10G for iSCSI 1 and iSCSI 2 (for MPIO it is preferable to use separate interfaces rather than a bond)
bond 2x1G for management
Yes, you can ignore the CIFS interface in version 2.5: you can define it but not use it. In version 2.6 you do not have to define it and can add it at any time if needed. You can upgrade to 2.6 online.
You can treat the iSCSI subnets as your SAN network. However, the analogy with your other system is not exact: that system probably served local/RAID disks over iSCSI. Since PetaSAN is scale-out, there is an additional "backend" network to reach the backend storage instead of local disks.
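As one possible concrete mapping of this layout onto your 2x 1G + 4x 10G nodes (interface names are placeholders, assuming eth0/eth1 are the 1G ports and eth2-eth5 the 10G ports):
eth0 + eth1 (1G)  -> bond -> management
eth2 + eth3 (10G) -> bond -> backend
eth4 (10G) -> iSCSI 1
eth5 (10G) -> iSCSI 2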
Last edited on July 30, 2020, 12:51 am by admin · #6
evankmd82
11 Posts
August 4, 2020, 11:45 pm
How do you suggest going about the bond setup for the management network? The installer/wizard does not seem to expect it. I tried setting up the bond interface (and the port settings on the switch for the channel-group) prior to running through the wizard (which I could only do by first setting it to use a single NIC port, downloading the ifenslave package from apt, and then setting up the bond), but the wizard did not recognize the bond0 interface and kept undoing my changes, setting it back to the single NIC that I defined during the install. I ended up leaving it all on the single NIC. Is there supposed to be a way to set up this bond in the GUI, or do I need to edit these settings manually somewhere? Otherwise, I was able to create the other bond via the GUI, and everything else works as expected.
admin
2,930 Posts
August 5, 2020, 12:19 am
The Deployment Wizard does allow bond creation on the management interface. Maybe try a mode that does not require switch configuration, such as active-backup, since you are already connected; you do not need an active-active bond on management.
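For reference, the practical difference between the two bonding modes discussed in this thread (as implemented by the standard Linux bonding driver):
active-backup (mode 1): only one link is active at a time, the other takes over on failure; no switch-side configuration is needed.
802.3ad/LACP (mode 4): aggregates the links for more bandwidth, but requires a matching channel-group/port-channel (LACP) configuration on the switch.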