Networking confusion
bogdanv
2 Posts
January 25, 2022, 1:09 pm
Hi
After reading various forum posts and documentation (different versions), I am still confused about what the setup should be for a small 4-node (2 OSD HDDs + 1 journal SSD) cluster. (By the way, is it possible to use a single SSD for both the OS and the journal partitions? Any hints?)
The nodes will have 2*1G + 2*10G network adapters.
What I am trying to achieve is a single expandable NFS storage space, a few iSCSI targets for some VMs, and HA.
This is what I'm working with now:
- 1 single public subnet
- 1 switch with 1G & 10G ports for the public network, plus a couple more switches with 1G and 10G ports
- clients (NFS) will be on a single public subnet
- clients (iSCSI) most probably limited to private net
- small Proxmox cluster on the public and/or private network (on the private one, I assume, if I want to use iSCSI)
In the QuickStart guide it says that all subnets (for mgmt, backend (1?), iSCSI 1 & 2, CIFS/SMB (which I can live without, otherwise it would require yet another public subnet...), NFS, S3) need to be separate and must not overlap. This means I can have only NFS on the public subnet (and what if I need cluster replication later?). Or only SMB? Or only S3?
In this thread a second backend network (backend 2) appears, which is optional and which takes over the replication/recovery/re-balancing (RRR) traffic? No big deal, because I can put it on a separate 10G switch.
And for backend 1 the admin says "this is the Ceph public/client network", so... do I need yet another public subnet? Isn't this supposed to handle inter-node/service communication? Is this the main backend, configured for RRR by default?
I understand that services might need to bind to different IPs, but do they need to be on separate subnets?
admin
2,930 Posts
January 25, 2022, 2:41 pm
There are many options, depending on how many interfaces you have, their type, the services you want, and the segregation you want.
You could have management on the 1G interfaces, with the two ports in an active/backup bond.
You can then bond your 10G interfaces using LACP and put all the other networks on that bond. You may want to use VLANs to achieve some segregation, at least for the backend network, since they all share the same physical interfaces. All service networks need to be on different, non-overlapping IP subnets; even for iSCSI you need 2 distinct subnets for MPIO to work.
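Purely as an illustration of that layout (the interface names, VLAN IDs and subnets below are made-up examples, not taken from any guide, and in practice you may enter the equivalent settings in the cluster's own deployment UI), a generic Linux netplan sketch of one storage node could look like this:

network:
  version: 2
  ethernets:
    eno1: {}        # first 1G port  (assumed name)
    eno2: {}        # second 1G port (assumed name)
    ens1f0: {}      # first 10G port  (assumed name)
    ens1f1: {}      # second 10G port (assumed name)
  bonds:
    bond0:                              # management on the two 1G ports
      interfaces: [eno1, eno2]
      parameters:
        mode: active-backup
        primary: eno1
      addresses: [10.0.1.11/24]         # example management subnet
    bond1:                              # all other networks on the two 10G ports
      interfaces: [ens1f0, ens1f1]
      parameters:
        mode: 802.3ad                   # LACP; the switch ports must be configured to match
        lacp-rate: fast
  vlans:
    bond1.10:                           # backend (Ceph public/client)
      id: 10
      link: bond1
      addresses: [10.0.10.11/24]
    bond1.20:                           # iSCSI subnet 1
      id: 20
      link: bond1
      addresses: [10.0.20.11/24]
    bond1.21:                           # iSCSI subnet 2 (MPIO needs a second, distinct subnet)
      id: 21
      link: bond1
      addresses: [10.0.21.11/24]
    bond1.30:                           # NFS
      id: 30
      link: bond1
      addresses: [10.0.30.11/24]

Every network above sits on its own /24, so nothing overlaps, and iSCSI gets two separate subnets so MPIO has two independent paths.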
If the same client machine needs to access different services, it should have IPs on the different subnets; these IPs can all be on the same interface if you do not want to separate traffic.
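For example (again with made-up addresses, continuing the sketch above), an NFS client that also needs to reach iSCSI subnet 1 could simply carry one address per service subnet on a single NIC:

network:
  version: 2
  ethernets:
    eth0:                       # assumed client NIC name
      addresses:
        - 10.0.30.51/24         # address on the example NFS subnet
        - 10.0.20.51/24         # address on example iSCSI subnet 1

If the service networks are carried as VLANs, each address would instead go on its own VLAN sub-interface of eth0.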