Management, Cluster and Public Network
marcels
3 Posts
January 14, 2021, 11:59 am
Dear PetaSAN, we are currently evaluating PetaSAN as an alternative to SUSE Enterprise Storage.
We configured two backend network interfaces with the idea that one would carry the public network (MON/MGR/iSCSI traffic) and the other the cluster network (OSD replication).
We generated some data, but we only see traffic on one of the two backends. Is it possible to use both backend networks?
Best Regards,
MarcelS
admin
2,930 Posts
January 14, 2021, 1:29 pm
Since PetaSAN v2.5 we no longer support adding a cluster network in the UI while building a new cluster. It is debatable whether it is better to have a separate OSD cluster network or to use a bonded interface, but there is a growing trend toward the latter. However, you can still manually add it to both ceph.conf and cluster_info.json after you deploy a new PetaSAN cluster (on all 3 nodes); after you do so, new nodes being added will honor these settings.
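For reference, a minimal sketch of what the manual edit could look like, assuming the public/iSCSI side uses 10.0.1.0/24 and the dedicated OSD cluster side uses 10.0.2.0/24 (both subnets are examples, not a recommendation). The matching backend subnet also has to be reflected in cluster_info.json on the management nodes; the key names there depend on your PetaSAN version, so copy the existing backend entry as a template rather than inventing new fields.

    # ceph.conf (path depends on your cluster name); add under [global], identical on all 3 nodes
    [global]
    # MON/MGR/iSCSI traffic
    public_network = 10.0.1.0/24
    # OSD replication and heartbeat traffic
    cluster_network = 10.0.2.0/24

    # then restart OSDs one node at a time so they pick up the new cluster_network:
    systemctl restart ceph-osd.target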
marcels
3 Posts
January 15, 2021, 5:44 am
Dear PetaSAN Admin, thank you for this clear explanation. Just one thing: are these manual configuration changes to ceph.conf and cluster_info.json left unchanged after, for example, changing the configuration in the GUI or after a PetaSAN update/upgrade?
Last edited on January 15, 2021, 5:46 am by marcels · #3
admin
2,930 Posts
January 15, 2021, 10:48 am
They will not be changed.
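A quick way to double-check after a GUI change or an upgrade (a sketch; osd.0 stands for whichever OSD id runs on the node you check):

    # confirm the settings are still present in the config file
    grep -E 'public_network|cluster_network' /etc/ceph/*.conf

    # confirm a running OSD is actually using the cluster network
    ceph daemon osd.0 config get cluster_network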
Shiori
86 Posts
January 25, 2021, 8:45 pm
Quote from admin on January 14, 2021, 1:29 pm
Since PetaSAN v2.5 we no longer support adding a cluster network in the UI while building a new cluster. It is debatable whether it is better to have a separate OSD cluster network or to use a bonded interface, but there is a growing trend toward the latter. However, you can still manually add it to both ceph.conf and cluster_info.json after you deploy a new PetaSAN cluster (on all 3 nodes); after you do so, new nodes being added will honor these settings.
How about giving us a choice? It shouldn't be too hard to add a question about network design to the setup pages: both networks on a single bonded link? Separate links for front-end and back-end networks? Rescan network interfaces and add new ones to the list?
This comes from our design point of view:
Our cluster design uses a non-standard interface (InfiniBand) that does not even show up in the interface lists, even after we enable the interface on the command line. Our choice to use InfiniBand for both the back-end and front-end networks (split across two separate switches and interface ports) has allowed 100 Gbps of throughput, which would be unreasonably expensive with 100G Ethernet. When we first started using PetaSAN in v1.5 we had good 10 Gbps dual-port cards and could move 18 Gbps with no problems; we have since upgraded to 56 Gbps cards and cabling and move just over 100 Gbps. This doesn't seem like much until you consider the VM traffic that we run, where bandwidth and latency are very important metrics. Given the sub-millisecond latency of InfiniBand versus the 1 ms average on Ethernet, plus the fully hardware-implemented network stack (not the same as offload), it is easy to see why InfiniBand is still used in the top 10 supercomputing systems.
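For what it's worth, this is roughly how we bring the IPoIB interface up by hand before PetaSAN can see it (ib0 and the address are just our example; the port name and subnet will differ on other setups):

    # load the IP-over-InfiniBand module and bring the port up
    modprobe ib_ipoib
    ip link set ib0 up
    ip addr add 10.0.2.11/24 dev ib0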
Shiori
86 Posts
December 17, 2021, 6:11 pm
So, admin?
No response in almost a year?