network design choice?
Shiori
86 Posts
January 25, 2021, 8:46 pm
Quote from admin on January 14, 2021, 1:29 pm
Since PetaSAN v2.5 we no longer support adding a cluster network in the UI while building a new cluster. It is debatable whether it is better to have a separate OSD cluster network or to use a bonded interface, but there is a growing trend toward the latter. However, you can still manually add it to both ceph.conf and cluster_info.json after you deploy a new PetaSAN cluster (on all 3 nodes); once you do, new nodes being added will honor these settings.
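For anyone else taking that manual route: the ceph.conf side is just the standard Ceph cluster-network option; for cluster_info.json I won't guess at exact key names since I don't have the schema in front of me, but the matching backend subnet has to be added there too (the file should live under /opt/petasan/config/ on each node, if I remember right, so check an existing copy for the real keys). A minimal sketch, with an example subnet only:

    # /etc/ceph/<cluster-name>.conf on all 3 management nodes
    [global]
        # example range only; use your own backend subnet
        cluster network = 10.0.3.0/24

The OSD daemons read this at start-up, so a (rolling) restart of the OSD services is needed before replication traffic actually moves onto the separate network.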
How about giving us a choice? It shouldn't be too hard to add a network-design question to the setup pages: both networks on a single bonded link? Separate links for the front-end and back-end networks? Rescan the network interfaces and add new ones to the list?
This request comes from our own design experience:
Our cluster design uses a non-standard interface (InfiniBand) that does not even show up in the interface list, even after we enable the interface on the command line (sketched below). Our choice to use InfiniBand for both the back-end and front-end networks (split across two separate switches and interface ports) has given us 100 Gbps of throughput that would be unreasonably expensive to reach with 100G Ethernet. When we first started using PetaSAN at v1.5 we had good dual-port 10 Gbps cards and could move 18 Gbps with no problems; we have since upgraded to 56 Gbps cards and cabling and now move just over 100 Gbps. That doesn't sound like much until you consider the VM traffic we run, where bandwidth and latency are critical metrics. Given the sub-millisecond latency of InfiniBand versus the roughly 1 ms average on Ethernet, plus a network stack implemented entirely in hardware (not the same thing as offload), it is easy to see why InfiniBand is still used in the top 10 supercomputing systems.
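For context, "enabling the interface on the command line" means bringing up IPoIB by hand before (and outside of) the deployment wizard, roughly like this; the interface name and addressing here are examples, not our actual values:

    # load the IP-over-InfiniBand module and check that the HCA port is Active
    modprobe ib_ipoib
    ibstat
    # bring the IPoIB interface up and give it a backend address
    ip link set ib0 up
    ip addr add 10.0.3.11/24 dev ib0
    ip link show ib0

Even with ib0 up and addressed like this, the deployment UI's interface list never shows it, which is exactly why a network-design question (or a rescan option) in the wizard would help.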
Shiori
86 Posts
July 5, 2021, 7:55 pm
Six months later and no comment?
Shiori
86 Posts
December 17, 2021, 6:12 pm
Still no comment?