Secondary Ceph link using backend_2: unexpected behavior
sGoico
2 Posts
November 23, 2022, 7:54 pm
Hello everyone,
I have installed a new PetaSAN cluster; this is my first deployment. I have managed to configure almost everything as expected, but the backend_2 configuration does not do what I expected.
I added a backend_2 configuration expecting it to act as a "secondary Ceph sync link" in case the main one went down, so I edited the cluster and node configuration to add a second link on a dedicated eth interface. Everything seems to be working, but it is not really acting as a secondary link for Ceph: if I take the backend bond down, all OSDs on that cluster member go down, because the Ceph monitors listen only on the main IP address, not on all of them (as Gluster seems to do).
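For reference, the monitor map shows this (ceph mon dump is a standard Ceph command; output trimmed, the address below is illustrative):

# each monitor advertises a single address, on the main backend subnet only
ceph mon dump
...
0: [v2:10.10.2.11:3300/0,v1:10.10.2.11:6789/0] mon.node1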
Is this the expected behavior? If so, I could instead go for a 3x10Gbps bond connected to a redundant switch to make it more reliable (two links to one FPC and one link to the other FPC of the same switch).
Thank you
admin
2,930 Posts
November 23, 2022, 9:59 pm
To have a highly available backend network, all you need to do is create a bonded interface and define the backend network to use that bond. You do this from the UI Deployment Wizard. You can do the same for other networks, such as the management and services networks. After deploying, you may change the bond definitions by editing cluster_info.json manually.
You do not need to mess with backend_2. This network served a different purpose in earlier versions of PetaSAN and is left in place for compatibility: it allows splitting replication traffic onto a separate network from client and monitor traffic. The current recommendation is not to split them into separate networks; if you have extra interfaces, bond them into the single backend network to boost network speed. Users who feel they need to split this traffic can still define the backend_2 configuration manually in the config file. To summarize: backend_2 is not a redundant HA path for backend_1. If you need redundancy, create bonded interfaces for the backend network, which can be done from the UI or manually.
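For background, the split that backend_2 provides maps onto Ceph's own public/cluster network settings. A minimal ceph.conf sketch of that split (the subnets are examples only, not a recommendation):

[global]
# client and monitor traffic
public_network = 10.10.1.0/24
# OSD replication and heartbeat traffic (what backend_2 historically carried)
cluster_network = 10.10.2.0/24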
sGoico
2 Posts
November 24, 2022, 11:05 am
Hello, thank you for your answer. The bond approach is exactly what I did.
After playing a bit with backend_2, I ended up adding that third interface to the bond, so I now have a 3x10Gbps bond with a single IP address on it.
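In case it helps someone else, the end result on the node is an ordinary Linux 802.3ad bond. A rough ifupdown-style sketch (interface names and the address are illustrative; PetaSAN generates its own network config, so this only shows the shape of it):

# 3-slave LACP bond carrying the backend network
auto bond0
iface bond0 inet static
    address 10.10.2.11/24
    bond-slaves eth2 eth3 eth4
    bond-mode 802.3ad
    bond-miimon 100
    bond-lacp-rate fast

The switch side needs a matching LACP aggregate spanning the ports on both FPCs.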