IP redistribution after a node failure
Tarmabal
4 Posts
July 5, 2017, 11:55 am
Hey, I have noticed that when an iSCSI target node fails, the virtual IPs move to the other 2 nodes, but when the node starts again, the IPs don't move back to their original node; instead they stay on the 2 nodes that didn't fail.
How could I change that?
admin
2,930 Posts
July 5, 2017, 3:27 pm
You are correct, we do not fail the resources back, but this is also quite typical of many HA solutions. Note that failing the resources back may pause client I/O for a few seconds, which may not be desirable. Currently a failed node that comes back is treated like a new server joining the cluster: it will be preferred to serve new disks because of its low load, but there is no concept of a node owning a resource and getting it back.
If you need to redistribute the load for a disk, currently you need to manually stop the disk and restart it. In the future we plan to provide finer control to stop a particular path and restart it; we also have longer-term plans for optionally re-assigning paths dynamically based on load statistics.
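To illustrate the idea, here is a minimal Python sketch of this kind of assignment policy (an illustration only, not PetaSAN's actual code; all names are hypothetical). Virtual IPs always go to the least-loaded live node, and a recovered node simply rejoins empty, so nothing fails back to it automatically, but it naturally attracts the next new assignment:

```python
# Hypothetical sketch: least-loaded virtual IP assignment with no failback.
from dataclasses import dataclass, field

@dataclass
class Node:
    name: str
    alive: bool = True
    vips: set = field(default_factory=set)

def least_loaded(nodes):
    """Pick the live node currently serving the fewest virtual IPs."""
    return min((n for n in nodes if n.alive), key=lambda n: len(n.vips))

def fail_node(nodes, failed):
    """On failure, move each of the node's VIPs to the least-loaded survivor."""
    failed.alive = False
    for vip in list(failed.vips):
        failed.vips.discard(vip)
        least_loaded(nodes).vips.add(vip)

def recover_node(node):
    """A recovered node rejoins empty: existing VIPs stay where they are,
    but with zero load it is preferred for the next new assignment."""
    node.alive = True

nodes = [Node("node1"), Node("node2"), Node("node3")]
for node, vip in zip(nodes, ["10.0.0.11", "10.0.0.12", "10.0.0.13"]):
    node.vips.add(vip)

fail_node(nodes, nodes[0])                  # node1's VIP moves to a survivor
recover_node(nodes[0])                      # node1 comes back empty; no failback
least_loaded(nodes).vips.add("10.0.0.14")   # a *new* VIP lands on node1
print({n.name: sorted(n.vips) for n in nodes})
```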
Last edited on July 5, 2017, 3:29 pm · #2