adding interfaces
admin
February 21, 2018, 1:56 pm
Yes, put iSCSI 1, iSCSI 2, and Management on the 1G network. Have iSCSI 1 and Management use the same subnet IP range, and give iSCSI 2 a different subnet range; it will not be used for now. The reason you would need two iSCSI subnets is to support high availability, where each client accesses the iSCSI disks via two NICs and two different network links/switches; this is called MPIO. In your case you will not be using iSCSI 2 for now, but it is still better to define it, and it will not use any extra NICs since you map it to the same NIC.
For Backend 1 and 2, do define different subnet ranges even if you map them to the same NIC for now; in the future you may add more NICs and have each subnet on a NIC by itself, or even on several bonded NICs.
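To make the subnet advice above concrete, here is a minimal sketch of such a layout using Python's standard ipaddress module. The interface names and address ranges are hypothetical examples, not values PetaSAN requires; the check only verifies that every range other than the intentionally shared Management/iSCSI 1 pair is distinct.

from ipaddress import ip_network

# Hypothetical subnet plan following the advice above: Management and
# iSCSI 1 share one range on the 1G NIC, while iSCSI 2 and the two
# backend networks each get a range of their own, even though they all
# map to the same physical NIC for now.
subnets = {
    "Management": ip_network("10.0.1.0/24"),
    "iSCSI 1": ip_network("10.0.1.0/24"),   # deliberately the same range as Management
    "iSCSI 2": ip_network("10.0.2.0/24"),   # reserved for a future MPIO path
    "Backend 1": ip_network("10.0.3.0/24"),
    "Backend 2": ip_network("10.0.4.0/24"),
}

# Every range except the intentional Management/iSCSI 1 pair should be
# distinct, so that a second NIC or bond can later take over a subnet
# without renumbering anything.
distinct = [net for name, net in subnets.items() if name != "iSCSI 1"]
for i, a in enumerate(distinct):
    for b in distinct[i + 1:]:
        assert not a.overlaps(b), f"{a} overlaps {b}"
print("Subnet plan is consistent.")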
vijayabhasker.gandla@spectraforce.com
February 22, 2018, 12:55 pm
Thank you for the detailed information. Can I set up a 2-node cluster with PetaSAN, so that if one node fails the other keeps serving? Can we then continue to add nodes to this 2-node cluster later? It may not be a recommended setup, but this is for my own knowledge and experimentation.
admin
February 22, 2018, 4:14 pm
We do not support 2-node setups; you need a minimum of 3 nodes for the Ceph and Consul quorums. You can, however, make one of the three nodes a VM without any OSDs or iSCSI target service and place this VM on any external host. It will just take part in the Ceph and Consul quorums.
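For context on the 3-node minimum: Ceph monitors and Consul servers both need a strict majority of their members to remain alive. A quick sketch of that majority-quorum arithmetic (generic, not PetaSAN-specific code):

def quorum_size(members: int) -> int:
    """Smallest strict majority of a membership of the given size."""
    return members // 2 + 1

def failures_tolerated(members: int) -> int:
    """How many members can fail while a strict majority remains."""
    return members - quorum_size(members)

for n in (2, 3, 5):
    print(f"{n} nodes: quorum of {quorum_size(n)}, tolerates {failures_tolerated(n)} failure(s)")

# Output:
# 2 nodes: quorum of 2, tolerates 0 failure(s)   <- why a 2-node cluster has no fault tolerance
# 3 nodes: quorum of 2, tolerates 1 failure(s)
# 5 nodes: quorum of 3, tolerates 2 failure(s)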
vijayabhasker.gandla@spectraforce.com
February 28, 2018, 11:51 am
What is the fault tolerance of the OSD nodes and monitor nodes, assuming there are 3 OSD nodes and 3 monitor nodes?
If 2 of the 3 OSD nodes fail, can the single remaining OSD node continue to serve its data to the servers? What is the recovery mechanism in this case?
Similarly, what is the recovery mechanism for the monitor nodes in these cases?
admin
February 28, 2018, 4:21 pm
We can tolerate 1 failure out of the 3 management nodes, which host the Ceph monitors and the Consul servers; both need a quorum to stay up. Two failures out of 3 will bring the cluster down and require a manual fix. If you lose 1 management node, we have a "replace management node" option in the deployment wizard (the third option) for adding a replacement node.
For storage nodes: technically, if you have 3 replicas you could tolerate a failure of 2 nodes and keep the cluster up with client I/O while Ceph tries to recreate copies from the surviving replica. This is not recommended, however, as it may lead to inconsistent data if the remaining copy does not match the copies that went down, and it also raises the question of what happens if you accept a user write and afterwards this single copy dies. So the general recommendation is: with 3 replicas, tolerate 1 storage node failure and safely stop I/O on a 2-node failure. This is what PetaSAN does if you choose a replica count of 3.
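The storage-node behaviour described above maps onto Ceph's replicated-pool size and min_size settings. The sketch below assumes one replica per node and a min_size of 2, which is the usual Ceph default for size 3 but is an assumption here rather than something stated in the thread:

def io_state(size: int, min_size: int, failed_nodes: int) -> str:
    """Rough I/O state of a replicated pool, assuming one replica per node."""
    surviving = size - failed_nodes
    if surviving <= 0:
        return "data unavailable"
    if surviving >= min_size:
        return "I/O continues (degraded)" if failed_nodes else "I/O continues (healthy)"
    return "I/O paused until enough replicas recover"

# Replica count 3, with at least 2 copies required before I/O is accepted.
for failed in range(4):
    print(f"{failed} failed node(s): {io_state(3, 2, failed)}")

# 0 failed node(s): I/O continues (healthy)
# 1 failed node(s): I/O continues (degraded)
# 2 failed node(s): I/O paused until enough replicas recover
# 3 failed node(s): data unavailable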