
adding interfaces


Yes, have iSCSI 1 and 2 plus Management on the 1G network. Have iSCSI 1 and Management use the same subnet / IP range, and give iSCSI 2 a different subnet range; it will not be used for now. The reason you would need 2 iSCSI subnets is to support high availability: each client accesses the iSCSI disks via 2 NICs and 2 different network links / switches, which is called MPIO. In your case you will not be using iSCSI 2 (for now), but it is still better to define it; it will not consume any extra NICs since you map it to the same NIC.
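As a rough illustration of how the two iSCSI subnets translate into MPIO paths, here is a small Python sketch; the subnet and portal addresses are made up for the example and are not from your setup:

```python
import ipaddress

# Hypothetical addressing, just to illustrate the MPIO idea:
# iSCSI 1 shares the 1G Management subnet, iSCSI 2 gets its own subnet.
iscsi1_subnet = ipaddress.ip_network("10.0.1.0/24")   # shared with Management
iscsi2_subnet = ipaddress.ip_network("10.0.2.0/24")   # reserved for a future second path

# With MPIO, a client reaches the same iSCSI disk through one portal on each
# subnet, i.e. two independent paths over two NICs / switches.
portals = [ipaddress.ip_address("10.0.1.100"), ipaddress.ip_address("10.0.2.100")]

for portal in portals:
    subnet = iscsi1_subnet if portal in iscsi1_subnet else iscsi2_subnet
    print(f"MPIO path via portal {portal} on subnet {subnet}")
```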

For Backend 1 and 2, do define different subnet ranges even if you map them to the same NIC for now; in the future you may add more NICs and put each subnet on a NIC of its own, or even on several bonded NICs.
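To make the "different subnets, same NIC for now" point concrete, here is a minimal sketch of such a plan; the interface names and address ranges are only assumptions for illustration, not PetaSAN's configuration format:

```python
# Hypothetical subnet-to-NIC plan; eth0/eth1 and the ranges are assumptions.
network_plan = {
    "Management": {"subnet": "10.0.1.0/24", "nic": "eth0"},
    "iSCSI 1":    {"subnet": "10.0.1.0/24", "nic": "eth0"},  # shares the 1G subnet
    "iSCSI 2":    {"subnet": "10.0.2.0/24", "nic": "eth0"},  # defined now, used later for MPIO
    "Backend 1":  {"subnet": "10.0.3.0/24", "nic": "eth1"},
    "Backend 2":  {"subnet": "10.0.4.0/24", "nic": "eth1"},  # can move to its own NIC later
}

# Adding NICs later only means changing the "nic" values; the subnets stay the same.
for name, cfg in network_plan.items():
    print(f"{name:10s} {cfg['subnet']:15s} -> {cfg['nic']}")
```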

 

Thank you for the detailed information. Can I set up a 2-node cluster with PetaSAN, so that if one node fails the other node keeps serving? Can we continue to add additional nodes to this 2-node cluster later? Though a 2-node cluster may not be a recommended setup, this is for my own knowledge and experimentation.

 

We do not support 2-node setups; you need a minimum of 3 nodes for the Ceph and Consul quorums. You can however make one of the three nodes a VM without any OSDs or iSCSI target service and place this VM on any external host. It will just take part in the Ceph and Consul quorums.
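A minimal sketch of what that layout could look like, just to show the role split; the node names and the roles dictionary are assumptions for illustration, not PetaSAN's actual configuration:

```python
# Hypothetical role layout: two full nodes plus one lightweight quorum-only VM.
nodes = {
    "node1": {"management": True, "osds": True,  "iscsi_target": True},
    "node2": {"management": True, "osds": True,  "iscsi_target": True},
    "node3": {"management": True, "osds": False, "iscsi_target": False},  # VM, quorum only
}

quorum_members = [n for n, roles in nodes.items() if roles["management"]]
storage_nodes  = [n for n, roles in nodes.items() if roles["osds"]]

print("quorum members:", quorum_members)   # all 3 nodes count toward Ceph/Consul quorum
print("storage/iSCSI nodes:", storage_nodes)  # only the 2 physical nodes carry data
```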

What is the fault tolerance of OSD nodes and monitor nodes, assuming there are 3 OSD nodes and 3 monitor nodes?

Out of 3 OSD nodes, if 2 OSD nodes fail, can the single remaining OSD node continue to serve its data to the servers? What is the recovery mechanism in this case?

Similarly, what is the recovery mechanism for monitor nodes in these cases?

We can tolerate 1 failure out of the 3 management nodes; they run the Ceph monitors and Consul servers, and both need a quorum to be up. 2 failures out of 3 will bring the cluster down and will require a manual fix. If you lose 1 management node, we have a "replace management node" option in the deployment wizard (the third option) when adding a replacement node.
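As a back-of-the-envelope check of that quorum arithmetic (a sketch only, not PetaSAN code):

```python
def has_quorum(total_nodes: int, alive: int) -> bool:
    """Both the Ceph monitors and the Consul servers need a strict majority."""
    return alive > total_nodes // 2

TOTAL_MGMT = 3
for failed in range(TOTAL_MGMT + 1):
    alive = TOTAL_MGMT - failed
    state = "cluster up" if has_quorum(TOTAL_MGMT, alive) else "cluster down (manual fix needed)"
    print(f"{failed} management node(s) down -> {state}")
# 0 or 1 failures: quorum holds; 2 or 3 failures: no quorum.
```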

For storage nodes: technically, if you have 3 replicas you can tolerate a failure of 2 nodes and keep the cluster up with client I/O while Ceph tries to recreate copies from the surviving replica. This however is not recommended, as it may lead to inconsistent data if the surviving copy does not match the copies that went down, and there is also the question of what happens if you accept a user write and afterwards this single copy dies. So the general recommendation is: with 3 replicas you tolerate 1 storage node failure, and you safely stop I/O if you have a 2-node failure. This is what PetaSAN does if you choose a replica count of 3.
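To put the replica-3 behaviour in code form, here is a small sketch of the size / min_size idea used by Ceph pools; the numbers mirror the recommendation above, and it assumes one replica per storage node (this is an illustration, not PetaSAN's actual logic):

```python
REPLICA_COUNT = 3   # pool "size": copies kept of each object
MIN_REPLICAS = 2    # pool "min_size": copies that must be reachable to accept IO

def io_allowed(surviving_storage_nodes: int) -> bool:
    """Assuming one replica per node, accept client IO only while at least
    MIN_REPLICAS copies are still reachable."""
    return surviving_storage_nodes >= MIN_REPLICAS

for failed in range(4):
    surviving = 3 - failed
    status = "IO continues" if io_allowed(surviving) else "IO safely paused"
    print(f"{failed} storage node(s) down -> {status}")
# 1 node down: IO continues while Ceph re-replicates; 2 down: IO stops to protect data.
```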
