few questions - please someone answer
anilgidla
5 Posts
June 19, 2018, 10:59 am
- What are industry best practices for configuring datastores in PetaSAN?
- How does PetaSAN handle redundancy in the environment, and how much disk does it use for that?
- Datastore size and clustering?
admin
2,930 Posts
June 19, 2018, 11:45 am
PetaSAN gives you the flexibility to create disks of various sizes and numbers of active paths; the best configuration depends on the use case. Some client applications let you create large disks that many different client nodes access simultaneously via a clustered file system, while other use cases prefer a large number of smaller disks, each owned by a single client. So it is better to look at the use cases. We have a couple of use-case examples in our docs.
For redundancy we need a minimum of 3 nodes with 1 storage disk each; there is no limit on the maximum number of nodes or disks. You define the number of data replicas in the UI as 2 or 3 (in 2.1 it will be up to 10), which means your cluster can tolerate 1 or 2 disk/node failures. Ceph is self-healing, so even if a disk or node fails it will automatically re-create the lost replicas itself. Adding more nodes/disks increases cluster capacity and performance, but the redundancy and replica count stay the same. At the iSCSI layer we built our redundancy on the Consul system. One limit, for both Ceph and Consul, is that among the management nodes (the first 3 nodes) we can tolerate only 1 node failure, not 2 out of 3.
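To give a feel for the overhead question, here is a rough, illustrative sketch of the capacity math for replicated pools (the node counts and disk sizes below are example assumptions, not figures from this thread):

```python
# Illustrative sketch of replicated-pool capacity math for a PetaSAN/Ceph
# cluster. All numbers are hypothetical examples.

def usable_capacity_tb(raw_tb: float, replicas: int) -> float:
    """Each object is stored `replicas` times, so usable space is raw / replicas."""
    return raw_tb / replicas

def tolerated_failures(replicas: int) -> int:
    """With N replicas, up to N - 1 copies can be lost before data is lost."""
    return replicas - 1

# Example: 3 nodes x 4 disks x 4 TB each = 48 TB raw, 3 replicas.
raw = 3 * 4 * 4.0
print(usable_capacity_tb(raw, replicas=3))  # 16.0 TB usable
print(tolerated_failures(replicas=3))       # 2 disk/node failures tolerated
```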
Last edited on June 19, 2018, 11:47 am by admin · #2