
Reliability Question

I have a cluster of three nodes, each with 20 × 8 TB SAS disks. The task is to write video in many streams; about 4 TB arrives per day.

I decided not to use an SSD cache. Replication is currently set to 2 copies; 3 copies feel like too much overhead.

There were two HDD deaths in three months; both rebuilt quickly. Happy with that.

I cannot use RAID 10 or RAID 60 with disks of this size: rebuilds fail more often than they succeed.

I use only iSCSI, and the transfer speed (~200 MB/s) suits me.

 

How much worse is this setup than RAID 10, in your experience?

If your question is whether to build a RAID array to use for the OSDs, then no: it is better to use the drives directly as OSDs, without RAID.

I mean the reliability of a single RAID 10 storage box versus a 2-replica Ceph pool.

How often will Ceph fail?

Drive failure rates have nothing to do with the mode of service: drives will fail just as often in RAID 10/60 as they will in a Ceph cluster. What matters is service load; if you are constantly reading and writing to a set of drives, they will wear out and fail faster than if they were lightly used.

The main difference between the two is that a Ceph cluster spreads the workload across a much larger number of disks, and its replication allows for faster rebuilds while the cluster stays active. A RAID array either has to be taken offline or runs badly degraded while rebuilding, and a RAID 60 (striped RAID 6 sets) will rebuild at roughly the sequential read speed of a single drive; in the case of a dual drive failure, the parity data is used to rebuild the array. Both processes are slower than copying placement groups from multiple sources. The difference is that RAID recovery is in the days category, while a Ceph cluster is still active and serving data while it rebuilds.
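To make that concrete, here is a back-of-envelope sketch of the rebuild-time difference. All the numbers (per-drive throughput, recovery throttle) are illustrative assumptions, not measurements from this cluster; the point is only that RAID funnels the rebuild through one drive, while Ceph re-replicates a lost OSD's placement groups from many surviving drives in parallel.

```python
# Rough rebuild-time comparison (assumed numbers, for illustration only).

DRIVE_TB = 8               # capacity of the failed drive
SEQ_MBPS = 200             # assumed sustained speed of one SAS drive
SURVIVING_DRIVES = 59      # 3 nodes x 20 drives, minus the failed one
RECOVERY_MBPS_PER_DRIVE = 50  # assumed per-drive Ceph recovery throttle

def hours(terabytes, mbps):
    """Time to move `terabytes` of data at `mbps` MB/s, in hours."""
    return terabytes * 1024 * 1024 / mbps / 3600

# RAID mirror/parity rebuild: bottlenecked on a single drive's speed.
raid_rebuild = hours(DRIVE_TB, SEQ_MBPS)

# Ceph re-replication: many drives each contribute a little bandwidth.
ceph_rebuild = hours(DRIVE_TB, SURVIVING_DRIVES * RECOVERY_MBPS_PER_DRIVE)

print(f"RAID rebuild of one drive: ~{raid_rebuild:.0f} h")
print(f"Ceph re-replication:       ~{ceph_rebuild:.1f} h")
```

With these assumed numbers the single-drive rebuild takes on the order of half a day, while the distributed recovery finishes in under an hour, and the cluster keeps serving throughout.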

Ceph really shines with 3x replication and a minimum of 5 nodes, and it gets better with more nodes and more replicas across many drives.
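A rough way to see why 2 replicas are risky: with size=2, a single additional drive failure during the recovery window can destroy the only remaining copy of some placement groups; with size=3, two further overlapping failures are needed. The sketch below uses an assumed annual failure rate and recovery window (not measured values) and a deliberately simplified independence model:

```python
# Illustrative data-loss odds per drive-failure incident (toy model,
# assumed AFR and recovery window; real durability math is more involved).

AFR = 0.05             # assumed annual failure rate per drive
DRIVES = 60            # 3 nodes x 20 drives
RECOVERY_HOURS = 2     # assumed re-replication window after a failure

# Probability one given drive fails within the recovery window.
p_window = AFR * RECOVERY_HOURS / (24 * 365)

# size=2: any of the other 59 drives failing in the window may hold
# the sole surviving copy of some PGs.
p_loss_2x = 1 - (1 - p_window) ** (DRIVES - 1)

# size=3: roughly, a second AND a third overlapping failure are needed.
p_loss_3x = p_loss_2x * (1 - (1 - p_window) ** (DRIVES - 2))

print(f"size=2 loss odds per incident: {p_loss_2x:.2e}")
print(f"size=3 loss odds per incident: {p_loss_3x:.2e}")
```

Even in this crude model the third replica cuts the per-incident loss probability by several orders of magnitude, which is why the usual advice is size=3, min_size=2.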