Reliability Question
Pavel
10 Posts
February 8, 2021, 10:11 pm
I have a cluster of three nodes, each with 20 SAS disks of 8 TB. The task is to write video in many streams; about 4 TB arrives per day.
I decided not to use an SSD cache. Two copies are configured at the moment; three copies are too much overhead.
There were two HDD deaths in three months, both quickly rebuilt. Happy with that.
I cannot use RAID 10 or RAID 60 with disks of this size; a rebuild fails more often than it succeeds.
I use only iSCSI, and the transfer speed (200 MB/s) suits me.
In your experience, how much worse is this setup than RAID 10?
admin
2,930 Posts
February 8, 2021, 10:59 pm
If your question is whether to create RAID arrays to use for the OSDs, then no: it is better to use the drives as OSDs directly, without RAID.
Pavel
10 Posts
February 9, 2021, 9:58 am
I mean the reliability of a single RAID 10 storage system versus a 2-replica Ceph cluster.
How often will Ceph fail?
Shiori
86 Posts
December 17, 2021, 5:24 pm
Drive failure rates have nothing to do with the mode of service: drives will fail just as often in RAID 10/60 as they will in a Ceph cluster. What matters is the service load; if you are constantly reading from and writing to a set of drives, they will wear out and fail faster than if they were lightly used.
The main difference between the two is that a Ceph cluster spreads the workload across a much larger number of disks, and its replication allows faster rebuilds while the cluster stays active. RAID arrays usually have to be taken offline (or at least run heavily degraded) to rebuild. A RAID 60 stripes data across RAID 6 sets, so a rebuild runs at roughly the read speed of the surviving drives in the affected set, and in the case of a dual drive failure the parity data is used to help rebuild the array. Both processes are slower than copying placement groups from multiple sources at once. The difference in recovery rate is measured in days, and on top of that your cluster stays active and keeps serving data while it rebuilds.
Ceph really shines with 3x replication and a minimum of 5 nodes, and it gets better with more nodes and more replicas across many drives.
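To put rough numbers on the single RAID 10 box versus a size-2 Ceph pool, here is a back-of-the-envelope sketch in Python. Every figure in it (annualized failure rate, rebuild windows, number of peer disks) is an assumption chosen for illustration, not a measurement from this cluster.

import math

# --- All numbers below are illustrative assumptions, not measurements. ---
AFR = 0.03            # assumed annualized failure rate per HDD (3%)
HOURS_PER_YEAR = 8760

def p_fail(window_hours, afr=AFR):
    """Probability that one disk fails within the given window,
    assuming a constant failure rate (exponential model)."""
    return 1.0 - math.exp(-afr * window_hours / HOURS_PER_YEAR)

# RAID 10: after a disk dies, data is lost if its one mirror partner
# dies before the rebuild finishes.  Assume ~24 h to rebuild an 8 TB drive.
raid_rebuild_h = 24
p_loss_raid10 = p_fail(raid_rebuild_h)

# Ceph size=2: after an OSD dies, the surviving copies of its PGs sit on
# many peer disks (assume ~40 here, i.e. the disks of the other two nodes).
# Some data is lost if ANY of those peers dies before re-replication
# completes, but recovery is parallel, so assume a much shorter window (~4 h).
ceph_recovery_h = 4
peer_disks = 40
p_loss_ceph2 = 1.0 - (1.0 - p_fail(ceph_recovery_h)) ** peer_disks

print(f"P(second failure during RAID 10 rebuild):  {p_loss_raid10:.2e}")
print(f"P(second failure during Ceph 2x recovery): {p_loss_ceph2:.2e}")

In a toy model like this the size-2 pool can actually show a higher per-incident chance of losing something, because the second copies sit on many peer disks, even though only the placement groups shared by the two dead disks would be lost rather than the whole array. That is one more reason 3x replication is the usual recommendation.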