setup freeze on step 6
milton
8 Posts
May 24, 2017, 7:39 pm
Now everything works. The problem was that I used the same IP range for the two interfaces. I was able to run a test for my iSCSI HA study; the read/write speed was about 1/4 of the other systems I tested. It's a good system, but it's not what I'm looking for right now.
Thanks for the help, and good luck with the project.
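A minimal way to catch this kind of misconfiguration up front is to verify that the two interface subnets do not overlap before deploying. The sketch below uses Python's standard ipaddress module; the subnet values are made-up examples, not the poster's actual ranges.

# Sanity check for the mistake described above: the two backend
# interfaces should live on non-overlapping subnets.
import ipaddress

def subnets_overlap(cidr_a: str, cidr_b: str) -> bool:
    """Return True if the two CIDR ranges share any addresses."""
    net_a = ipaddress.ip_network(cidr_a, strict=False)
    net_b = ipaddress.ip_network(cidr_b, strict=False)
    return net_a.overlaps(net_b)

# Same range on both interfaces -> the broken setup described above.
print(subnets_overlap("10.0.1.0/24", "10.0.1.0/24"))  # True (bad)
# Distinct range per interface -> what fixed it.
print(subnets_overlap("10.0.1.0/24", "10.0.2.0/24"))  # False (ok)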
admin
2,918 Posts
May 24, 2017, 7:43 pm
Glad it worked. Thanks for your feedback. PetaSAN does carry the overhead of a distributed system compared to a traditional SAN, so at a small scale it is not as fast. It does, however, scale out: the more disks and nodes you add, the faster it gets.
To start getting decent performance from spinning drives, we need about 24 drives in total (8 per host x 3 hosts). With SSDs we need about 4 per host as a minimum; our recommendation is 10-12 per host. Ceph uses object storage, so all sequential client I/O results in random I/O on the disks, where seek latency has a major impact on performance. This is especially true for a small number of spinning drives.
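To make the scaling argument concrete, here is a rough back-of-envelope sketch in Python. The per-drive IOPS figure and the replica count are illustrative assumptions, not PetaSAN benchmarks; real numbers depend on journals, caching, and workload.

# Back-of-envelope sketch of why Ceph wants many spindles: random-I/O
# capacity grows roughly linearly with drive count, while replication
# multiplies writes. Both constants below are assumptions.
SPINNER_RANDOM_IOPS = 75   # assumed random IOPS for one 7.2k spinning drive
REPLICAS = 3               # assumed Ceph pool replication size

def estimated_client_write_iops(num_drives: int) -> float:
    """Rough aggregate client write IOPS: raw drive IOPS / replica count."""
    return num_drives * SPINNER_RANDOM_IOPS / REPLICAS

for drives in (3, 12, 24, 48):
    print(f"{drives:3d} drives -> ~{estimated_client_write_iops(drives):.0f} client write IOPS")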
Last edited on May 25, 2017, 4:06 am · #22