expected performance
alienn
37 Posts
January 11, 2019, 2:33 pm
Hi,
I just deployed a PetaSAN cluster and am now trying to measure what performance the setup can deliver.
I have three identical nodes with:
- 1x Intel(R) Xeon(R) Silver 4110 (8 Cores, 16 Threads)
- 128GB RAM
- 8x 10TB Seagate Exos 10 (ST10000NM0096) SAS 7.2k drives
- 1x Seagate SSD for OS
- 1x Intel Optane 280GB as journal
- 2x 10Gbit network cards (eth0: management, iscsi-1, backend-1; eth1: iscsi-2, backend-2)
Using the integrated performance test (short), I get about 10k iops read and about 4k iops write.
That seems quite slow to me.
Can anyone tell me whether this is expected?
Cheers,
Alienn
admin
2,930 Posts
January 11, 2019, 5:04 pm
The performance varies a lot with the kind of hardware: we see systems from 3k to 60k write iops for 3 nodes. Iops will also be lower on EC pools than on replicated pools.
HDDs will give 100 to 150 iops each, and you have 24 in total, so you should not expect a lot. HDD clusters do offer very good throughput in MB/s, which is good for backups and streaming; I expect your cluster has no issue saturating the 10G NIC.
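As a rough sanity check, here is a back-of-envelope sketch, assuming ~125 iops per spindle (the midpoint of the range above) and a replicated pool with size=3:

```python
# Back-of-envelope iops ceiling for 3 nodes x 8 HDDs.
# Assumptions: ~125 iops per 7.2k spindle and a replicated pool
# with size=3, so each client write costs ~3 backend writes.
hdd_iops = 125
num_hdds = 3 * 8        # 24 spindles in total
replication = 3

raw_iops = hdd_iops * num_hdds           # ~3,000 aggregate spindle iops
write_ceiling = raw_iops / replication   # ~1,000 naive client write iops

print(f"aggregate spindle iops: ~{raw_iops}")
print(f"naive client write ceiling: ~{write_ceiling:.0f} iops")
# Journals (the Optane here) and caches absorb and coalesce small
# writes, which is why measured numbers can land well above this floor.
```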
To get a better picture of how your system is performing, run a 64-thread iops test for 10 min, then look at the charts for CPU %util, disk %util, and disk iops. Most probably the disk %util and iops will be at the limits of the HDDs.
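If you want to reproduce a comparable test by hand, something like the following fio run (driven from Python here; the device path is a placeholder, so point it only at a disposable test volume) approximates a 64-thread 4k random-write test:

```python
# Hand-rolled equivalent of a 64-thread iops test, driving fio from
# Python. Assumptions: fio is installed and /dev/sdX is a placeholder
# path -- use a scratch device, never one holding data you care about.
import json
import subprocess

result = subprocess.run(
    [
        "fio", "--name=randwrite", "--filename=/dev/sdX",
        "--rw=randwrite", "--bs=4k", "--direct=1",
        "--ioengine=libaio", "--iodepth=1", "--numjobs=64",
        "--group_reporting", "--time_based", "--runtime=600",
        "--output-format=json",
    ],
    capture_output=True, text=True, check=True,
)

stats = json.loads(result.stdout)
print("write iops:", round(stats["jobs"][0]["write"]["iops"]))
```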
For iops-demanding apps such as virtualization, HDDs by themselves will not give good performance. You either need all SSDs, or a controller with write-back cache, and/or an increase in the number of HDDs to 16-24 per node. In your case you may consider the second and third options. The advantage of a controller with cache is that it absorbs many iops in cache and then issues larger-block writes to the HDDs, so it does not stress the HDDs too much in iops; in many cases it can give a 3-5x boost in iops.
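A toy illustration of that coalescing effect, assuming 4k client writes flushed to disk in 64k chunks (a simplification):

```python
# Why a write-back cache boosts HDD iops: many small random writes
# get coalesced into fewer, larger flushes. Assumptions: 4k client
# writes, 64k cache flushes, and a spindle doing ~125 ops/s
# regardless of block size (simplified).
client_bs = 4 * 1024
flush_bs = 64 * 1024
spindle_ops = 125

writes_per_flush = flush_bs // client_bs        # 16 client writes per op
client_iops = spindle_ops * writes_per_flush    # per spindle, best case

print(f"coalescing factor: {writes_per_flush}x")
print(f"best-case client iops per spindle: ~{client_iops}")
# Real gains are smaller (3-5x per the reply above) because random
# writes rarely coalesce this perfectly.
```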
Last edited on January 11, 2019, 5:25 pm by admin
alienn
37 Posts
January 22, 2019, 4:45 pm
Can you give advice on a good controller with cache? Right now we are using LSI SAS 9300-8i controllers.
Does it make sense to add another node with different disks to this cluster in order to scale up the number of available spindles?
The additional node would have:
- 10 disks with 4TB
- 12 disks with 2TB
- 1 SSD for the system
- 1 SATA SSD as journal (1TB)
- RAID controller Areca ARC-1882LP (disks as JBOD)
Can I add additional performance with this node, or is it just a waste of time to consider this?
admin
2,930 Posts
January 22, 2019, 10:35 pm
Generally, adding more nodes will scale performance linearly.
In some cases where you have different hardware, this could actually slow things down. If your new node has less powerful hardware and/or slower disks, it will slow the cluster down. If it also has more storage capacity, it will serve more ios than the others, which again could make it a bottleneck.
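To put numbers on that (a sketch assuming capacity-proportional CRUSH weights, which is the Ceph default, using the node sizes from this thread):

```python
# How I/O share follows capacity weight under CRUSH placement
# (assuming default capacity-proportional weights). Node capacities
# from this thread: three existing 8x10TB nodes plus the proposed
# mixed node (10x4TB + 12x2TB = 64TB).
capacities_tb = {"node1": 80, "node2": 80, "node3": 80, "new": 64}
total = sum(capacities_tb.values())

for node, cap in capacities_tb.items():
    share = cap / total
    print(f"{node}: {cap}TB -> ~{share:.0%} of data (and of I/O)")
# Each node serves I/O roughly in proportion to the data it holds,
# so the slowest node's share sets a floor on cluster latency.
```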
So it is much better for your storage nodes to have similar configurations.
For controllers, both LSI and Areca work well.