
Very slow on HDD. I will close my project


Hi guys!

I have been running PetaSAN for about a year. The system kept growing, and I expected performance to grow with it. It did not.

A cluster of three nodes, each with 15 × 8 TB SAS 7200 RPM drives, replica size 2, produces ~300 IOPS!! (Windows iSCSI MPIO clients.)

20 HDDs or 40 makes no difference. 10G network, 128 GB RAM and 10 cores per node.

Of course, maybe I'm doing something wrong, but I have talked on specialized channels (Red Hat Ceph) and this seems to be the situation for everyone.

Maybe we should admit that this solution does not work on HDDs?

 

PS: I want to note that I had no problems with fault tolerance, but that alone is not enough. I use vSAN in other projects and it is ten times faster with the same sequential writes/reads.

 

Have you run any benchmarks from the UI before? If so, what results did you get for IOPS / throughput? If not, can you run some benchmarks now in production?

For IOPS workloads like virtualisation you should go all-flash; you can get 100K IOPS with 3 nodes. At the very least, use some SSD helpers for journal and cache. A 256 GB SSD journal can serve 5 HDDs, at least doubling their IOPS: a pure HDD OSD performs many metadata IOPS (object size, offset, CRC, modified time, etc.) during data I/O, and taking these IOPS out to the SSD is really needed.

The only workloads suitable for pure HDD without any SSD helpers are low-IOPS, large-block-size workloads such as backups, video streaming/recording, and S3.
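A rough sanity check of the journal advice above, as a sketch: the per-HDD IOPS figure is my assumption (not measured on this cluster), and the 2x journal speed-up is the figure quoted in this reply.

```python
# Back-of-envelope ceiling on client write IOPS for the cluster in this thread
# (3 nodes x 15 HDDs, replica size 2). Assumptions: ~100 random-write IOPS per
# 7200 RPM HDD (my estimate), SSD journal doubling per the reply above.

def client_write_iops(num_hdds, replica_size, iops_per_hdd=100, journal_factor=1.0):
    """Raw HDD IOPS, scaled by the journal speed-up, divided by replication."""
    raw = num_hdds * iops_per_hdd * journal_factor
    return raw / replica_size

pure_hdd = client_write_iops(45, replica_size=2)
with_journal = client_write_iops(45, replica_size=2, journal_factor=2.0)

print(f"pure HDD   : ~{pure_hdd:.0f} client write IOPS ceiling")
print(f"SSD journal: ~{with_journal:.0f} client write IOPS ceiling")
```

Note this ceiling assumes enough concurrency to keep all disks busy; with a low queue depth you stay far below it.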

Benchmark: 1 minute, 8 threads; the client is node 1 of 3.

IOPS: 272 write, 12,876 read

Utilisation: memory 99%, CPU 35-60%, OSDs 21-30% write / 9-11% read.

Throughput: 234 MB/s write, 1031 MB/s read

Utilisation: memory 99%, CPU 17-24%, OSDs 22-37% write / 15-22% read.

The cluster was under light load.

 

My workload is a video archive (simply storing large files) plus backups, on separate pools. Storage is 50% full.

Disk utilisation under load is only 15-20% (70% while scrubbing). OSD latency is 50-100 ms.


8 threads: 272 write / 12,876 read IOPS -> 1 thread: 34 write / ~1,600 read IOPS
8 threads: 234 MB/s write / 1031 MB/s read -> 1 thread: ~30 MB/s write / ~128 MB/s read

The performance per thread looks OK for a pure HDD setup (note writes are lower due to replication).
8 threads is not enough for 40 disks; this is why the disks are not busy, and you would get the same performance even with 20 disks.
You need to put several concurrent operations / threads per HDD to get the most out of it.
If you test with 128 threads you will get much higher total performance.

If, on the other hand, your workload does not have concurrency / queue depth, then you will get limited performance: you are bound by latency, which determines single-thread performance. In that case adding more OSD drives will not help.
To lower latency, at least add journals: a 512 GB SSD will serve 5 HDDs and will cut latency by at least 2x.
For large block sizes and low-concurrency workloads you can also consider using RAID0 to speed up your OSDs.
In your case it also helps to change MaxTransferLength in the Windows iSCSI registry settings:
default is 0x00040000 hex (262144 bytes)
change to 0x00400000 hex (4194304 bytes)
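The latency-bound case can be sketched with Little's Law: at queue depth 1, IOPS ≈ queue depth / latency. The numbers below are taken from the benchmark earlier in the thread (34 write IOPS per thread, 50-100 ms reported OSD latency); the ~30 ms effective latency is an inference, not a measurement.

```python
# Little's Law: sustained IOPS ~= queue_depth / latency.
# Shows why single-thread performance depends only on latency,
# and why adding OSDs does not change it.

def iops(latency_s, queue_depth=1):
    return queue_depth / latency_s

print(iops(0.030))                    # ~33 IOPS: matches the 34 write IOPS/thread above
print(iops(0.015))                    # halve latency (e.g. SSD journal) -> ~67 IOPS
print(iops(0.030, queue_depth=128))   # more concurrency -> ~4267 IOPS from the same disks
```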
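Just to verify the arithmetic on the two registry values quoted above:

```python
# Sanity check of the MaxTransferLength values from the reply above.
default_len = 0x00040000
new_len = 0x00400000

assert default_len == 262144    # 256 KiB
assert new_len == 4194304       # 4 MiB
print(new_len // default_len)   # the new limit is 16x the default
```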

In addition, in your case you may want to create a RAID volume on the Windows clients to increase the queue depth.

I started evacuating the data; there is no difference in speed whether I read from two clients or from one.

50 MB/s, 200 IOPS average, with scrubbing off.

Is this 50 MB/s total for all clients, or per client?

As per the earlier replies: have you tried an SSD journal? If you are using a large block size, did you increase the registry setting?

 

I have the same experience with a hard-disk-only setup (3 nodes with 4 × 12 TB).

 

Did any of the previous replies help?

I've added SSDs for journal and caching; the speed increased slightly but is still too slow.
