PetaSAN in our 2 datacenters for production
Syscon
23 Posts
September 27, 2019, 3:06 pm
Quote from Syscon on September 27, 2019, 3:06 pm
Hi, we would like to use PetaSAN in our 2 datacenters for production.
The idea is to place 3 storage NODES in the first and the 3 others in the second datacenter.
I understood we should add 2 NODES (1 per datacenter) as full management NODEs; these could be VMs. This is to make the PetaSAN cluster available via iSCSI to our ESXi servers in both datacenters. Is this the way to go? (We want to have one iSCSI connection in DC1 and one in DC2.)
We prefer to use 2TB SSD disks and need 30TB of effective storage. So a pool with 2 replicas would need 30 disks (5 disks per NODE).
For each of the 6 NODEs we would like to choose this hardware:
- Dell PowerEdge R620 (R620 chassis + mainboard v2)
- Server chassis has space for 10 2.5" SAS/SSD disks
- CPU: 1x 1.70GHz/Ten Core/QPi 7.20/Cache 25 MB/TDP 70W/64-bit XEON E5-2650L v2
- 64 GB memory
- Dell PERC H710P Mini Mono, 1GB NV, BBU
- 2x 120 GB SSD for OS (disks in RAID1)
- (5x 2TB SSD)
Will the performance be sufficient for approximately 160 VMs (half of them demanding high IOPS, the rest low to medium)?
What if we use compression? What impact will that have on performance and on the compression ratio?
Our NL-IX connection between the datacenters is 1 Gbit/s and connects the 2x 3 nodes as one cluster. Is this sufficient? What if this connection goes down temporarily (are there steps to be taken to avoid problems with the cluster)?
Thanks in advance for your response!
Last edited on September 27, 2019, 3:07 pm by Syscon · #1
admin
2,930 Posts
September 27, 2019, 7:24 pm
Quote from admin on September 27, 2019, 7:24 pm
General hardware comments: it seems fine, but with SSD OSDs it is not advisable to use a controller with cache; it is better to use an HBA in plain JBOD mode. The type of SSD makes a huge difference: try to use an enterprise disk known to work well with Ceph; it must have good sync write speed and DWPD. You can test sync write speed from the PetaSAN blue console menu.
So a pool with 2 replica’s would need 30 disks (5 disks per NODE).
This is not correct: each iSCSI disk will be distributed/stored across all available disks. Also, it is strongly advised to use replica x3.
As for the main issue of the datacenters: you really need to decide whether you want 1 PetaSAN cluster spread across the 2 datacenters with synchronous i/o going on between them, or 2 separate clusters with asynchronous replication of disks to be used during failover/disaster recovery.
For a 1-cluster solution: you should consider connection latency, as it could be a real bottleneck for IOPS- and latency-sensitive VMs since all your i/o is synchronous. Also, Ceph and Consul require 3 monitor/management servers to avoid split brain, so it is a bit challenging where to set up the 3rd server: you could either set it up bare-metal and active in one datacenter with a standby in the other, using some replication such as drbd to achieve this, or set this server up as a VM and let HA/failover be managed by the hypervisor, but make sure this VM is stored outside the PetaSAN storage.
For a 2-cluster solution: you will need to set up remote/async replication, which is supported in 2.3.0. The advantage is much better performance (IOPS/latency), but in case of failover your VMs may not have the latest data.
So both solutions have pros/cons you will have to evaluate.
For compression: it has a small effect on performance so you should test it yourself.
Do not forget we provide professional support if you need this. Good luck.
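For reference, a rough fio equivalent of that sync write test (just a sketch, not the exact test the blue console runs; /dev/sdX is a placeholder and the run overwrites the device, so only point it at an unused disk):

# 4k sequential writes with an fsync after each write, queue depth 1
fio --name=sync4k --filename=/dev/sdX --direct=1 --fsync=1 \
    --rw=write --bs=4k --numjobs=1 --iodepth=1 \
    --runtime=60 --time_based --group_reporting

A disk that only manages a few hundred of these fsync'd 4k writes per second will hold the whole cluster back; good enterprise SSDs with power-loss protection typically reach tens of thousands.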
Last edited on September 27, 2019, 7:35 pm by admin · #2
Syscon
23 Posts
October 3, 2019, 1:43 pm
Quote from Syscon on October 3, 2019, 1:43 pm
Hi Admin,
Thanks for your quick reply!
We would certainly like to have professional support (this is one of the reasons for choosing PetaSAN).
The SSD disks we are looking at are the Samsung 860 PROs. They run fast in our current storage. But looking around for experiences using them with Ceph, I read mixed reports… What is your opinion/experience? (I have not yet been able to test them from within PetaSAN.)
Just to clarify: our idea was to use 30 disks of 2TB each (assuming replica x2 would be sufficient), giving 30 TB of effective storage.
Then spread these 30 disks over the 6 NODES (resulting in 5 disks per NODE).
Also, we want all 6 nodes (3 in each datacenter) to be storage AND monitor/management servers at the same time (sorry for not being clear in my first post). In this scenario could a split-brain situation be avoided (with the 1-cluster solution)? I guess the connection between the two datacenters will also be a bottleneck in this scenario (regarding latency)? Or is there a way to guide the traffic (what about the CRUSH map)?
Our wish is to have one cluster, usable in both datacenters, and, as a backup, replicate to another cluster. As I understand it, replica x3 is much advised (is a factor of 2 simply too risky?)
Thank you!
admin
2,930 Posts
October 3, 2019, 7:40 pm
Quote from admin on October 3, 2019, 7:40 pm
The 860 PRO is probably not a good choice for Ceph. I would recommend you test it via the console menu in PetaSAN and look at the sync write IOPS. Other things to look for are its DWPD lifetime and how it handles power loss.
As I pointed out earlier, the issue with a single cluster running across 2 dispersed locations is performance degradation due to latency. This could be OK for large-block-size workloads such as backups and streaming, but could be a killer for high-IOPS and latency-sensitive apps like virtualization and databases.
Sure, the CRUSH map is flexible and will do whatever you tell it about where to place your data, but if you need to store your data synchronously across both centers, it will not help with latency issues.
Split brain is not something to worry too much about: if you go with the 1-cluster approach, then as mentioned earlier you would need to fail over one of your monitors using an external HA mechanism like drbd or hypervisor HA. The main worry is the added network latency due to the remote sites.
Yes, 3 replicas is now the standard way. For a 2-datacenter setup you may go for 4 replicas and use CRUSH to place 2 in each. Another method is to use EC with 3+3 or 4+4, which needs 6 or 8 nodes; it gives you better storage overhead but is not recommended for latency-sensitive workloads.
As indicated earlier, you could consider a 2-cluster setup with async replication; this solves the above issues but will not guarantee your data is synced to the latest.
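To illustrate the 4-replica / 2-per-datacenter option: a sketch of what such a rule could look like in a decompiled CRUSH map, assuming the hosts have already been moved under two datacenter buckets in the CRUSH hierarchy (the rule name and placement are just examples):

rule replicated_2dc {
        id 2
        type replicated
        min_size 2
        max_size 4
        # pick 2 datacenters, then 2 hosts in each -> 4 replicas, 2 per site
        step take default
        step choose firstn 2 type datacenter
        step chooseleaf firstn 2 type host
        step emit
}

The pool would then be set to size 4 and pointed at this rule, e.g. with ceph osd pool set <pool> size 4 and ceph osd pool set <pool> crush_rule replicated_2dc.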
Syscon
23 Posts
December 4, 2019, 1:27 pm
Quote from Syscon on December 4, 2019, 1:27 pm
Hi Admin,
We decided to use 3 nodes separately in each of the two datacenters. For now I have built one cluster (of 3 nodes) and started testing.
The final hardware per NODE looks like this (datacenter-grade SSD disks and a controller without cache, as you advised):
- Dell PowerEdge R620 (R620 chassis + mainboard v2)
- Server chassis has space for 10 2.5" SAS/SSD disks
- CPU: 2x 1.70GHz/Ten Core/QPi 7.20/Cache 25 MB/TDP 70W/64-bit XEON E5-2650L v2
- 128 GB memory
- Dell PERC H310 Mini Mono
- 1x KST-SEDC500M/1920G SSD for OS
- 3x KST-SEDC500M/1920G SSDs as OSDs
- 2x 10Gb + 2x 1Gb network (iSCSI2+Heartbeat1 and iSCSI1+Heartbeat2 combined on the two 10Gb ports; Management has its own 1Gb adapter)
Benchmark tests look OK:

Benchmark                     Read        Write
4K IOPS, 64 threads           50128       9610
4K IOPS, 128 threads          47428       13323
4M throughput, 128 threads    1186 MB/s   1164 MB/s
CPU, OSD and network utilization hardly reach 25% at most.
We are using 4 iSCSI paths
But as soon as we connect an ESXi server to it and run some tests, we encounter much lower speeds.
What we tested:
- Copy a Virtual Machine from the local ESXi storage to the PetaSAN storage
- Running speed tests (read/write and read+write) within the VM
In all these cases the read speed reaches max. 300 MB/s and the write speed 90 MB/s.
When I connect and test a NAS4Free storage instead, the read speed reaches 720 MB/s and the write speed 810 MB/s (!).
What could be the problem? Thanks in advance for your response!
admin
2,930 Posts
December 4, 2019, 2:19 pm
Quote from admin on December 4, 2019, 2:19 pm
1. Can you do a 4k IOPS test from the UI with 1 thread / 1 client, so we can measure latency?
2. Can you measure the 4k sync write speed of the KST DC500 using the blue console menu? The disk needs to be a raw unused device, so use an extra drive or delete an OSD and use its disk for the test (I assume you are still testing).
3. Can you test the speed from a VM using fio (Linux guest) or diskspd (Windows guest) at 4k random writes?
4. Did you apply the recommendations in our VMware guide?
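For point 3, something along these lines in a Linux guest should mirror the 4k random write test (a sketch only; the file path and size are arbitrary, and --numjobs or --iodepth can be raised to 32 for the multi-threaded case):

# 4k random writes, direct I/O, single outstanding request
fio --name=vm-randwrite --filename=/root/fio-test.dat --size=2G \
    --direct=1 --rw=randwrite --bs=4k --iodepth=1 --numjobs=1 \
    --runtime=60 --time_based --group_reporting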
Last edited on December 4, 2019, 2:20 pm by admin · #6
Syscon
23 Posts
December 5, 2019, 11:44 am
Quote from Syscon on December 5, 2019, 11:44 am
Hi admin, thanks for your quick reply!
- The result for IOPS (1 thread / 1 client): Write: 707, Read: 2800 (do you need more results from this test?)
- One unused (but identical) disk tested from the blue console (4K, 1 thread) -> Sequential: Read 16169 IOPS, Write 15630 IOPS, Write SYNC 2548 IOPS. Random: Read 15894 IOPS, Write 15849 IOPS.
- Command Line: diskSpd.exe -c2G -d60 -r -w100 -t1 -o32 -b4K -L C:\Beheer\testfile.dat
Input parameters:
timespan: 1
-------------
duration: 60s
warm up time: 5s
cool down time: 0s
measuring latency
random seed: 0
path: 'C:\Beheer\testfile.dat'
think time: 0ms
burst size: 0
using software cache
using hardware write cache, writethrough off
performing write test
block size: 4096
using random I/O (alignment: 4096)
number of outstanding I/O operations: 32
thread stride size: 0
threads per file: 1
using I/O Completion Ports
IO priority: normal
System information:
computer name: IOTestPC01
start time: 2019/12/05 11:42:34 UTC
Results for timespan 1:
*******************************************************************************
actual test time: 60.00s
thread count: 1
proc count: 8
CPU | Usage | User | Kernel | Idle
-------------------------------------------
0| 7.08%| 0.47%| 6.61%| 92.92%
1| 1.48%| 0.03%| 1.46%| 98.52%
2| 0.03%| 0.03%| 0.00%| 99.97%
3| 1.67%| 0.70%| 0.96%| 98.33%
4| 0.86%| 0.10%| 0.76%| 99.14%
5| 0.83%| 0.26%| 0.57%| 99.17%
6| 0.21%| 0.05%| 0.16%| 99.79%
7| 1.07%| 0.18%| 0.89%| 98.93%
-------------------------------------------
avg.| 1.65%| 0.23%| 1.43%| 98.35%
Total IO
thread | bytes | I/Os | MiB/s | I/O per s | AvgLat | LatStdDev | file
-----------------------------------------------------------------------------------------------------
0 | 1020190720 | 249070 | 16.22 | 4151.19 | 7.679 | 34.377 | C:\Beheer\testfile.dat (2048MiB)
-----------------------------------------------------------------------------------------------------
total: 1020190720 | 249070 | 16.22 | 4151.19 | 7.679 | 34.377
Read IO
thread | bytes | I/Os | MiB/s | I/O per s | AvgLat | LatStdDev | file
-----------------------------------------------------------------------------------------------------
0 | 0 | 0 | 0.00 | 0.00 | 0.000 | N/A | C:\Beheer\testfile.dat (2048MiB)
-----------------------------------------------------------------------------------------------------
total: 0 | 0 | 0.00 | 0.00 | 0.000 | N/A
Write IO
thread | bytes | I/Os | MiB/s | I/O per s | AvgLat | LatStdDev | file
-----------------------------------------------------------------------------------------------------
0 | 1020190720 | 249070 | 16.22 | 4151.19 | 7.679 | 34.377 | C:\Beheer\testfile.dat (2048MiB)
-----------------------------------------------------------------------------------------------------
total: 1020190720 | 249070 | 16.22 | 4151.19 | 7.679 | 34.377
total:
%-ile | Read (ms) | Write (ms) | Total (ms)
----------------------------------------------
min | N/A | 0.020 | 0.020
25th | N/A | 0.427 | 0.427
50th | N/A | 0.456 | 0.456
75th | N/A | 0.572 | 0.572
90th | N/A | 0.817 | 0.817
95th | N/A | 1.364 | 1.364
99th | N/A | 194.867 | 194.867
3-nines | N/A | 239.319 | 239.319
4-nines | N/A | 322.461 | 322.461
5-nines | N/A | 322.650 | 322.650
6-nines | N/A | 322.792 | 322.792
7-nines | N/A | 322.792 | 322.792
8-nines | N/A | 322.792 | 322.792
9-nines | N/A | 322.792 | 322.792
max | N/A | 322.792 | 322.792
- I followed the guide (and our host is set up with two 10Gb ports for iSCSI, both handling two PetaSAN iSCSI paths).
admin
2,930 Posts
December 5, 2019, 1:17 pm
Quote from admin on December 5, 2019, 1:17 pm
The disks are OK for sync writes at 2.5K IOPS; note that some SSDs are able to deliver 100K sync write IOPS.
The 700 write IOPS for a single thread equals about 1.4 ms latency (1 s / 707 ≈ 1.4 ms), which is good; some high-end hardware gives 0.8-1 ms write latency in Ceph. The read latency of 0.35 ms is good.
For the VM test, can you run the diskspd test with cache off:
-Sh
at 1 and 32 threads:
-o1 -t1
-o1 -t32
and also run the 4k UI test at 32 threads / 1 client.
We use 32 threads because this is the queue depth limit per VM in VMware, unless you change it.
Last edited on December 5, 2019, 1:19 pm by admin · #8
Syscon
23 Posts
December 5, 2019, 2:00 pm
Quote from Syscon on December 5, 2019, 2:00 pm
Happy to hear the disks are OK!
The diskspd tests with cache off (1 and 32 Threads):
Command Line: diskSpd.exe -c2G -d60 -r -w100 -t1 -Sh -o1 -b4K -L C:\Beheer\testfile.dat
Input parameters:
timespan: 1
-------------
duration: 60s
warm up time: 5s
cool down time: 0s
measuring latency
random seed: 0
path: 'C:\Beheer\testfile.dat'
think time: 0ms
burst size: 0
software cache disabled
hardware write cache disabled, writethrough on
performing write test
block size: 4096
using random I/O (alignment: 4096)
number of outstanding I/O operations: 1
thread stride size: 0
threads per file: 1
IO priority: normal
System information:
computer name: IOTestPC01
start time: 2019/12/05 13:49:49 UTC
Results for timespan 1:
*******************************************************************************
actual test time: 60.00s
thread count: 1
proc count: 8
CPU | Usage | User | Kernel | Idle
-------------------------------------------
0| 4.27%| 0.34%| 3.93%| 95.73%
1| 0.05%| 0.03%| 0.03%| 99.95%
2| 0.13%| 0.03%| 0.10%| 99.87%
3| 0.05%| 0.05%| 0.00%| 99.95%
4| 0.08%| 0.00%| 0.08%| 99.92%
5| 0.08%| 0.03%| 0.05%| 99.92%
6| 0.10%| 0.05%| 0.05%| 99.90%
7| 0.13%| 0.10%| 0.03%| 99.87%
-------------------------------------------
avg.| 0.61%| 0.08%| 0.53%| 99.39%
Total IO
thread | bytes | I/Os | MiB/s | I/O per s | AvgLat | LatStdDev | file
-----------------------------------------------------------------------------------------------------
0 | 166006784 | 40529 | 2.64 | 675.48 | 1.477 | 1.297 | C:\Beheer\testfile.dat (2048MiB)
-----------------------------------------------------------------------------------------------------
total: 166006784 | 40529 | 2.64 | 675.48 | 1.477 | 1.297
Read IO
thread | bytes | I/Os | MiB/s | I/O per s | AvgLat | LatStdDev | file
-----------------------------------------------------------------------------------------------------
0 | 0 | 0 | 0.00 | 0.00 | 0.000 | N/A | C:\Beheer\testfile.dat (2048MiB)
-----------------------------------------------------------------------------------------------------
total: 0 | 0 | 0.00 | 0.00 | 0.000 | N/A
Write IO
thread | bytes | I/Os | MiB/s | I/O per s | AvgLat | LatStdDev | file
-----------------------------------------------------------------------------------------------------
0 | 166006784 | 40529 | 2.64 | 675.48 | 1.477 | 1.297 | C:\Beheer\testfile.dat (2048MiB)
-----------------------------------------------------------------------------------------------------
total: 166006784 | 40529 | 2.64 | 675.48 | 1.477 | 1.297
total:
%-ile | Read (ms) | Write (ms) | Total (ms)
----------------------------------------------
min | N/A | 1.049 | 1.049
25th | N/A | 1.190 | 1.190
50th | N/A | 1.227 | 1.227
75th | N/A | 1.276 | 1.276
90th | N/A | 1.408 | 1.408
95th | N/A | 1.743 | 1.743
99th | N/A | 9.821 | 9.821
3-nines | N/A | 10.479 | 10.479
4-nines | N/A | 15.947 | 15.947
5-nines | N/A | 25.800 | 25.800
6-nines | N/A | 25.800 | 25.800
7-nines | N/A | 25.800 | 25.800
8-nines | N/A | 25.800 | 25.800
9-nines | N/A | 25.800 | 25.800
max | N/A | 25.800 | 25.800
---------------------------------------------------------------------------------------------------
Command Line: diskSpd.exe -c2G -d60 -r -w100 -t32 -Sh -o1 -b4K -L C:\Beheer\testfile.dat
Input parameters:
timespan: 1
-------------
duration: 60s
warm up time: 5s
cool down time: 0s
measuring latency
random seed: 0
path: 'C:\Beheer\testfile.dat'
think time: 0ms
burst size: 0
software cache disabled
hardware write cache disabled, writethrough on
performing write test
block size: 4096
using random I/O (alignment: 4096)
number of outstanding I/O operations: 1
thread stride size: 0
threads per file: 32
IO priority: normal
System information:
computer name: IOTestPC01
start time: 2019/12/05 13:52:11 UTC
Results for timespan 1:
*******************************************************************************
actual test time: 60.00s
thread count: 32
proc count: 8
CPU | Usage | User | Kernel | Idle
-------------------------------------------
0| 4.51%| 0.60%| 3.91%| 95.49%
1| 4.45%| 0.42%| 4.04%| 95.55%
2| 4.82%| 0.63%| 4.19%| 95.18%
3| 4.71%| 0.63%| 4.09%| 95.29%
4| 4.53%| 0.31%| 4.22%| 95.47%
5| 4.19%| 0.49%| 3.70%| 95.81%
6| 5.03%| 0.49%| 4.53%| 94.97%
7| 4.27%| 0.39%| 3.88%| 95.73%
-------------------------------------------
avg.| 4.56%| 0.49%| 4.07%| 95.44%
Total IO
thread | bytes | I/Os | MiB/s | I/O per s | AvgLat | LatStdDev | file
-----------------------------------------------------------------------------------------------------
0 | 37462016 | 9146 | 0.60 | 152.43 | 6.553 | 5.520 | C:\Beheer\testfile.dat (2048MiB)
1 | 37281792 | 9102 | 0.59 | 151.70 | 6.585 | 5.519 | C:\Beheer\testfile.dat (2048MiB)
2 | 37105664 | 9059 | 0.59 | 150.98 | 6.616 | 5.541 | C:\Beheer\testfile.dat (2048MiB)
3 | 37523456 | 9161 | 0.60 | 152.68 | 6.542 | 5.408 | C:\Beheer\testfile.dat (2048MiB)
4 | 37535744 | 9164 | 0.60 | 152.73 | 6.541 | 5.500 | C:\Beheer\testfile.dat (2048MiB)
5 | 37359616 | 9121 | 0.59 | 152.02 | 6.571 | 5.571 | C:\Beheer\testfile.dat (2048MiB)
6 | 37289984 | 9104 | 0.59 | 151.73 | 6.584 | 5.548 | C:\Beheer\testfile.dat (2048MiB)
7 | 37556224 | 9169 | 0.60 | 152.82 | 6.537 | 5.403 | C:\Beheer\testfile.dat (2048MiB)
8 | 36876288 | 9003 | 0.59 | 150.05 | 6.657 | 5.616 | C:\Beheer\testfile.dat (2048MiB)
9 | 37081088 | 9053 | 0.59 | 150.88 | 6.621 | 5.475 | C:\Beheer\testfile.dat (2048MiB)
10 | 37285888 | 9103 | 0.59 | 151.72 | 6.584 | 5.470 | C:\Beheer\testfile.dat (2048MiB)
11 | 37429248 | 9138 | 0.59 | 152.30 | 6.559 | 5.338 | C:\Beheer\testfile.dat (2048MiB)
12 | 37351424 | 9119 | 0.59 | 151.98 | 6.573 | 5.428 | C:\Beheer\testfile.dat (2048MiB)
13 | 37167104 | 9074 | 0.59 | 151.23 | 6.605 | 5.598 | C:\Beheer\testfile.dat (2048MiB)
14 | 37875712 | 9247 | 0.60 | 154.12 | 6.481 | 5.357 | C:\Beheer\testfile.dat (2048MiB)
15 | 37777408 | 9223 | 0.60 | 153.72 | 6.499 | 5.437 | C:\Beheer\testfile.dat (2048MiB)
16 | 37523456 | 9161 | 0.60 | 152.68 | 6.543 | 5.574 | C:\Beheer\testfile.dat (2048MiB)
17 | 37560320 | 9170 | 0.60 | 152.83 | 6.535 | 5.429 | C:\Beheer\testfile.dat (2048MiB)
18 | 37875712 | 9247 | 0.60 | 154.12 | 6.482 | 5.409 | C:\Beheer\testfile.dat (2048MiB)
19 | 37801984 | 9229 | 0.60 | 153.82 | 6.494 | 5.359 | C:\Beheer\testfile.dat (2048MiB)
20 | 37408768 | 9133 | 0.59 | 152.22 | 6.563 | 5.400 | C:\Beheer\testfile.dat (2048MiB)
21 | 37576704 | 9174 | 0.60 | 152.90 | 6.534 | 5.392 | C:\Beheer\testfile.dat (2048MiB)
22 | 37806080 | 9230 | 0.60 | 153.83 | 6.493 | 5.502 | C:\Beheer\testfile.dat (2048MiB)
23 | 37711872 | 9207 | 0.60 | 153.45 | 6.510 | 5.481 | C:\Beheer\testfile.dat (2048MiB)
24 | 37953536 | 9266 | 0.60 | 154.43 | 6.469 | 5.248 | C:\Beheer\testfile.dat (2048MiB)
25 | 37490688 | 9153 | 0.60 | 152.55 | 6.548 | 5.431 | C:\Beheer\testfile.dat (2048MiB)
26 | 37408768 | 9133 | 0.59 | 152.22 | 6.563 | 5.549 | C:\Beheer\testfile.dat (2048MiB)
27 | 37703680 | 9205 | 0.60 | 153.42 | 6.511 | 5.426 | C:\Beheer\testfile.dat (2048MiB)
28 | 37380096 | 9126 | 0.59 | 152.10 | 6.568 | 5.384 | C:\Beheer\testfile.dat (2048MiB)
29 | 37584896 | 9176 | 0.60 | 152.93 | 6.532 | 5.246 | C:\Beheer\testfile.dat (2048MiB)
30 | 37482496 | 9151 | 0.60 | 152.52 | 6.550 | 5.518 | C:\Beheer\testfile.dat (2048MiB)
31 | 37318656 | 9111 | 0.59 | 151.85 | 6.578 | 5.385 | C:\Beheer\testfile.dat (2048MiB)
-----------------------------------------------------------------------------------------------------
total: 1199546368 | 292858 | 19.07 | 4880.97 | 6.549 | 5.453
Read IO
thread | bytes | I/Os | MiB/s | I/O per s | AvgLat | LatStdDev | file
-----------------------------------------------------------------------------------------------------
0 | 0 | 0 | 0.00 | 0.00 | 0.000 | N/A | C:\Beheer\testfile.dat (2048MiB)
1 | 0 | 0 | 0.00 | 0.00 | 0.000 | N/A | C:\Beheer\testfile.dat (2048MiB)
2 | 0 | 0 | 0.00 | 0.00 | 0.000 | N/A | C:\Beheer\testfile.dat (2048MiB)
3 | 0 | 0 | 0.00 | 0.00 | 0.000 | N/A | C:\Beheer\testfile.dat (2048MiB)
4 | 0 | 0 | 0.00 | 0.00 | 0.000 | N/A | C:\Beheer\testfile.dat (2048MiB)
5 | 0 | 0 | 0.00 | 0.00 | 0.000 | N/A | C:\Beheer\testfile.dat (2048MiB)
6 | 0 | 0 | 0.00 | 0.00 | 0.000 | N/A | C:\Beheer\testfile.dat (2048MiB)
7 | 0 | 0 | 0.00 | 0.00 | 0.000 | N/A | C:\Beheer\testfile.dat (2048MiB)
8 | 0 | 0 | 0.00 | 0.00 | 0.000 | N/A | C:\Beheer\testfile.dat (2048MiB)
9 | 0 | 0 | 0.00 | 0.00 | 0.000 | N/A | C:\Beheer\testfile.dat (2048MiB)
10 | 0 | 0 | 0.00 | 0.00 | 0.000 | N/A | C:\Beheer\testfile.dat (2048MiB)
11 | 0 | 0 | 0.00 | 0.00 | 0.000 | N/A | C:\Beheer\testfile.dat (2048MiB)
12 | 0 | 0 | 0.00 | 0.00 | 0.000 | N/A | C:\Beheer\testfile.dat (2048MiB)
13 | 0 | 0 | 0.00 | 0.00 | 0.000 | N/A | C:\Beheer\testfile.dat (2048MiB)
14 | 0 | 0 | 0.00 | 0.00 | 0.000 | N/A | C:\Beheer\testfile.dat (2048MiB)
15 | 0 | 0 | 0.00 | 0.00 | 0.000 | N/A | C:\Beheer\testfile.dat (2048MiB)
16 | 0 | 0 | 0.00 | 0.00 | 0.000 | N/A | C:\Beheer\testfile.dat (2048MiB)
17 | 0 | 0 | 0.00 | 0.00 | 0.000 | N/A | C:\Beheer\testfile.dat (2048MiB)
18 | 0 | 0 | 0.00 | 0.00 | 0.000 | N/A | C:\Beheer\testfile.dat (2048MiB)
19 | 0 | 0 | 0.00 | 0.00 | 0.000 | N/A | C:\Beheer\testfile.dat (2048MiB)
20 | 0 | 0 | 0.00 | 0.00 | 0.000 | N/A | C:\Beheer\testfile.dat (2048MiB)
21 | 0 | 0 | 0.00 | 0.00 | 0.000 | N/A | C:\Beheer\testfile.dat (2048MiB)
22 | 0 | 0 | 0.00 | 0.00 | 0.000 | N/A | C:\Beheer\testfile.dat (2048MiB)
23 | 0 | 0 | 0.00 | 0.00 | 0.000 | N/A | C:\Beheer\testfile.dat (2048MiB)
24 | 0 | 0 | 0.00 | 0.00 | 0.000 | N/A | C:\Beheer\testfile.dat (2048MiB)
25 | 0 | 0 | 0.00 | 0.00 | 0.000 | N/A | C:\Beheer\testfile.dat (2048MiB)
26 | 0 | 0 | 0.00 | 0.00 | 0.000 | N/A | C:\Beheer\testfile.dat (2048MiB)
27 | 0 | 0 | 0.00 | 0.00 | 0.000 | N/A | C:\Beheer\testfile.dat (2048MiB)
28 | 0 | 0 | 0.00 | 0.00 | 0.000 | N/A | C:\Beheer\testfile.dat (2048MiB)
29 | 0 | 0 | 0.00 | 0.00 | 0.000 | N/A | C:\Beheer\testfile.dat (2048MiB)
30 | 0 | 0 | 0.00 | 0.00 | 0.000 | N/A | C:\Beheer\testfile.dat (2048MiB)
31 | 0 | 0 | 0.00 | 0.00 | 0.000 | N/A | C:\Beheer\testfile.dat (2048MiB)
-----------------------------------------------------------------------------------------------------
total: 0 | 0 | 0.00 | 0.00 | 0.000 | N/A
Write IO
thread | bytes | I/Os | MiB/s | I/O per s | AvgLat | LatStdDev | file
-----------------------------------------------------------------------------------------------------
0 | 37462016 | 9146 | 0.60 | 152.43 | 6.553 | 5.520 | C:\Beheer\testfile.dat (2048MiB)
1 | 37281792 | 9102 | 0.59 | 151.70 | 6.585 | 5.519 | C:\Beheer\testfile.dat (2048MiB)
2 | 37105664 | 9059 | 0.59 | 150.98 | 6.616 | 5.541 | C:\Beheer\testfile.dat (2048MiB)
3 | 37523456 | 9161 | 0.60 | 152.68 | 6.542 | 5.408 | C:\Beheer\testfile.dat (2048MiB)
4 | 37535744 | 9164 | 0.60 | 152.73 | 6.541 | 5.500 | C:\Beheer\testfile.dat (2048MiB)
5 | 37359616 | 9121 | 0.59 | 152.02 | 6.571 | 5.571 | C:\Beheer\testfile.dat (2048MiB)
6 | 37289984 | 9104 | 0.59 | 151.73 | 6.584 | 5.548 | C:\Beheer\testfile.dat (2048MiB)
7 | 37556224 | 9169 | 0.60 | 152.82 | 6.537 | 5.403 | C:\Beheer\testfile.dat (2048MiB)
8 | 36876288 | 9003 | 0.59 | 150.05 | 6.657 | 5.616 | C:\Beheer\testfile.dat (2048MiB)
9 | 37081088 | 9053 | 0.59 | 150.88 | 6.621 | 5.475 | C:\Beheer\testfile.dat (2048MiB)
10 | 37285888 | 9103 | 0.59 | 151.72 | 6.584 | 5.470 | C:\Beheer\testfile.dat (2048MiB)
11 | 37429248 | 9138 | 0.59 | 152.30 | 6.559 | 5.338 | C:\Beheer\testfile.dat (2048MiB)
12 | 37351424 | 9119 | 0.59 | 151.98 | 6.573 | 5.428 | C:\Beheer\testfile.dat (2048MiB)
13 | 37167104 | 9074 | 0.59 | 151.23 | 6.605 | 5.598 | C:\Beheer\testfile.dat (2048MiB)
14 | 37875712 | 9247 | 0.60 | 154.12 | 6.481 | 5.357 | C:\Beheer\testfile.dat (2048MiB)
15 | 37777408 | 9223 | 0.60 | 153.72 | 6.499 | 5.437 | C:\Beheer\testfile.dat (2048MiB)
16 | 37523456 | 9161 | 0.60 | 152.68 | 6.543 | 5.574 | C:\Beheer\testfile.dat (2048MiB)
17 | 37560320 | 9170 | 0.60 | 152.83 | 6.535 | 5.429 | C:\Beheer\testfile.dat (2048MiB)
18 | 37875712 | 9247 | 0.60 | 154.12 | 6.482 | 5.409 | C:\Beheer\testfile.dat (2048MiB)
19 | 37801984 | 9229 | 0.60 | 153.82 | 6.494 | 5.359 | C:\Beheer\testfile.dat (2048MiB)
20 | 37408768 | 9133 | 0.59 | 152.22 | 6.563 | 5.400 | C:\Beheer\testfile.dat (2048MiB)
21 | 37576704 | 9174 | 0.60 | 152.90 | 6.534 | 5.392 | C:\Beheer\testfile.dat (2048MiB)
22 | 37806080 | 9230 | 0.60 | 153.83 | 6.493 | 5.502 | C:\Beheer\testfile.dat (2048MiB)
23 | 37711872 | 9207 | 0.60 | 153.45 | 6.510 | 5.481 | C:\Beheer\testfile.dat (2048MiB)
24 | 37953536 | 9266 | 0.60 | 154.43 | 6.469 | 5.248 | C:\Beheer\testfile.dat (2048MiB)
25 | 37490688 | 9153 | 0.60 | 152.55 | 6.548 | 5.431 | C:\Beheer\testfile.dat (2048MiB)
26 | 37408768 | 9133 | 0.59 | 152.22 | 6.563 | 5.549 | C:\Beheer\testfile.dat (2048MiB)
27 | 37703680 | 9205 | 0.60 | 153.42 | 6.511 | 5.426 | C:\Beheer\testfile.dat (2048MiB)
28 | 37380096 | 9126 | 0.59 | 152.10 | 6.568 | 5.384 | C:\Beheer\testfile.dat (2048MiB)
29 | 37584896 | 9176 | 0.60 | 152.93 | 6.532 | 5.246 | C:\Beheer\testfile.dat (2048MiB)
30 | 37482496 | 9151 | 0.60 | 152.52 | 6.550 | 5.518 | C:\Beheer\testfile.dat (2048MiB)
31 | 37318656 | 9111 | 0.59 | 151.85 | 6.578 | 5.385 | C:\Beheer\testfile.dat (2048MiB)
-----------------------------------------------------------------------------------------------------
total: 1199546368 | 292858 | 19.07 | 4880.97 | 6.549 | 5.453
total:
%-ile | Read (ms) | Write (ms) | Total (ms)
----------------------------------------------
min | N/A | 1.124 | 1.124
25th | N/A | 2.903 | 2.903
50th | N/A | 4.403 | 4.403
75th | N/A | 10.784 | 10.784
90th | N/A | 12.787 | 12.787
95th | N/A | 13.543 | 13.543
99th | N/A | 15.070 | 15.070
3-nines | N/A | 31.530 | 31.530
4-nines | N/A | 173.368 | 173.368
5-nines | N/A | 198.035 | 198.035
6-nines | N/A | 199.792 | 199.792
7-nines | N/A | 199.792 | 199.792
8-nines | N/A | 199.792 | 199.792
9-nines | N/A | 199.792 | 199.792
max | N/A | 199.792 | 199.792
The 4k UI test at 32 threads / 1 client:
Write: 5206 IOPS, Read: 40379 IOPS
admin
2,930 Posts
December 5, 2019, 2:24 pm
Quote from admin on December 5, 2019, 2:24 pm
The results seem OK for your system:
4k rand write rados level 1 thread = 707 iops
4k rand write from VM 1 thread = 675 iops
4k rand write rados level 32 threads = 5206 iops
4k rand write from VM 32 threads = 4880 iops
These correlate well.
Note that in Ceph, single-client-thread performance is not high; in return, the system scales when you have a lot of client concurrency (many VMs, applications, transactions...).
If you have many datastores and VMs, the total IOPS will be about the same as the maximum from the UI test. You can also bump the VMware queue depth limit above 32 for a single VM.
In general with Ceph you can get around 1 ms latency for writes and 0.3 ms for reads. So a (single) file copy in Windows, which uses a 256k block size, will not exceed roughly 256 MB/s (1000 writes/s x 256 KB).
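If you do want to raise that 32 queue depth limit, this is roughly where it lives on ESXi (a sketch; the device identifier is a placeholder and the parameter names should be double-checked against the VMware documentation for your ESXi version):

# raise the LUN queue depth of the software iSCSI adapter (needs a reboot)
esxcli system module parameters set -m iscsi_vmk -p iscsivmk_LunQDepth=64

# raise the per-device outstanding requests limit (DSNRO) to match
esxcli storage core device set -d naa.xxxxxxxxxxxxxxxx -O 64

Raising these only helps when there is enough concurrent I/O from the VMs to actually fill the deeper queue.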
Last edited on December 5, 2019, 2:29 pm by admin · #10
As for the main issue of the datacenters: you need to decide whether you want 1 PetaSAN cluster stretched across the 2 datacenters with synchronous io between them, or 2 separate clusters with asynchronous replication of disks to be used during failover/disaster recovery.
For a 1-cluster solution: you should consider the connection latency, as it could be a real bottleneck for iops- and latency-sensitive vms since all your ios are synchronous. Also, Ceph and Consul require 3 monitor/management servers to avoid split brain, so it is a bit challenging where to place the 3rd server: you could either set it up bare-metal and active in 1 datacenter with a standby in the other, using some replication such as DRBD to achieve this, or set this server up as a vm and let HA/failover be handled by the hypervisor, but make sure this vm is stored outside the PetaSAN storage.
For a 2-cluster solution: you will need to set up remote/async replication, which is supported in 2.3.0. The advantage is that you will get much better performance (iops/latency), but in case of failover your vms may not have the latest data.
So both solutions have pros/cons you will have to evaluate.
For compression: it has a small effect on performance, so you should test it yourself against your own workload.
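If you want to experiment, compression in Ceph is a per-pool setting, so you can switch it on for a test pool and compare before/after; a minimal sketch (the pool name is only an example):
ceph osd pool set test-pool compression_mode aggressive
ceph osd pool set test-pool compression_algorithm snappy
ceph df detail
The last command shows the used versus under-compression bytes once data has been written; setting compression_mode back to none disables it again.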
Do not forget we provide professional support if you need this. Good luck.
Syscon
23 Posts
Quote from Syscon on October 3, 2019, 1:43 pm Hi Admin,
Thanks for your quick reply!
We would certainly like to have professional support (this is one of the reasons for choosing PetaSAN).
The SSDs we are looking at are the Samsung 860 PROs. They run fast in our current storage systems, but looking into whether they work well with Ceph I read mixed experiences… What is your opinion/experience? (I have not yet been able to test them from within PetaSAN.)
Just to clarify: our idea was to use 30 disks of 2TB each (assuming replica x2 would be sufficient), giving 30TB of effective storage.
Then spread these 30 disks over the 6 NODES (resulting in 5 disks per NODE).
Also, we want all 6 nodes (3 in each datacenter) to be storage AND monitor/management servers at the same time (sorry for not being clear in my first post). Could a split-brain situation be avoided in this scenario (with the 1-cluster solution)? I guess the connection between the two datacenters will also be a bottleneck in this scenario (regarding latency)? Or is there a way to steer the traffic (what about the Crush Map)?
Our wish is to have one cluster, usable in both datacenters, and, as a backup, replicate to another cluster. As I understand it, replica x3 is much advised (is a factor of 2 simply too risky?)
Thank you!
admin
2,930 Posts
Quote from admin on October 3, 2019, 7:40 pm The 860 PRO is probably not a good choice with Ceph. I would recommend you test it via the console menu in PetaSAN and look at the sync write iops. Other things to look at are its DWPD lifetime rating and how it handles power loss.
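If you also want a standalone check outside PetaSAN, the usual sync write test looks something like this (a sketch only; /dev/sdX is a placeholder and the test destroys data on that device):
fio --filename=/dev/sdX --direct=1 --sync=1 --rw=write --bs=4k --numjobs=1 --iodepth=1 --runtime=60 --time_based --name=sync-write-test
Consumer SSDs often collapse to a few hundred iops on this test, while good enterprise SSDs sustain tens of thousands.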
As I pointed out earlier, the issue with a single cluster running across 2 dispersed locations is performance degradation due to latency. This could be OK for large-block-size workloads such as backups and streaming, but could be a killer for high-iops, latency-sensitive apps like virtualization and databases.
Sure, the crush map is flexible and will place your data wherever you tell it to, but if you need to store your data synchronously across both datacenters, it will not help with the latency issue.
Split brain is not something you would worry too much about; if you go with the 1-cluster approach then, as mentioned earlier, you would need to fail over one of your monitors using an external HA mechanism like DRBD or hypervisor HA. The main concern is the increased network latency between the remote sites.
Yes, 3 replicas is now the standard. For a 2-datacenter setup you may go for 4 replicas and use crush to place 2 in each datacenter (see the sketch below). Another method is to use EC with 3+3 or 4+4, which needs 6 or 8 nodes; it gives you lower storage overhead but is not recommended for latency-sensitive workloads.
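As a rough sketch of the 4-replica, 2-per-datacenter placement (assuming you define datacenter buckets in the crush map; the rule name, id and pool name below are only examples):

rule replicated_2dc {
    id 5
    type replicated
    min_size 2
    max_size 4
    step take default
    step choose firstn 2 type datacenter
    step chooseleaf firstn 2 type host
    step emit
}

ceph osd pool set your-pool size 4
ceph osd pool set your-pool crush_rule replicated_2dc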
As indicated earlier, you could consider a 2-cluster setup and use async replication; this will solve the above issues but will not guarantee your data is synced to the latest state.
Syscon
23 Posts
Quote from Syscon on December 4, 2019, 1:27 pm Hi Admin,
We decided to use 3 nodes in each of the two datacenters, as separate clusters. For now I have built one cluster (of 3 nodes) and started testing.
The final hardware per NODE looks like this (datacenter-grade SSD disks and a controller without cache, as you advised):
- Dell Power Edge R620 (R620 chassis + mainboard v2)
- Server chassis has space for 10 2.5" SAS/SSD disks
- CPU: 2x 1.70GHz/Ten Core/QPi 7.20/Cache 25 MB/TDP 70W/64-bit XEON E5-2650L v2
- 128 GB memory
- Dell PERC H310 Mini Mono
- 1x KST-SEDC500M/1920G SSD for OS
- 3x KST-SEDC500M/1920G SSDs as OSDs
- 2x 10Gb + 2x 1Gb network (iSCSI 2 + Heartbeat 1 and iSCSI 1 + Heartbeat 2 combined on the two 10Gb ports; Management has its own 1Gb adapter)
Benchmark tests look OK:
Benchmark                  | Read      | Write
4K IOPS, 64 threads        | 50128     | 9610
4K IOPS, 128 threads       | 47428     | 13323
4M throughput, 128 threads | 1186 MB/s | 1164 MB/s
CPU, OSD and network utilization hardly reaches 25% max.
We are using 4 iSCSI paths.
But as soon as we connect an ESXi server to it and run some tests, we encounter much lower speeds.
What we tested:
- Copy a Virtual Machine from the local ESXi storage to the PetaSAN storage
- Running speed tests (read/write and read+write) within the VM
In all these cases the read speed reaches a maximum of 300 MB/s and the write speed 90 MB/s.
When I connect and test a NAS4Free storage target for comparison, read speed reaches 720 MB/s and write speed 810 MB/s (!).
What could be the problem? Thanks in advance for your response!
admin
2,930 Posts
Quote from admin on December 4, 2019, 2:19 pm 1. Can you do a 4k iops test from the UI with 1 thread / 1 client, so we can measure latency?
2. Can you measure the 4k sync write speed of the KST DC500 using the blue console menu? The disk needs to be a raw unused device, so use an extra drive or delete an OSD and use its disk for the test (I assume you are still testing).
3. Can you test the speed from within the VM at 4k random writes, using fio (Linux guest) or diskspd (Windows guest)? Example invocations are sketched below.
4. Did you apply the recommendations in our VMware guide?
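For point 3, illustrative invocations (file path, size and duration are placeholders you can adjust):
Linux guest: fio --name=randwrite-test --filename=/root/test.dat --size=2G --rw=randwrite --bs=4k --direct=1 --iodepth=1 --numjobs=1 --runtime=60 --time_based
Windows guest: diskspd.exe -c2G -d60 -r -w100 -t1 -o1 -Sh -b4K -L C:\test\testfile.dat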
Syscon
23 Posts
Quote from Syscon on December 5, 2019, 11:44 am Hi admin, thanks for your quick reply!
- The result for IOPS (1 thread / 1 client); Write: 707, Read: 2800 (do you need more results of this test?)
- One unused (but same) disk test from the Blue console (4K, 1 thread) -> sequential Read test: 16169 IOPS, Write test: 15630 IOPS, Write SYNC: 2548 IOPS. Random Read test: 15894 IOPS, Write test: 15849 IOPS
- Command Line: diskSpd.exe -c2G -d60 -r -w100 -t1 -o32 -b4K -L C:\Beheer\testfile.dat
Input parameters:
timespan: 1
-------------
duration: 60s
warm up time: 5s
cool down time: 0s
measuring latency
random seed: 0
path: 'C:\Beheer\testfile.dat'
think time: 0ms
burst size: 0
using software cache
using hardware write cache, writethrough off
performing write test
block size: 4096
using random I/O (alignment: 4096)
number of outstanding I/O operations: 32
thread stride size: 0
threads per file: 1
using I/O Completion Ports
IO priority: normal
System information:
computer name: IOTestPC01
start time: 2019/12/05 11:42:34 UTC
Results for timespan 1:
*******************************************************************************
actual test time: 60.00s
thread count: 1
proc count: 8
CPU | Usage | User | Kernel | Idle
-------------------------------------------
0| 7.08%| 0.47%| 6.61%| 92.92%
1| 1.48%| 0.03%| 1.46%| 98.52%
2| 0.03%| 0.03%| 0.00%| 99.97%
3| 1.67%| 0.70%| 0.96%| 98.33%
4| 0.86%| 0.10%| 0.76%| 99.14%
5| 0.83%| 0.26%| 0.57%| 99.17%
6| 0.21%| 0.05%| 0.16%| 99.79%
7| 1.07%| 0.18%| 0.89%| 98.93%
-------------------------------------------
avg.| 1.65%| 0.23%| 1.43%| 98.35%
Total IO
thread | bytes | I/Os | MiB/s | I/O per s | AvgLat | LatStdDev | file
-----------------------------------------------------------------------------------------------------
0 | 1020190720 | 249070 | 16.22 | 4151.19 | 7.679 | 34.377 | C:\Beheer\testfile.dat (2048MiB)
-----------------------------------------------------------------------------------------------------
total: 1020190720 | 249070 | 16.22 | 4151.19 | 7.679 | 34.377
Read IO
thread | bytes | I/Os | MiB/s | I/O per s | AvgLat | LatStdDev | file
-----------------------------------------------------------------------------------------------------
0 | 0 | 0 | 0.00 | 0.00 | 0.000 | N/A | C:\Beheer\testfile.dat (2048MiB)
-----------------------------------------------------------------------------------------------------
total: 0 | 0 | 0.00 | 0.00 | 0.000 | N/A
Write IO
thread | bytes | I/Os | MiB/s | I/O per s | AvgLat | LatStdDev | file
-----------------------------------------------------------------------------------------------------
0 | 1020190720 | 249070 | 16.22 | 4151.19 | 7.679 | 34.377 | C:\Beheer\testfile.dat (2048MiB)
-----------------------------------------------------------------------------------------------------
total: 1020190720 | 249070 | 16.22 | 4151.19 | 7.679 | 34.377
total:
%-ile | Read (ms) | Write (ms) | Total (ms)
----------------------------------------------
min | N/A | 0.020 | 0.020
25th | N/A | 0.427 | 0.427
50th | N/A | 0.456 | 0.456
75th | N/A | 0.572 | 0.572
90th | N/A | 0.817 | 0.817
95th | N/A | 1.364 | 1.364
99th | N/A | 194.867 | 194.867
3-nines | N/A | 239.319 | 239.319
4-nines | N/A | 322.461 | 322.461
5-nines | N/A | 322.650 | 322.650
6-nines | N/A | 322.792 | 322.792
7-nines | N/A | 322.792 | 322.792
8-nines | N/A | 322.792 | 322.792
9-nines | N/A | 322.792 | 322.792
max | N/A | 322.792 | 322.792
- I followed the guide (and our host is set up with two 10Gb ports for iSCSI, each handling two PetaSAN iSCSI paths).
admin
2,930 Posts
Quote from admin on December 5, 2019, 1:17 pm The disks are OK for sync writes at 2.5K iops; note that some SSDs can deliver 100K sync write iops.
The 700 write iops at a single thread works out to roughly 1/700 s ≈ 1.4 ms per write, which is good; some high-end hardware gives 0.8-1 ms write latency in Ceph. The read latency of 0.35 ms is also good.
For the VM test, can you run the diskspd test with cache off:
-Sh
at 1 and 32 threads:
-o1 -t1
-o1 -t32
and also run the 4k ui test at 32 threads / 1 client. We run 32 threads as this is the queue depth limit per vm in VMware, unless you change it.
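Putting those options together with your earlier command line gives, for example:
diskSpd.exe -c2G -d60 -r -w100 -t1 -Sh -o1 -b4K -L C:\Beheer\testfile.dat
diskSpd.exe -c2G -d60 -r -w100 -t32 -Sh -o1 -b4K -L C:\Beheer\testfile.dat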
Syscon
23 Posts
Quote from Syscon on December 5, 2019, 2:00 pm
Happy to hear the disks are OK!
The diskspd tests with cache off (1 and 32 Threads):
Command Line: diskSpd.exe -c2G -d60 -r -w100 -t1 -Sh -o1 -b4K -L C:\Beheer\testfile.dat
Input parameters:
timespan: 1
-------------
duration: 60s
warm up time: 5s
cool down time: 0s
measuring latency
random seed: 0
path: 'C:\Beheer\testfile.dat'
think time: 0ms
burst size: 0
software cache disabled
hardware write cache disabled, writethrough on
performing write test
block size: 4096
using random I/O (alignment: 4096)
number of outstanding I/O operations: 1
thread stride size: 0
threads per file: 1
IO priority: normal
System information:
computer name: IOTestPC01
start time: 2019/12/05 13:49:49 UTC
Results for timespan 1:
*******************************************************************************
actual test time: 60.00s
thread count: 1
proc count: 8
CPU | Usage | User | Kernel | Idle
-------------------------------------------
0| 4.27%| 0.34%| 3.93%| 95.73%
1| 0.05%| 0.03%| 0.03%| 99.95%
2| 0.13%| 0.03%| 0.10%| 99.87%
3| 0.05%| 0.05%| 0.00%| 99.95%
4| 0.08%| 0.00%| 0.08%| 99.92%
5| 0.08%| 0.03%| 0.05%| 99.92%
6| 0.10%| 0.05%| 0.05%| 99.90%
7| 0.13%| 0.10%| 0.03%| 99.87%
-------------------------------------------
avg.| 0.61%| 0.08%| 0.53%| 99.39%
Total IO
thread | bytes | I/Os | MiB/s | I/O per s | AvgLat | LatStdDev | file
-----------------------------------------------------------------------------------------------------
0 | 166006784 | 40529 | 2.64 | 675.48 | 1.477 | 1.297 | C:\Beheer\testfile.dat (2048MiB)
-----------------------------------------------------------------------------------------------------
total: 166006784 | 40529 | 2.64 | 675.48 | 1.477 | 1.297
Read IO
thread | bytes | I/Os | MiB/s | I/O per s | AvgLat | LatStdDev | file
-----------------------------------------------------------------------------------------------------
0 | 0 | 0 | 0.00 | 0.00 | 0.000 | N/A | C:\Beheer\testfile.dat (2048MiB)
-----------------------------------------------------------------------------------------------------
total: 0 | 0 | 0.00 | 0.00 | 0.000 | N/A
Write IO
thread | bytes | I/Os | MiB/s | I/O per s | AvgLat | LatStdDev | file
-----------------------------------------------------------------------------------------------------
0 | 166006784 | 40529 | 2.64 | 675.48 | 1.477 | 1.297 | C:\Beheer\testfile.dat (2048MiB)
-----------------------------------------------------------------------------------------------------
total: 166006784 | 40529 | 2.64 | 675.48 | 1.477 | 1.297
total:
%-ile | Read (ms) | Write (ms) | Total (ms)
----------------------------------------------
min | N/A | 1.049 | 1.049
25th | N/A | 1.190 | 1.190
50th | N/A | 1.227 | 1.227
75th | N/A | 1.276 | 1.276
90th | N/A | 1.408 | 1.408
95th | N/A | 1.743 | 1.743
99th | N/A | 9.821 | 9.821
3-nines | N/A | 10.479 | 10.479
4-nines | N/A | 15.947 | 15.947
5-nines | N/A | 25.800 | 25.800
6-nines | N/A | 25.800 | 25.800
7-nines | N/A | 25.800 | 25.800
8-nines | N/A | 25.800 | 25.800
9-nines | N/A | 25.800 | 25.800
max | N/A | 25.800 | 25.800
---------------------------------------------------------------------------------------------------
Command Line: diskSpd.exe -c2G -d60 -r -w100 -t32 -Sh -o1 -b4K -L C:\Beheer\testfile.dat
Input parameters:
timespan: 1
-------------
duration: 60s
warm up time: 5s
cool down time: 0s
measuring latency
random seed: 0
path: 'C:\Beheer\testfile.dat'
think time: 0ms
burst size: 0
software cache disabled
hardware write cache disabled, writethrough on
performing write test
block size: 4096
using random I/O (alignment: 4096)
number of outstanding I/O operations: 1
thread stride size: 0
threads per file: 32
IO priority: normal
System information:
computer name: IOTestPC01
start time: 2019/12/05 13:52:11 UTC
Results for timespan 1:
*******************************************************************************
actual test time: 60.00s
thread count: 32
proc count: 8
CPU | Usage | User | Kernel | Idle
-------------------------------------------
0| 4.51%| 0.60%| 3.91%| 95.49%
1| 4.45%| 0.42%| 4.04%| 95.55%
2| 4.82%| 0.63%| 4.19%| 95.18%
3| 4.71%| 0.63%| 4.09%| 95.29%
4| 4.53%| 0.31%| 4.22%| 95.47%
5| 4.19%| 0.49%| 3.70%| 95.81%
6| 5.03%| 0.49%| 4.53%| 94.97%
7| 4.27%| 0.39%| 3.88%| 95.73%
-------------------------------------------
avg.| 4.56%| 0.49%| 4.07%| 95.44%
Total IO
thread | bytes | I/Os | MiB/s | I/O per s | AvgLat | LatStdDev | file
-----------------------------------------------------------------------------------------------------
0 | 37462016 | 9146 | 0.60 | 152.43 | 6.553 | 5.520 | C:\Beheer\testfile.dat (2048MiB)
1 | 37281792 | 9102 | 0.59 | 151.70 | 6.585 | 5.519 | C:\Beheer\testfile.dat (2048MiB)
2 | 37105664 | 9059 | 0.59 | 150.98 | 6.616 | 5.541 | C:\Beheer\testfile.dat (2048MiB)
3 | 37523456 | 9161 | 0.60 | 152.68 | 6.542 | 5.408 | C:\Beheer\testfile.dat (2048MiB)
4 | 37535744 | 9164 | 0.60 | 152.73 | 6.541 | 5.500 | C:\Beheer\testfile.dat (2048MiB)
5 | 37359616 | 9121 | 0.59 | 152.02 | 6.571 | 5.571 | C:\Beheer\testfile.dat (2048MiB)
6 | 37289984 | 9104 | 0.59 | 151.73 | 6.584 | 5.548 | C:\Beheer\testfile.dat (2048MiB)
7 | 37556224 | 9169 | 0.60 | 152.82 | 6.537 | 5.403 | C:\Beheer\testfile.dat (2048MiB)
8 | 36876288 | 9003 | 0.59 | 150.05 | 6.657 | 5.616 | C:\Beheer\testfile.dat (2048MiB)
9 | 37081088 | 9053 | 0.59 | 150.88 | 6.621 | 5.475 | C:\Beheer\testfile.dat (2048MiB)
10 | 37285888 | 9103 | 0.59 | 151.72 | 6.584 | 5.470 | C:\Beheer\testfile.dat (2048MiB)
11 | 37429248 | 9138 | 0.59 | 152.30 | 6.559 | 5.338 | C:\Beheer\testfile.dat (2048MiB)
12 | 37351424 | 9119 | 0.59 | 151.98 | 6.573 | 5.428 | C:\Beheer\testfile.dat (2048MiB)
13 | 37167104 | 9074 | 0.59 | 151.23 | 6.605 | 5.598 | C:\Beheer\testfile.dat (2048MiB)
14 | 37875712 | 9247 | 0.60 | 154.12 | 6.481 | 5.357 | C:\Beheer\testfile.dat (2048MiB)
15 | 37777408 | 9223 | 0.60 | 153.72 | 6.499 | 5.437 | C:\Beheer\testfile.dat (2048MiB)
16 | 37523456 | 9161 | 0.60 | 152.68 | 6.543 | 5.574 | C:\Beheer\testfile.dat (2048MiB)
17 | 37560320 | 9170 | 0.60 | 152.83 | 6.535 | 5.429 | C:\Beheer\testfile.dat (2048MiB)
18 | 37875712 | 9247 | 0.60 | 154.12 | 6.482 | 5.409 | C:\Beheer\testfile.dat (2048MiB)
19 | 37801984 | 9229 | 0.60 | 153.82 | 6.494 | 5.359 | C:\Beheer\testfile.dat (2048MiB)
20 | 37408768 | 9133 | 0.59 | 152.22 | 6.563 | 5.400 | C:\Beheer\testfile.dat (2048MiB)
21 | 37576704 | 9174 | 0.60 | 152.90 | 6.534 | 5.392 | C:\Beheer\testfile.dat (2048MiB)
22 | 37806080 | 9230 | 0.60 | 153.83 | 6.493 | 5.502 | C:\Beheer\testfile.dat (2048MiB)
23 | 37711872 | 9207 | 0.60 | 153.45 | 6.510 | 5.481 | C:\Beheer\testfile.dat (2048MiB)
24 | 37953536 | 9266 | 0.60 | 154.43 | 6.469 | 5.248 | C:\Beheer\testfile.dat (2048MiB)
25 | 37490688 | 9153 | 0.60 | 152.55 | 6.548 | 5.431 | C:\Beheer\testfile.dat (2048MiB)
26 | 37408768 | 9133 | 0.59 | 152.22 | 6.563 | 5.549 | C:\Beheer\testfile.dat (2048MiB)
27 | 37703680 | 9205 | 0.60 | 153.42 | 6.511 | 5.426 | C:\Beheer\testfile.dat (2048MiB)
28 | 37380096 | 9126 | 0.59 | 152.10 | 6.568 | 5.384 | C:\Beheer\testfile.dat (2048MiB)
29 | 37584896 | 9176 | 0.60 | 152.93 | 6.532 | 5.246 | C:\Beheer\testfile.dat (2048MiB)
30 | 37482496 | 9151 | 0.60 | 152.52 | 6.550 | 5.518 | C:\Beheer\testfile.dat (2048MiB)
31 | 37318656 | 9111 | 0.59 | 151.85 | 6.578 | 5.385 | C:\Beheer\testfile.dat (2048MiB)
-----------------------------------------------------------------------------------------------------
total: 1199546368 | 292858 | 19.07 | 4880.97 | 6.549 | 5.453
Read IO
thread | bytes | I/Os | MiB/s | I/O per s | AvgLat | LatStdDev | file
-----------------------------------------------------------------------------------------------------
0 | 0 | 0 | 0.00 | 0.00 | 0.000 | N/A | C:\Beheer\testfile.dat (2048MiB)
1 | 0 | 0 | 0.00 | 0.00 | 0.000 | N/A | C:\Beheer\testfile.dat (2048MiB)
2 | 0 | 0 | 0.00 | 0.00 | 0.000 | N/A | C:\Beheer\testfile.dat (2048MiB)
3 | 0 | 0 | 0.00 | 0.00 | 0.000 | N/A | C:\Beheer\testfile.dat (2048MiB)
4 | 0 | 0 | 0.00 | 0.00 | 0.000 | N/A | C:\Beheer\testfile.dat (2048MiB)
5 | 0 | 0 | 0.00 | 0.00 | 0.000 | N/A | C:\Beheer\testfile.dat (2048MiB)
6 | 0 | 0 | 0.00 | 0.00 | 0.000 | N/A | C:\Beheer\testfile.dat (2048MiB)
7 | 0 | 0 | 0.00 | 0.00 | 0.000 | N/A | C:\Beheer\testfile.dat (2048MiB)
8 | 0 | 0 | 0.00 | 0.00 | 0.000 | N/A | C:\Beheer\testfile.dat (2048MiB)
9 | 0 | 0 | 0.00 | 0.00 | 0.000 | N/A | C:\Beheer\testfile.dat (2048MiB)
10 | 0 | 0 | 0.00 | 0.00 | 0.000 | N/A | C:\Beheer\testfile.dat (2048MiB)
11 | 0 | 0 | 0.00 | 0.00 | 0.000 | N/A | C:\Beheer\testfile.dat (2048MiB)
12 | 0 | 0 | 0.00 | 0.00 | 0.000 | N/A | C:\Beheer\testfile.dat (2048MiB)
13 | 0 | 0 | 0.00 | 0.00 | 0.000 | N/A | C:\Beheer\testfile.dat (2048MiB)
14 | 0 | 0 | 0.00 | 0.00 | 0.000 | N/A | C:\Beheer\testfile.dat (2048MiB)
15 | 0 | 0 | 0.00 | 0.00 | 0.000 | N/A | C:\Beheer\testfile.dat (2048MiB)
16 | 0 | 0 | 0.00 | 0.00 | 0.000 | N/A | C:\Beheer\testfile.dat (2048MiB)
17 | 0 | 0 | 0.00 | 0.00 | 0.000 | N/A | C:\Beheer\testfile.dat (2048MiB)
18 | 0 | 0 | 0.00 | 0.00 | 0.000 | N/A | C:\Beheer\testfile.dat (2048MiB)
19 | 0 | 0 | 0.00 | 0.00 | 0.000 | N/A | C:\Beheer\testfile.dat (2048MiB)
20 | 0 | 0 | 0.00 | 0.00 | 0.000 | N/A | C:\Beheer\testfile.dat (2048MiB)
21 | 0 | 0 | 0.00 | 0.00 | 0.000 | N/A | C:\Beheer\testfile.dat (2048MiB)
22 | 0 | 0 | 0.00 | 0.00 | 0.000 | N/A | C:\Beheer\testfile.dat (2048MiB)
23 | 0 | 0 | 0.00 | 0.00 | 0.000 | N/A | C:\Beheer\testfile.dat (2048MiB)
24 | 0 | 0 | 0.00 | 0.00 | 0.000 | N/A | C:\Beheer\testfile.dat (2048MiB)
25 | 0 | 0 | 0.00 | 0.00 | 0.000 | N/A | C:\Beheer\testfile.dat (2048MiB)
26 | 0 | 0 | 0.00 | 0.00 | 0.000 | N/A | C:\Beheer\testfile.dat (2048MiB)
27 | 0 | 0 | 0.00 | 0.00 | 0.000 | N/A | C:\Beheer\testfile.dat (2048MiB)
28 | 0 | 0 | 0.00 | 0.00 | 0.000 | N/A | C:\Beheer\testfile.dat (2048MiB)
29 | 0 | 0 | 0.00 | 0.00 | 0.000 | N/A | C:\Beheer\testfile.dat (2048MiB)
30 | 0 | 0 | 0.00 | 0.00 | 0.000 | N/A | C:\Beheer\testfile.dat (2048MiB)
31 | 0 | 0 | 0.00 | 0.00 | 0.000 | N/A | C:\Beheer\testfile.dat (2048MiB)
-----------------------------------------------------------------------------------------------------
total: 0 | 0 | 0.00 | 0.00 | 0.000 | N/A
Write IO
thread | bytes | I/Os | MiB/s | I/O per s | AvgLat | LatStdDev | file
-----------------------------------------------------------------------------------------------------
0 | 37462016 | 9146 | 0.60 | 152.43 | 6.553 | 5.520 | C:\Beheer\testfile.dat (2048MiB)
1 | 37281792 | 9102 | 0.59 | 151.70 | 6.585 | 5.519 | C:\Beheer\testfile.dat (2048MiB)
2 | 37105664 | 9059 | 0.59 | 150.98 | 6.616 | 5.541 | C:\Beheer\testfile.dat (2048MiB)
3 | 37523456 | 9161 | 0.60 | 152.68 | 6.542 | 5.408 | C:\Beheer\testfile.dat (2048MiB)
4 | 37535744 | 9164 | 0.60 | 152.73 | 6.541 | 5.500 | C:\Beheer\testfile.dat (2048MiB)
5 | 37359616 | 9121 | 0.59 | 152.02 | 6.571 | 5.571 | C:\Beheer\testfile.dat (2048MiB)
6 | 37289984 | 9104 | 0.59 | 151.73 | 6.584 | 5.548 | C:\Beheer\testfile.dat (2048MiB)
7 | 37556224 | 9169 | 0.60 | 152.82 | 6.537 | 5.403 | C:\Beheer\testfile.dat (2048MiB)
8 | 36876288 | 9003 | 0.59 | 150.05 | 6.657 | 5.616 | C:\Beheer\testfile.dat (2048MiB)
9 | 37081088 | 9053 | 0.59 | 150.88 | 6.621 | 5.475 | C:\Beheer\testfile.dat (2048MiB)
10 | 37285888 | 9103 | 0.59 | 151.72 | 6.584 | 5.470 | C:\Beheer\testfile.dat (2048MiB)
11 | 37429248 | 9138 | 0.59 | 152.30 | 6.559 | 5.338 | C:\Beheer\testfile.dat (2048MiB)
12 | 37351424 | 9119 | 0.59 | 151.98 | 6.573 | 5.428 | C:\Beheer\testfile.dat (2048MiB)
13 | 37167104 | 9074 | 0.59 | 151.23 | 6.605 | 5.598 | C:\Beheer\testfile.dat (2048MiB)
14 | 37875712 | 9247 | 0.60 | 154.12 | 6.481 | 5.357 | C:\Beheer\testfile.dat (2048MiB)
15 | 37777408 | 9223 | 0.60 | 153.72 | 6.499 | 5.437 | C:\Beheer\testfile.dat (2048MiB)
16 | 37523456 | 9161 | 0.60 | 152.68 | 6.543 | 5.574 | C:\Beheer\testfile.dat (2048MiB)
17 | 37560320 | 9170 | 0.60 | 152.83 | 6.535 | 5.429 | C:\Beheer\testfile.dat (2048MiB)
18 | 37875712 | 9247 | 0.60 | 154.12 | 6.482 | 5.409 | C:\Beheer\testfile.dat (2048MiB)
19 | 37801984 | 9229 | 0.60 | 153.82 | 6.494 | 5.359 | C:\Beheer\testfile.dat (2048MiB)
20 | 37408768 | 9133 | 0.59 | 152.22 | 6.563 | 5.400 | C:\Beheer\testfile.dat (2048MiB)
21 | 37576704 | 9174 | 0.60 | 152.90 | 6.534 | 5.392 | C:\Beheer\testfile.dat (2048MiB)
22 | 37806080 | 9230 | 0.60 | 153.83 | 6.493 | 5.502 | C:\Beheer\testfile.dat (2048MiB)
23 | 37711872 | 9207 | 0.60 | 153.45 | 6.510 | 5.481 | C:\Beheer\testfile.dat (2048MiB)
24 | 37953536 | 9266 | 0.60 | 154.43 | 6.469 | 5.248 | C:\Beheer\testfile.dat (2048MiB)
25 | 37490688 | 9153 | 0.60 | 152.55 | 6.548 | 5.431 | C:\Beheer\testfile.dat (2048MiB)
26 | 37408768 | 9133 | 0.59 | 152.22 | 6.563 | 5.549 | C:\Beheer\testfile.dat (2048MiB)
27 | 37703680 | 9205 | 0.60 | 153.42 | 6.511 | 5.426 | C:\Beheer\testfile.dat (2048MiB)
28 | 37380096 | 9126 | 0.59 | 152.10 | 6.568 | 5.384 | C:\Beheer\testfile.dat (2048MiB)
29 | 37584896 | 9176 | 0.60 | 152.93 | 6.532 | 5.246 | C:\Beheer\testfile.dat (2048MiB)
30 | 37482496 | 9151 | 0.60 | 152.52 | 6.550 | 5.518 | C:\Beheer\testfile.dat (2048MiB)
31 | 37318656 | 9111 | 0.59 | 151.85 | 6.578 | 5.385 | C:\Beheer\testfile.dat (2048MiB)
-----------------------------------------------------------------------------------------------------
total: 1199546368 | 292858 | 19.07 | 4880.97 | 6.549 | 5.453
total:
%-ile | Read (ms) | Write (ms) | Total (ms)
----------------------------------------------
min | N/A | 1.124 | 1.124
25th | N/A | 2.903 | 2.903
50th | N/A | 4.403 | 4.403
75th | N/A | 10.784 | 10.784
90th | N/A | 12.787 | 12.787
95th | N/A | 13.543 | 13.543
99th | N/A | 15.070 | 15.070
3-nines | N/A | 31.530 | 31.530
4-nines | N/A | 173.368 | 173.368
5-nines | N/A | 198.035 | 198.035
6-nines | N/A | 199.792 | 199.792
7-nines | N/A | 199.792 | 199.792
8-nines | N/A | 199.792 | 199.792
9-nines | N/A | 199.792 | 199.792
max | N/A | 199.792 | 199.792
The 4k ui test at 32 threads / 1 client gives:
Write: 5206 IOPS, Read: 40379 IOPS
admin
2,930 Posts
Quote from admin on December 5, 2019, 2:24 pm The results seem OK for your system:
4k rand write rados level 1 thread = 707 iops
4k rand write from VM 1 thread = 675 iops
4k rand write rados level 32 threads = 5206 iops
4k rand write from VM 32 threads = 4880 iops
These correlate well.
Note that in Ceph, single client thread performance is not high; in return, the system scales when you have a lot of client concurrency (many vms, applications, transactions...).
If you have many datastores and vms, the total iops will approach the maximum seen in the ui benchmark. You can also bump the VMware limits above the default 32 queue depth for a single vm (a rough sketch follows below).
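As a rough sketch only (the device identifier is a placeholder and exact parameter names can differ between ESXi versions), the per-device outstanding IO limit and the software iSCSI LUN queue depth are commonly raised with something like:
esxcli storage core device set -d naa.xxxxxxxxxxxxxxxx -O 64
esxcli system module parameters set -m iscsi_vmk -p iscsivmk_LunQDepth=64
The module parameter change requires a host reboot to take effect.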
In general with Ceph, you can get around 1 ms latency for writes and 0.3 ms for reads. So a (single) file copy in Windows, which uses a 256k block size, will not exceed roughly 256 MB/s (one 256 KB write per ~1 ms of latency).