IOPS PETASAN
tuocmd
54 Posts
October 14, 2019, 12:55 pm
Dear admin,
I currently have a PetaSAN system with a capacity of 10TB.
However, my IOPS tests peak at only 2k.
Each server has 5x 2TB HDDs and one 500GB journal drive.
We use one 10Gb switch.
I think this IOPS figure is low. Can you help me?
admin
2,930 Posts
October 14, 2019, 1:38 pm
I assume you are referring to write speed: not high, but not unrealistic for this type of configuration. These are random iops; assuming you are using 3x replication, you are writing 2k x 3 = 6k raw device iops. Across 15 total HDDs, each is doing around 400 iops, which is ok.
You can look at the %util graph for each disk; it will be above 90% busy.
These are pure random iops. In real cases there will be a lot of sequential io as well, which HDDs handle better.
For iops-intensive loads, you should be considering SSD pools. Otherwise your options are: add more HDDs per node (for example 16-24), use an SSD journal at a ratio of 1:4, and make sure you use a good SSD model for the journal (search for what works best with Ceph); a controller with write-back cache also helps with HDDs. If you are still testing, you can build your cluster with the Filestore engine; it is a bit better with iops on HDDs.
In the next version we will include a write cache based on the relatively new dm-writecache device.
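To make the arithmetic in the reply above concrete, here is a minimal sketch of the same back-of-envelope math (the 2k client iops, 3x replication, and 15-HDD figures are the ones quoted in this thread; the 3-node breakdown is inferred, not stated):

```python
# Back-of-envelope: how client write iops translate to raw device iops
# under Ceph replication (numbers taken from the setup in this thread).
client_write_iops = 2_000   # measured client-side random write iops
replication = 3             # 3x replicated pool (assumed in the reply)
total_hdds = 15             # e.g. 3 nodes x 5 HDDs, as implied above

raw_iops = client_write_iops * replication   # 2k x 3 = 6k raw device iops
iops_per_hdd = raw_iops / total_hdds         # ~400 iops per HDD
print(f"raw: {raw_iops}, per HDD: {iops_per_hdd:.0f}")
```

The same formula generalizes: raw device iops = client write iops x replication factor, spread across the OSD HDDs.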
tuocmd
54 Posts
October 14, 2019, 2:49 pm
I want to build another system: 10K IOPS, 120TB of storage, and 2x replication.
Can you advise me on how many hard drives to use per node?
My plan: 3 monitor nodes (no need to discuss those), and 4 OSD nodes, each with 10x 8TB HDDs and 2x 1TB SSD journals.
We will use two 10Gb switches, for the backend and iSCSI networks.
admin
2,930 Posts
October 14, 2019, 4:39 pm
With HDDs, the more you add per node, the higher the iops you get, in an almost linear fashion, since the HDDs are the bottleneck. Try to get a decent SSD model for the journal; look at models recommended for Ceph. The best ratio is 1 SSD per 4 HDDs. To start, you can place 4 HDDs + 1 SSD in each node to measure their speed, then add more as needed.
One thing to double check: look at your current %util per disk and make sure your SSD is not saturated as a journal; otherwise you are probably not using a good model.
Last edited on October 14, 2019, 4:41 pm by admin · #4
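As a rough illustration of the "almost linear" scaling described above, a sizing sketch for the 10K-iops, 2x-replication target from the previous post; the ~400 write iops per journaled HDD is an assumption carried over from the existing cluster, not a measured guarantee:

```python
# Rough sizing: HDDs needed to reach a 10K client write-iops target
# at 2x replication, assuming ~400 iops per SSD-journaled HDD
# (the figure derived earlier in this thread; your drives may differ).
target_client_iops = 10_000
replication = 2
iops_per_hdd = 400          # assumption from the current cluster
osd_nodes = 4

raw_iops_needed = target_client_iops * replication   # 20,000 raw iops
hdds_needed = raw_iops_needed / iops_per_hdd         # 50 HDDs cluster-wide
print(f"~{hdds_needed:.0f} HDDs total, ~{hdds_needed / osd_nodes:.1f} per node")
```

By this estimate the proposed 40 HDDs would deliver roughly 8k client iops for pure random writes, slightly short of the target, which is consistent with the advice to start with 4 HDDs + 1 SSD per node, measure, and add drives as needed.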
tuocmd
54 Posts
October 14, 2019, 5:05 pm
As I understand it, with 10 drives per node and the 1:4 ratio, that means 2 SSDs + 8x 8TB HDDs per node, and there is no need to use journals with Ceph versions 12 and above.
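A quick usable-capacity check of that layout (a sketch assuming the 4 OSD nodes and 2x replication proposed earlier in the thread):

```python
# Usable-capacity check for the proposed layout: 4 OSD nodes,
# 8x 8TB HDDs each (2 bays reserved for SSD journals), 2x replication.
nodes = 4
hdds_per_node = 8
hdd_tb = 8
replication = 2

raw_tb = nodes * hdds_per_node * hdd_tb   # 256 TB raw
usable_tb = raw_tb / replication          # 128 TB usable
print(f"raw: {raw_tb} TB, usable: {usable_tb} TB")  # clears the 120 TB target
```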
admin
2,930 Posts
October 14, 2019, 10:28 pm
Yes, the ratio is correct.
tuocmd
54 Posts
October 15, 2019, 1:11 am
- With a ratio of 1 SSD to 4 HDDs, the hard drive types are different; does this have any effect?
- PetaSAN has a replication mode between two systems, that is, it transfers all the data in system A to system B.
Can you tell me the model and the connection method between the two systems?
admin
2,930 Posts
October 15, 2019, 11:23 am
Having different drive types/sizes will function ok, but performance-wise it is better to use the same types and sizes.
For replication, there is a guide you can download.