Performance: Not all SSDs are created equal
admin
2,930 Posts
September 13, 2018, 9:30 am
It is important to realize that not all SSDs give similar performance and endurance. Consumer SSDs show good performance when doing asynchronous writes, where data is cached in internal device RAM before being physically written. Although this gives good numbers, the cached data will be lost on power failure, which is probably acceptable for desktop applications. Applications such as database engines, as well as storage systems such as Ceph, need to do frequent synchronous writes to guarantee data is actually saved, and consumer SSDs show a huge drop in performance when asked to do such sync writes (or worse, some models lie and cache the data anyway). Enterprise SSDs are much faster in IOPS for small-block writes, which are typical of virtualized workloads.
You can perform raw disk tests in PetaSAN via the node console menu (the blue screen), provided the disk is not already in use. It is a destructive test, meaning it will wipe out data, but not actually destroy the disk 🙂
You can also find the following useful, as it shows performance for many SSD models:
https://www.sebastien-han.fr/blog/2014/10/10/ceph-how-to-test-if-your-ssd-is-suitable-as-a-journal-device/
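For a quick spot check of the same property that article measures, here is a minimal Python sketch of a single-threaded 4k sync-write test (a rough stand-in for fio with --sync=1; /dev/sdX is a placeholder for an unused disk, and like the console test above it overwrites data):

    import os, time

    # Minimal sketch: single-threaded 4k synchronous writes (Linux, run as root).
    # DESTRUCTIVE: overwrites the start of the device. /dev/sdX is a
    # placeholder -- point it at an UNUSED disk only.
    DEV = "/dev/sdX"
    BS = 4096        # 4k blocks, similar to a journal workload
    RUNTIME = 10.0   # seconds

    buf = os.urandom(BS)
    # O_DSYNC makes each write() block until the data is on stable media,
    # so a drive that is only fast because of its volatile RAM cache will
    # show its true sync IOPS here.
    fd = os.open(DEV, os.O_WRONLY | os.O_DSYNC)
    ios = 0
    start = time.monotonic()
    while time.monotonic() - start < RUNTIME:
        os.write(fd, buf)
        ios += 1
    os.close(fd)
    print(f"{ios / RUNTIME:.0f} sync write IOPS")

Consumer drives without power-loss protection often land in the hundreds of IOPS on a test like this, while enterprise drives with capacitor-backed caches can reach tens of thousands; fio, as used in the linked article, remains the more rigorous tool.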
Besides performance, enterprise SSDs have much better endurance, as measured by DWPD (drive writes per day).
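To put the DWPD rating in concrete terms, it converts directly into the total volume of writes a drive is warranted to absorb; a quick illustrative calculation (the capacities and ratings below are example figures, not any particular model):

    # Illustrative DWPD-to-total-writes conversion (example figures only).
    def total_writes_tb(capacity_tb: float, dwpd: float, warranty_years: float = 5.0) -> float:
        """Total terabytes the drive is rated to absorb over its warranty."""
        return capacity_tb * dwpd * 365 * warranty_years

    # A 1 TB drive at a typical consumer 0.3 DWPD vs an enterprise 3 DWPD:
    print(total_writes_tb(1.0, 0.3))  # ~548 TB
    print(total_writes_tb(1.0, 3.0))  # ~5475 TB, i.e. roughly 5.5 PB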
You get what you pay for.
Last edited on September 13, 2018, 9:41 am by admin · #1
Yipkaiwing
18 Posts
September 13, 2018, 4:25 pm
Admin,
You are right.
Consumer SSDs and enterprise SSDs are two different products. For a real production environment, we should use enterprise SSDs. Data is priceless; we should reduce the risk and maximize reliability.
Ben
protocol6v
85 Posts
October 4, 2018, 5:27 pm
Do you have any recommendations for journal SSDs? I've ripped through a ton of consumer and enterprise SSDs and have been VERY underwhelmed by the performance of just about all of them. The only resource I can really find online is the one you quoted, and the list seems to be a few years out of date. The only thing I haven't done is spend $1000+ on the really high-end PCIe drives. I've tried a bunch of NVMe drives, though.
Just curious if you've found some that work well in your dev testing?
Thanks!
admin
2,930 Posts
October 5, 2018, 9:41 am
Some of the recommended SSDs/NVMe drives for Ceph:
Samsung SM863a, PM863a
Intel P3700, S3600
I would first make sure your journals are actually the bottleneck. Other factors could be your spinners (if you have any) as well as CPU. Look at the stats charts for the % util of your disks (journals + spinners) and of your CPUs. The cluster benchmark report will show you this; ideally run it on a staging system, since it stresses the cluster to its limit.
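If you want to sample disk % util by hand rather than from the charts, here is a minimal Python sketch that reads it from /proc/diskstats the same way iostat -x derives its %util column (plain Linux, nothing PetaSAN-specific; field 13 is the milliseconds the device spent with I/O in flight):

    import time

    # Minimal sketch: approximate per-device %util over a sample window,
    # from field 13 of /proc/diskstats (ms spent with I/O in flight).
    def io_ticks():
        ticks = {}
        with open("/proc/diskstats") as f:
            for line in f:
                parts = line.split()
                ticks[parts[2]] = int(parts[12])  # device name -> io_ticks (ms)
        return ticks

    INTERVAL = 5.0  # seconds
    before = io_ticks()
    time.sleep(INTERVAL)
    after = io_ticks()
    for dev, ms in sorted(after.items()):
        if dev in before:  # note: output includes partitions and loop devices
            util = 100.0 * (ms - before[dev]) / (INTERVAL * 1000)
            print(f"{dev:12s} {util:5.1f}% util")

A journal device pinned near 100% while the spinners and CPUs sit idle is the signature of a journal bottleneck; if the spinners or CPUs saturate first, a faster journal SSD won't help.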
Last edited on October 5, 2018, 10:00 am by admin · #4