problem using osd
tuocmd
54 Posts
January 15, 2019, 4:46 am
Is there any difference between an OSD, an SSD, and an HDD?
Do you have any advice for using OSDs? Can we create multiple pools, for example:
pool1: uses SSD OSDs
pool2: uses HDD OSDs
Is this possible or not? Thank you.
admin
2,930 Posts
January 15, 2019, 12:07 pm
An OSD is a service that presents a disk (HDD or SSD) for storage. Typically there is 1 OSD per disk.
You can, if you want, have several pools served by a single OSD. For example, one pool with replica 3 and another with replica 2, or a pool created for a specific workload or customer, etc. Pools relate to OSDs via CRUSH placement rules and the CRUSH hierarchy.
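For reference, a minimal sketch of this with the standard Ceph CLI; the pool names and PG counts below are illustrative, not taken from the post above:

# Two replicated pools sharing the same OSDs via the default CRUSH rule
ceph osd pool create pool-rep3 64 64 replicated
ceph osd pool set pool-rep3 size 3    # replica 3
ceph osd pool create pool-rep2 64 64 replicated
ceph osd pool set pool-rep2 size 2    # replica 2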
tuocmd
54 Posts
January 15, 2019, 1:52 pm
Dear Admin,
I want to compare the performance of HDD disks with SSD disks in storage.
Can you give me some recommendations for this?
Last edited on January 15, 2019, 1:53 pm by tuocmd · #3
admin
2,930 Posts
January 15, 2019, 3:24 pm
You can create 2 pools, hdd-pool and ssd-pool, and associate them with different CRUSH rules: "by-host-hdd" and "by-host-ssd". Each pool will then use the different disks separately, and you can benchmark them via the benchmark page.
For expected performance, the HDDs will do well in a throughput test (good for backups) but not in IOPS (required for virtualization apps). To get decent IOPS from HDDs, you need a controller with write-back cache plus more disks per node.
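A minimal sketch of the above with the standard Ceph CLI, assuming the OSDs already report the hdd/ssd device classes; the CRUSH root and failure domain (default/host) and the PG counts are assumptions:

# CRUSH rules that select OSDs by device class
ceph osd crush rule create-replicated by-host-hdd default host hdd
ceph osd crush rule create-replicated by-host-ssd default host ssd
# Pools bound to those rules
ceph osd pool create hdd-pool 64 64 replicated by-host-hdd
ceph osd pool create ssd-pool 64 64 replicated by-host-ssd
# Command-line comparison, as an alternative to the benchmark page
rados bench -p hdd-pool 30 write --no-cleanup
rados bench -p ssd-pool 30 write --no-cleanup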
tuocmd
54 Posts
January 15, 2019, 4:04 pm
So it means that I need to add journals as a cache?
Can you share how the journals should scale?
I know that Ceph version 12 does not need a journal to act as a cache!
admin
2,930 Posts
January 15, 2019, 8:10 pm
No, I am referring to using a disk controller that has write-back cache support.
In addition, you should use an SSD journal device to store the WAL/DB; 1 SSD journal can support up to 4 HDDs.
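A minimal sketch of that pairing with plain ceph-volume; the device names and the 4-way split of one SSD into DB partitions are illustrative assumptions:

# /dev/sdb..sde are HDD data disks; /dev/nvme0n1p1..p4 are partitions on one SSD holding the WAL/DB
ceph-volume lvm create --bluestore --data /dev/sdb --block.db /dev/nvme0n1p1
ceph-volume lvm create --bluestore --data /dev/sdc --block.db /dev/nvme0n1p2
ceph-volume lvm create --bluestore --data /dev/sdd --block.db /dev/nvme0n1p3
ceph-volume lvm create --bluestore --data /dev/sde --block.db /dev/nvme0n1p4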