2 osds per device
stbportit
2 Posts
August 11, 2022, 1:54 pm
Hello,
we are trying out PetaSAN in an all-NVMe cluster. Many Ceph performance guides advise creating two or more OSDs per NVMe device to make full use of the drives.
Is this possible with PetaSAN?
admin
2,930 Posts
August 11, 2022, 9:37 pm
These are probably old guides; the recent recommendation is to use a single OSD per disk. Even when such recommendations were given, they made sense when you had one or two NVMe drives per host. If you have, say, four NVMe drives or more, your CPU will be maxed out at 100% by the OSD processes, and adding more OSD processes will not help. To my knowledge it is also rarely used in production.
stbportit
2 Posts
August 16, 2022, 6:45 am
Hello, thank you for your reply.
This is what the official Ceph documentation says on the topic: https://docs.ceph.com/en/latest/start/hardware-recommendations/#data-storage
"Running multiple OSDs on a single SAS / SATA drive is NOT a good idea. NVMe drives, however, can achieve improved performance by being split into two or more OSDs."
We have 40 logical cores per node (2x Intel Xeon Gold 6138), and each node has four NVMe drives. We would at least like to try it out and compare the performance.
So, is it possible with PetaSAN?
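For reference, in upstream Ceph (outside PetaSAN's own tooling), splitting a device into multiple OSDs is typically done with ceph-volume's batch mode. A minimal sketch, assuming `/dev/nvme0n1` is the target device (the device path is a placeholder):

```shell
# Sketch only: upstream Ceph's ceph-volume can provision multiple OSDs
# per device via LVM. This wipes the device; run only on a test cluster.

# Preview what would be created without applying anything:
ceph-volume lvm batch --report --osds-per-device 2 /dev/nvme0n1

# Actually create two OSDs on the one NVMe device:
ceph-volume lvm batch --osds-per-device 2 /dev/nvme0n1
```

Whether OSDs created this way outside the PetaSAN UI remain manageable from PetaSAN is not confirmed in this thread, so treat this as an experiment, not a supported path.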
admin
2,930 Posts
August 16, 2022, 8:19 am
No, we do not support multiple OSDs per device.
I cannot say where this was discussed recently, but maybe on the Ceph mailing lists; I am not sure.
40 cores are good, but OSD processes are multi-threaded. I am sure that if you run an IOPS test with four or more NVMe drives per host, your IOPS will cap due to your cores being saturated at 100%, while your disk % util and network util will be fine.
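One way to check the claim above is to run a random-read IOPS test while watching CPU and disk utilization side by side. A minimal sketch using fio (the device path, queue depths, and runtime are example values, and writing to a raw device is destructive):

```shell
# Hypothetical 4k random-read IOPS test; /dev/nvme0n1 is a placeholder.
fio --name=randread --filename=/dev/nvme0n1 --direct=1 \
    --ioengine=libaio --rw=randread --bs=4k \
    --iodepth=32 --numjobs=4 --runtime=60 --time_based --group_reporting

# In parallel, watch where the bottleneck is:
#   top          -> CPU % of the ceph-osd processes
#   iostat -x 1  -> per-disk %util
```

If IOPS plateau while `iostat` shows the NVMe drives well below 100% util and `top` shows the OSD processes pinning the cores, the CPU is the bottleneck, as described above.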
rhamon
30 Posts
November 7, 2022, 5:55 am
For some NVMe devices you can use namespaces to split one device into multiple ones and configure multiple OSDs that way.
https://nvmexpress.org/resources/nvm-express-technology-features/nvme-namespaces/
Last edited on November 7, 2022, 5:56 am by rhamon · #5
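A sketch of the namespace approach using nvme-cli (the sizes and controller ID are placeholders; not every drive supports multiple namespaces, so check `nvme id-ctrl` first, and note that deleting a namespace destroys its data):

```shell
# Hypothetical example: split /dev/nvme0 into two namespaces with nvme-cli.
# --nsze/--ncap are in logical blocks; the values below are placeholders.
nvme delete-ns /dev/nvme0 -n 1                 # remove the existing namespace (destroys data)
nvme create-ns /dev/nvme0 --nsze=0x1000000 --ncap=0x1000000 --flbas=0
nvme create-ns /dev/nvme0 --nsze=0x1000000 --ncap=0x1000000 --flbas=0
nvme attach-ns /dev/nvme0 -n 1 -c 0            # -c is the controller ID from 'nvme id-ctrl'
nvme attach-ns /dev/nvme0 -n 2 -c 0
nvme list                                      # new devices appear, e.g. /dev/nvme0n1 and /dev/nvme0n2
```

Each namespace then shows up as a separate block device, so an OSD can be created on each one without the software layer needing any multi-OSD-per-device support.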