Thoughts on this spec:
trydon
2 Posts
June 6, 2017, 12:59 pm
Hi guys,
I would like some input on this spec to create a cluster:
3x HP DL380 G5 - Dual CPU (Xeon X5450) / 16GB memory
Each of the machines will come with 8x 146GB SAS drives, but these won't cut it for my needs. I am looking at replacing them with Western Digital Red 1TB drives (WD10JFCX).
It will be for an ESXi environment, so any thoughts, recommendations and advice are welcome before I pull the trigger.
TIA
admin
2,930 Posts
June 6, 2017, 6:32 pm
Hi,
Generally it would be better if you can bump the RAM to 32GB, since each node is used both for Ceph storage and as an iSCSI gateway.
Configure your RAID as RAID0; if you have battery-backed write-back cache, use it.
Good luck.
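To put rough numbers on the RAM suggestion (the overhead figures below are ballpark assumptions, not PetaSAN requirements): with 8 data drives per node you get 8 OSDs, and at roughly 2GB of RAM per OSD the OSD daemons alone can consume the whole 16GB before the OS, the Ceph monitor and the iSCSI gateway get anything. A minimal sketch of that budget:

# Rough per-node RAM budget for the proposed spec.
# The overhead figures are assumptions, not PetaSAN requirements.
osd_count = 8             # 8 drives, one OSD per drive
ram_per_osd_gb = 2        # rule-of-thumb figure, also cited later in this thread
os_and_monitor_gb = 3     # assumed overhead for the OS plus Ceph monitor
iscsi_gateway_gb = 4      # assumed headroom for the iSCSI gateway role

osd_total_gb = osd_count * ram_per_osd_gb
node_total_gb = osd_total_gb + os_and_monitor_gb + iscsi_gateway_gb
print(f"OSDs alone: {osd_total_gb} GB")                 # 16 GB -- the entire current RAM
print(f"Estimated per-node total: {node_total_gb} GB")  # ~23 GB -- hence the 32GB advice

With 16GB the OSD processes alone can use everything, which is why 32GB is the safer target.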
trydon
2 Posts
June 9, 2017, 7:23 am
Thanks for the reply.
What would the performance implications be if I only run 16GB to start?
Thanks again.
admin
2,930 Posts
June 9, 2017, 9:35 am
It varies a lot based on your hardware and your workload. I would suggest testing 2 configurations: one with single-disk RAID0 (or JBOD if your controller supports it, though I doubt it does), the other with 2-disk RAID0, which essentially halves the number of OSDs and so requires fewer resources. Ceph gives better performance with more OSDs, but you should allocate 2GB of RAM per OSD.
Perform your tests by simulating your maximum workload, then force a shutdown of one storage node during the test after writing at least 100GB of data; this puts the highest stress on the system. If the first configuration handles this well, use it; otherwise choose the second.
PetaSAN bundles the performance tools atop/collectl/sar, which you can use to monitor cpu/ram/disk load. If your disk %busy is loaded more heavily than cpu/ram, configuration 1 is better; otherwise 2 is better. Good luck.
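To make that comparison concrete (a sketch using only the figures above, not a PetaSAN tool): configuration 1 runs each of the 8 drives as its own single-disk RAID0 volume, giving 8 OSDs, while configuration 2 pairs drives into 2-disk RAID0 volumes, giving 4 OSDs and roughly half the OSD RAM footprint. The small decision helper at the end just restates the %busy-versus-cpu/ram rule, fed with averages you would read off atop/collectl/sar yourself.

# Compare the two test configurations on one node with 8 data drives.
drives = 8
ram_per_osd_gb = 2   # rule-of-thumb figure from above

configs = {
    "config 1 (single-disk RAID0)": drives // 1,   # 8 OSDs
    "config 2 (2-disk RAID0)":      drives // 2,   # 4 OSDs
}
for name, osds in configs.items():
    print(f"{name}: {osds} OSDs, ~{osds * ram_per_osd_gb} GB RAM for OSD daemons")

def preferred_config(disk_busy_pct, cpu_pct, ram_pct):
    """If the disks are the bottleneck, more OSDs (config 1) helps;
    if cpu/ram are the bottleneck, fewer OSDs (config 2) is safer."""
    return "config 1" if disk_busy_pct > max(cpu_pct, ram_pct) else "config 2"

print(preferred_config(disk_busy_pct=90, cpu_pct=55, ram_pct=70))   # example readings -> config 1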
Last edited on June 9, 2017, 9:52 am · #4