Can you please review this hardware list?
R3LZX
50 Posts
June 13, 2019, 10:31 pm
Qty 5
HP DL380p Gen8, 12-bay LFF, 1x PCI, 16-core 2.60 GHz E5-2650 v2
48 GB RAM
4 x 10 TB SAS drives @ 7200 RPM, 12 Gb/s capable
eventually 10 x 10 TB SAS drives @ 7200 RPM, 12 Gb/s capable
all servers have a P420i with 512 MB FBWC
10 Gb SFP x2 on each server, feeding into a Cisco Nexus switch
A few questions:
Journaling: I've estimated one drive per server for journaling; is this enough?
Should I RAID 0 the drives and use the RAID cards, or not use the RAID cards at all?
Should I have SSD drives in here for journaling?
This will be a VMware 6.5 environment.
I tried my best to follow this guide from Red Hat:
https://www.redhat.com/en/resources/resources-red-hat-ceph-storage-hardware-selection-guide-html
admin
2,930 Posts
June 14, 2019, 1:47 pm
Not specific to the hardware, but some general comments:
The Red Hat guide is very good, but note that Ceph is used for many different workloads, such as S3 cold storage, backups, and virtualization. Throughput-optimized configurations are good for backup and streaming solutions. For virtualization you should be looking at IOPS rather than throughput; in that case the guide recommends all-flash, which is also our recommendation.
If, however, you wish to use HDDs in an IOPS-demanding application (something not covered in the RH guide), we recommend using a controller with write-back cache in addition to SSD journals. The journals will boost write IOPS by about 2x, and the controller by an additional 3-5x; the more cache the better (512 MB is low).
Some controllers can enable write-back cache in JBOD or pass-through mode (Areca); others require creating a single-disk RAID-0 per drive. The intent is not to create RAID groups that combine several disks, which actually slows Ceph down, since it works faster with more OSDs.
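For the OP's P420i, a single-drive RAID-0 per disk via the Smart Storage CLI could look roughly like this; it is only a sketch, and the slot number and port:box:bay drive IDs are placeholders to take from your own ssacli (hpssacli on older Gen8 tooling) output:
ssacli ctrl slot=0 pd all show                            # list physical drives and their port:box:bay IDs
ssacli ctrl slot=0 create type=ld drives=1I:1:1 raid=0    # one RAID-0 logical drive per physical disk, repeated for each drive
ssacli ctrl slot=0 ld all show                            # confirm one logical drive per disk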
The journal-to-HDD-OSD ratio is 1:4 for SATA SSDs and 1:12-16 for NVMes. With NVMe, if the journal disk fails, all of its OSDs fail with it, so the impact of a failure is larger.
Another way to boost IOPS with HDDs is to add as many as you can: 16-20 per host.
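As a rough back-of-the-envelope check of these multipliers and ratios against the planned cluster, here is a small shell sketch; the ~75 raw write IOPS per 7.2K HDD and the 3x replication factor are assumptions for illustration, not figures from this thread:
hosts=5
hdds_per_host=10     # the eventual build-out per server
base_iops=75         # assumed raw write IOPS per 7.2K SAS HDD
replicas=3           # assumed replication factor
raw=$(( hosts * hdds_per_host * base_iops / replicas ))
echo "no journals or cache:  ~${raw} client write IOPS"
echo "with SSD journals:     ~$(( raw * 2 )) (about 2x)"
echo "plus write-back cache: ~$(( raw * 2 * 3 ))-$(( raw * 2 * 5 )) (an additional 3-5x)"
# journal devices needed per host at the suggested ratios
echo "SATA SSD journals (1:4):  $(( (hdds_per_host + 3) / 4 )) per host"
echo "NVMe journals (1:12-16):  1 per host"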
Kurti2k
16 Posts
June 19, 2019, 7:53 am
@R3LZX
You can configure the P420(i) to HBA/IT mode with the HP Smart Storage Administrator or via the CLI.
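A minimal CLI sketch, assuming the controller is in slot 0 and its firmware supports HBA mode (on older Gen8 service packs the tool may be hpssacli rather than ssacli); treat the slot number as a placeholder and check your own controller first:
ssacli ctrl slot=0 show detail          # check current mode, firmware, and cache status
ssacli ctrl slot=0 modify hbamode=on    # switch to HBA/pass-through; a reboot is typically required
Note that in HBA mode the controller's write-back cache is no longer used, so this trades the cache acceleration mentioned above for simple pass-through disks.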
R3LZX
50 Posts
June 24, 2019, 2:24 pm
Quote from Kurti2k on June 19, 2019, 7:53 am
@R3LZX
You can configure the P420(i) to HBA/IT mode with the HP Smart Storage Administrator or via the CLI.
Excellent, thank you, this is exactly what I needed. I've been fighting the RAID on this thing, and since it stores the configuration on the drives there's no easy way to get out of it while remote.