HPE Blade Support
BILDr
16 Posts
February 22, 2020, 7:04 pm
Hello everyone,
I am looking to build a blade system based on an HP c7000 chassis and BL460c G7 blades. Ideally, I would like every active (non-storage-serving?) blade to be virtualised with ESXi and to boot from an iSCSI SAN.
The subsystem is fully 10Gb, as is the connectivity between centres. Each blade currently has 2 x 300GB internal disks, and I have 3 x D2200sb enclosures, each with 12 x 300GB disks.
My thinking is that 3 x blade/D2200 combinations would give me a good start on a scalable SAN, but I have a few questions...
Has anyone done this on this hardware? Does the PetaSAN software have to run on a dedicated bare-metal blade, or can it run within VMware? Does PetaSAN have the drivers required for blade systems? Lastly, is PetaSAN robust and reliable enough for a production environment serving many VMs via iSCSI? I would take the commercial support option, obviously!
Thanks for your help and advice!
Tony
Last edited on February 22, 2020, 7:05 pm by BILDr · #1
admin
2,930 Posts
February 23, 2020, 8:34 am
Regarding your last question: PetaSAN is rock solid. We have many users in production, some close to 3 years running.
BILDr
16 Posts
February 23, 2020, 9:22 am
Thanks.
So does it have the drivers to run on a Gen7 BL460c?
admin
2,930 Posts
February 23, 2020, 11:42 am
It is not something we have tested. Can you check whether it requires any special or custom kernel drivers? If not, it should be OK.
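One quick way to do that check is to boot the blade into any live Linux environment and list each PCI device alongside the kernel driver bound to it. A minimal sketch in Python, assuming only the standard Linux sysfs layout (nothing PetaSAN-specific):

#!/usr/bin/env python3
# List PCI devices and the kernel driver bound to each, using the
# standard Linux sysfs layout. Devices with no bound driver are the
# ones worth investigating before committing to an install.
import os

PCI_ROOT = "/sys/bus/pci/devices"

for dev in sorted(os.listdir(PCI_ROOT)):
    dev_path = os.path.join(PCI_ROOT, dev)
    # Vendor and device IDs, e.g. 0x14e4 (Broadcom)
    with open(os.path.join(dev_path, "vendor")) as f:
        vendor = f.read().strip()
    with open(os.path.join(dev_path, "device")) as f:
        device = f.read().strip()
    driver_link = os.path.join(dev_path, "driver")
    driver = (os.path.basename(os.readlink(driver_link))
              if os.path.islink(driver_link) else "(no driver bound)")
    print(f"{dev}  {vendor}:{device}  ->  {driver}")

NICs or HBAs that show "(no driver bound)" are the ones that may need special or custom drivers.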
BILDr
16 Posts
February 24, 2020, 9:31 am
Given the cheap availability of the DL380 Gen8, I decided to go with this instead. I picked up three with 12 x LFF bays, 1GB BBWC, 16GB RAM and the 2 x 10Gb + 4 x 1Gb Ethernet options for less than £100 each.
I think this should now be a great solution at a bargain price. I can get 24TB per DL380 using 2TB disks, or 96TB using 8TB disks, with the 24TB option costing £450 per node in total. That's an insane price point for the capacity and functionality compared with other solutions.
Am I right in thinking I can offload the front-end work to three of the blades, leave the disk boxes as pure storage nodes, and scale out simply by adding more?
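One caveat on those capacity figures: 24TB/96TB is raw capacity, and PetaSAN is built on Ceph, which replicates data across nodes, so usable space is considerably lower. A quick worked example in Python, assuming the common 3-replica layout and some fill headroom (both are assumptions, not figures from this thread):

# Rough usable-capacity estimate for the proposed 3-node cluster.
# REPLICAS and FILL_RATIO are assumptions: 3x replication is a
# common Ceph default, and ~85% fill leaves recovery headroom.
NODES = 3
BAYS_PER_NODE = 12
REPLICAS = 3
FILL_RATIO = 0.85

for disk_tb in (2, 8):
    raw_tb = NODES * BAYS_PER_NODE * disk_tb
    usable_tb = raw_tb / REPLICAS * FILL_RATIO
    print(f"{disk_tb}TB disks: {raw_tb}TB raw -> ~{usable_tb:.0f}TB usable")

Under those assumptions, the 2TB-disk option gives roughly 20TB usable across all three nodes, not 24TB per node.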
therm
121 Posts
February 24, 2020, 10:23 am
We have been using PetaSAN for years now with DL380 Gen8 servers. It serves iSCSI block storage to VMware running on HP Gen9 blades. I would not recommend mixing those systems (within PetaSAN); keep it simple. Our PetaSAN cluster consists of 10 storage servers with these details:
HP ProLiant DL380p Gen8 (refurbished):
2x Intel Xeon E5-2690, 2.9GHz, 8-core
12x 16GB PC3-12800R memory
Smart Array P420/0 RAID controller (RAID for the system disks)
HP H221 HBA (SAS HBA)
1x HP 530 SFP+ FLR adapter (backend network)
1x HP 530 SFP+ PCIe adapter (was iSCSI, now separated)
1x HP 331T 4x 1Gbit adapter (SSH login)
Additional riser card
2x HP StorageWorks D2600 (refurbished), with 24x 4TB HGST disks
3x Intel P4600 2TB HHHL PCIe cards (2 for journals; their remaining capacity plus the third card form a fast pool)
We are using filestore and run our iSCSI servers on two additional DL380p Gen8s (because we had problems with cluster crashes early on).
The RAM in your setup in particular will lead to slow performance, as very little will actually fit in cache.
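To put that RAM point in numbers: a common Ceph rule of thumb is roughly 1GB of RAM per TB of OSD capacity, plus some for the OS (the exact figure varies by release and store type, so treat the constants below as assumptions). A quick check in Python:

# Rough RAM sanity check for an OSD node, using an assumed
# rule-of-thumb of ~1GB RAM per TB of OSD capacity plus OS overhead.
def ram_check(node_ram_gb: float, osds: int, osd_tb: float,
              gb_per_tb: float = 1.0, os_overhead_gb: float = 4.0) -> None:
    needed = osds * osd_tb * gb_per_tb + os_overhead_gb
    verdict = "OK" if node_ram_gb >= needed else "undersized"
    print(f"{node_ram_gb:.0f}GB fitted vs ~{needed:.0f}GB suggested -> {verdict}")

# BILDr's proposed node: 16GB RAM, 12 x 2TB OSDs
ram_check(16, osds=12, osd_tb=2.0)
# therm's node: 192GB RAM, 24 x 4TB OSDs in the D2600 shelves
ram_check(192, osds=24, osd_tb=4.0)

16GB against 12 x 2TB OSDs falls well short of that guideline, which is the cache starvation therm describes; the 192GB nodes clear it comfortably.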
ubx_cloud_steve
7 Posts
March 4, 2020, 7:28 am
Therm,
Can you post benchmarks of your configuration? Also, are you using filestore?
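For producing comparable numbers, Ceph's built-in rados bench is one option; a minimal wrapper sketch, where the pool name is a placeholder and the rados CLI is assumed to be installed on the node:

# Drive Ceph's built-in "rados bench" against a pool and print the
# results. POOL is a placeholder; substitute an existing pool name.
import subprocess

POOL = "testpool"   # placeholder pool name
SECONDS = "10"

result = subprocess.run(
    ["rados", "bench", "-p", POOL, SECONDS, "write", "--no-cleanup"],
    capture_output=True, text=True, check=True,
)
print(result.stdout)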