building full SSD cluster
spiralmatrix
8 Posts
October 29, 2019, 10:24 am
Hi, we are looking at solutions for a production data centre refresh.
I would like to better understand whether the following full SSD cluster configuration is correct and properly sized.
The solution will start with 4x Dell R740XD servers, each with:
128 GB RAM; dual 8-core 3.2 GHz CPUs; dual HBA adapters split 4:20; 4x 800 GB write-intensive SSDs (journal); 20x 1 TB read/write SSDs (OSD)
2x 25 Gbps ports (back-end storage); 8x 10 Gbps ports (front-end storage / iSCSI / management)
Separate switches for back-end storage / front-end / iSCSI
It will run as the storage cluster for a VMware environment with a number of Windows Server 2019 VMs running MS SQL Server 2016 SP2.
The backup technology will be Veeam.
My questions are:
Does the above per-node spec seem reasonable as a starting position for running 4 nodes with 3 replicas? (Is 3 replicas too much when using all SSD?)
As all the storage is SSD, am I correct in adding the 4x journal SSDs, given that the OSD disks are also SSD rather than, for example, 15K, 10K, or SATA spinners?
Will 128 GB RAM per node suffice?
I know I have more 10 Gbps interfaces than I need; we allowed this number in case PetaSAN is not a good fit and we have to go with VMware vSAN, for example, and load a hypervisor directly on the boxes (hopefully not the case).
Any guidance appreciated.
admin
2,930 Posts
October 29, 2019, 7:57 pm
No need for the journals; just use all-SSD OSDs without a journal.
3x replication is the best practice.
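(As a quick worked example with your numbers: 4 nodes x 20 x 1 TB = 80 TB raw, which at 3x replication comes to roughly 26 TB usable, before leaving headroom for recovery and rebalancing.)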
128 GB RAM is good.
Do try to get good enterprise SSDs, some that work well:
Samsung: 863 883 963 983 1633 1725
Intel: S3510 P3600 S3610 P3700 S4500 S4600
Otherwise, make sure they have good sync write speed (check via the disk test in the PetaSAN blue console menu), plus good power-loss protection and durability (DWPD).
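For reference, a quick way to sanity-check sync write speed outside the console is fio; a minimal sketch, assuming fio is installed and /dev/sdX is a disk whose contents you can safely destroy:

fio --name=synctest --filename=/dev/sdX --rw=write --bs=4k --iodepth=1 --numjobs=1 --direct=1 --sync=1 --runtime=30 --time_based

Good enterprise SSDs with power-loss protection typically sustain thousands of sync write IOPS in this test, while consumer drives often drop to a few hundred.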
I also recommend you create your VMDKs as thick provisioned eager zeroed, else initial performance may be slow. In addition, for all-SSD it may be better to increase the ESX queue depth per LUN for best performance. We are updating our VMware docs to reflect these recommendations soon.
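For reference, a sketch of the general approach from the ESXi shell (the datastore path, device ID, and values here are illustrative, not official recommendations):

vmkfstools -c 100G -d eagerzeroedthick /vmfs/volumes/[datastore]/[vm]/[disk].vmdk
esxcli system module parameters set -m iscsi_vmk -p iscsivmk_LunQDepth=128
esxcli storage core device set -d naa.[device-id] -O 128

The first command creates an eager zeroed thick VMDK; the second raises the software iSCSI LUN queue depth (a host reboot is required for it to take effect); the third raises the per-device outstanding I/O limit to match.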
spiralmatrix
8 Posts
October 30, 2019, 11:44 am
Hi, thank you for the quick reply confirming this, along with the tips for VMware. I look forward to reading the updated documentation.
Regarding VMware ESXi, I noted mention that 6.7 U3 may have some issues, so we are planning to run the Enterprise edition of 6.7 U2.
All SSDs are enterprise grade with the following spec:
TOSHIBA PX05SVB096Y Revision AS0E
ENTERPRISE GRADE
Capacity: 960GB
Speed: Solid State Memory
Interface Types: SAS Serial Attached SCSI
Form Factor: 2.5 in x 15 mm SFF server drive
Sector Size: 512 / 512e
Sustained Throughput: 1900/850 MB/s (read/write)
Electrical Interface: SAS-3 Serial Attached SCSI v3 - 12.0Gbps
SSD Type: MLC
SSD Endurance DWPD: Enterprise Mixed Use
Once again, thank you for your assistance and a great product.