Best Practice
RST
17 Posts
June 19, 2020, 1:32 pm
Hi all,
I'm planning to set up a new cluster. I have 14 nodes with 12x 600GB 15k drives each. Each node provides 2x 1Gig LAN ports and 2x 10Gig Mellanox ports.
Now my two questions:
- What is the best configuration for the LAN side?
Management and Backend on the 1G ports and the two iSCSI on the 10Gig?
Or teaming the two 1Gig for Mgmt and Backend / teaming the two 10Gig for iSCSI 1 & 2?
Trunking Mgmt & iSCSI1 on the 1st 10Gig / trunking Backend & iSCSI2 on the 2nd 10Gig?
- Where should I install the OS?
On the first disk? Do I lose the whole disk, or can I use the rest?
On a separate SSD?
On an SD card?
On a USB stick?
Thank you for your inputs.
Kind regards,
Reto
admin
2,930 Posts
June 19, 2020, 2:49 pm
One thing to note is that the backend network carries the combined traffic of the iSCSI 1 and 2 subnets for reads, and 3x their sum for writes. So the point is that the backend needs bandwidth. It is also critical, so it is better for it to be bonded.
In your case: create 1 bond from your 2 10G NICs and place all networks on this bond. This is done when you deploy the node; during the ISO installer you only define the management interface, and at this stage you just choose 1 of the 10G ports for management. Later you can change it to the bond when deploying.
The other option is to leave management on the 1G interface, as it does not carry a lot of traffic by itself. Note that the management interface also carries the default gateway route.
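To make the bandwidth point concrete, here is a minimal Python sketch of the read/write rule stated above; the per-subnet traffic figures and the read/write mix are hypothetical examples, not measurements:

```python
# Back-of-the-envelope check of the backend bandwidth rule above:
# reads  -> backend carries 1x the combined iSCSI traffic
# writes -> backend carries 3x the combined iSCSI traffic

def backend_gbps(iscsi1_gbps: float, iscsi2_gbps: float, write_fraction: float) -> float:
    """Estimate backend load for a given read/write mix on the two iSCSI subnets."""
    total = iscsi1_gbps + iscsi2_gbps
    reads = total * (1 - write_fraction)      # reads cross the backend once
    writes = total * write_fraction * 3       # writes are amplified ~3x
    return reads + writes

# Example: 4 Gbps on each iSCSI subnet with 30% writes
load = backend_gbps(4.0, 4.0, 0.30)
print(f"Estimated backend load: {load:.1f} Gbps")  # 12.8 Gbps -> a single 10G link is tight
```

Even this modest example already exceeds one 10G link, which is why bonding the two 10G NICs is the suggestion here.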
For the OS we need a real disk, not a USB stick or SD card. 256G is enough. An SSD is better, as during cluster recovery the monitor database can do a lot of seeks. Generally it is highly recommended to add SSDs to use as journal disks for the HDDs (ratio 1:4).
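Applying that 1:4 ratio to the 12-HDD nodes described in this thread is simple arithmetic (whether the chassis has spare bays for the extra SSDs is a separate question):

```python
# Sizing sketch for the 1:4 journal recommendation above,
# applied to the 12-HDD nodes from this thread.
import math

hdds_per_node = 12
hdds_per_journal_ssd = 4  # the 1:4 ratio suggested above

ssds_needed = math.ceil(hdds_per_node / hdds_per_journal_ssd)
print(f"Journal SSDs per node: {ssds_needed}")  # 3
```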
RST
17 Posts
July 27, 2020, 8:15 pm
Thank you.
Next question 🙂
I've 12 disks per node: each 600G, 15k, SAS
Per node: 8GB RAM and 1x 4/4c CPU, 2.3GHz
What's the best way to present them?
- All in one RAID5?
- Two RAID5?
- HBA?
- 11 in a RAID5 and 1 as a journal disk?
- ...
Thank you
admin
2,930 Posts
July 27, 2020, 9:25 pm
No, RAID is not recommended. Use an HBA and give the system as many OSDs as possible to manage.
You could use hardware RAID-1 for the OS disk. In some cases with magnetic HDDs, if you have a controller that supports write-back cache (with battery backing), it can give better latency to configure the HDDs as single-volume RAID-0 arrays to make use of the cache. But the recommendation is to always run power-outage tests and verify that the battery-backed controller does guarantee data safety.
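As a rough illustration of what passing all disks through the HBA as individual OSDs yields, here is a back-of-the-envelope capacity estimate assuming 3-way replication (consistent with the 3x write traffic mentioned earlier in the thread); the 0.85 fill ceiling is an assumed safety margin, not a product figure:

```python
# Rough usable-capacity estimate: all disks exposed as individual OSDs.
# Node and disk counts are from this thread; replication factor and
# fill ceiling are assumptions for illustration.

nodes = 14
osds_per_node = 12
disk_tb = 0.6           # 600 GB drives
replicas = 3            # assumed 3-way replication
fill_ceiling = 0.85     # assumed headroom so a node failure can rebalance

raw_tb = nodes * osds_per_node * disk_tb
usable_tb = raw_tb / replicas * fill_ceiling
print(f"Raw: {raw_tb:.1f} TB, usable: {usable_tb:.1f} TB")  # Raw: 100.8 TB, usable: 28.6 TB
```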