First Petasan Cluster - planning stage
petasanrd911
19 Posts
April 20, 2021, 9:58 pm
I came across PetaSAN after doing some research on Ceph and seeing the videos on the 45Drives YouTube channel.
I am working on designing a 5-node cluster for my home lab to store content, and possibly using the XCP-ng hypervisor with iSCSI mounts for lab testing.
4 x 1U Chenbro NR12000 12-drive servers with dual 10 Gb SFP+ ports, 16 GB of RAM, quad-core E3-1230v2 3.3 GHz
1 x 1U Chenbro NR12000 12-drive server with dual 10 Gb SFP+ ports, 16 GB of RAM, quad-core E3-1220 3.1 GHz
Initially each server will have 2 x 6 TB hard drives, with plans to add an additional 3 x 6 TB drives once I migrate data. I also have 20 x 4 TB drives to add once I migrate data from my current storage.
Total storage per server will be 5 x 6 TB = 30 TB; across 5 servers that is 5 x 30 TB = 150 TB, and adding in the 20 x 4 TB drives would give me 230 TB of total raw storage.
I plan on using a 2-replica setup for redundancy.
This will mostly be for Plex server storage, VMs, and other files.
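The capacity arithmetic above can be sketched as a quick sanity check, including what a 2-replica pool leaves as usable space (a rough calculation, not PetaSAN output):

```python
# Raw capacity for the layout described above (all figures in TB).
drives_6tb_per_node = 5
nodes = 5
raw_6tb = nodes * drives_6tb_per_node * 6   # 150 TB across the 5 servers
raw_4tb = 20 * 4                            # 80 TB from the extra 4 TB drives
raw_total = raw_6tb + raw_4tb               # 230 TB raw, matching the plan

# With a 2-replica pool every byte is stored twice,
# so usable capacity is roughly half of raw.
replicas = 2
usable = raw_total / replicas               # ~115 TB usable
```

Worth noting: replica 2 halves usable capacity, and many Ceph operators consider replica 3 (or erasure coding) the safer choice for data you care about, since replica 2 leaves no redundancy during a single-disk failure.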
Questions:
1.) Hardware-wise, for simple home storage, are 4-core/8-thread servers with 16 GB of RAM OK?
2.) One of the servers is a 4-core-only variant; will that cause issues mixed with the 4-core/8-thread servers?
3.) Does anyone have experience with XCP-ng as a hypervisor and mounting iSCSI? Tips/suggestions?
4.) Any recommendations for setting up a 5-node cluster in general?
thanks in advance
Ray
admin
2,930 Posts
April 21, 2021, 3:26 am
Some comments:
You should plan 4 GB of RAM per HDD/OSD. If you start with 2 OSDs, 16 GB is fine, but once you add more you should increase the RAM. If that is not possible, you need to set a limit of 1 GB per OSD via the configuration settings, but this will impact performance.
For virtualisation workloads we recommend all-SSD; otherwise use HDDs with SSD journal disks at a 4:1 ratio to help performance. A 256 or 512 GB SSD is fine.
Generally it is better to have similar hardware (disks, CPU model), but for your case it should be OK.
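The arithmetic behind the RAM guideline is simply per-OSD budget versus node RAM; the per-OSD cap admin mentions corresponds to Ceph's standard `osd_memory_target` setting (assuming PetaSAN exposes stock Ceph options):

```python
# RAM budget per the 4 GB per HDD/OSD guideline (illustrative numbers
# matching this thread's nodes: 16 GB RAM, growing from 2 to 5 OSDs).
ram_per_osd_gb = 4
node_ram_gb = 16

needed_at_start = 2 * ram_per_osd_gb   # 8 GB for the initial 2 OSDs: fits
needed_at_full = 5 * ram_per_osd_gb    # 20 GB for 5 OSDs: exceeds 16 GB
fits_at_start = needed_at_start <= node_ram_gb   # True
fits_at_full = needed_at_full <= node_ram_gb     # False
```

If more RAM truly cannot be added, `ceph config set osd osd_memory_target 1073741824` is the usual Ceph-level way to cap each OSD near 1 GiB, at a real cost in caching and performance, as admin notes.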
Last edited on April 21, 2021, 3:27 am by admin · #2
petasanrd911
19 Posts
April 23, 2021, 12:18 am
Thanks for the feedback.
pedro6161
36 Posts
May 6, 2021, 12:05 pm
What if we don't have SSDs? Only SATA or SAS, or a mix? Can we use 4 SATA drives + 1 SAS drive for the journal, or does admin have another option instead of SSDs?
Quote from admin on April 21, 2021, 3:26 am
Some comments:
You should plan 4 GB of RAM per HDD/OSD. If you start with 2 OSDs, 16 GB is fine, but once you add more you should increase the RAM. If that is not possible, you need to set a limit of 1 GB per OSD via the configuration settings, but this will impact performance.
For virtualisation workloads we recommend all-SSD; otherwise use HDDs with SSD journal disks at a 4:1 ratio to help performance. A 256 or 512 GB SSD is fine.
Generally it is better to have similar hardware (disks, CPU model), but for your case it should be OK.
admin
2,930 Posts
May 7, 2021, 10:35 pm
No, do not use an HDD as a journal for another HDD.
If you do not have SSDs, do not create an external journal for the HDDs. Mind you, expect performance to drop by 2x or more, as all metadata IOPS will be served from the HDD.
Shiori
86 Posts
December 17, 2021, 3:47 pm
We are doing in production what you are planning (done?) to do.
SSD journals are a must, but use 500 GB or bigger, especially with 2 and 3 TB drives. They will last longer, as the wear can be spread out over the storage matrix inside.
Spinning drives should be 7200 rpm or faster, and PMR/CMR, not SMR. Many smaller 2 and 3 TB drives are faster than a few larger drives. SATA 6 Gbps is more than fast enough, but use non-RAID SAS controllers.
I would build your cluster and migrate your data to the 6 TB drives, then, as you add the 4 TB drives, wait for the rebalance to finish and remove the 6 TB drives. Larger drives in your cluster will be used more often, and thus be busier than the rest and lower your available IOPS. What you can do instead is use the 6 TB drives in a second RBD image for mass storage of non-VM workloads. Or you can customise Ceph to treat them the same as a smaller drive and leave them in the cluster, at the cost of losing some of Ceph's automatic balancing and about 33% of their capacity, since Ceph won't be able to create duplicate shards for the remaining 2 TB of space without you adding an additional 2 TB to two or more nodes, making an off-balanced cluster.
Use a RAID 1 mirror for your boot drive; this will save you a huge headache.
SSDs are a better option for VM storage but not a requirement, as adding more nodes will improve performance as you add more VMs. Adding SSD journals is a requirement, though, as it improves write response times.
Plan 1 CPU core per OSD, 1 per service (2 for iSCSI), and 1 for the OS. Plan 4 GB of RAM per OSD, 2 GB per 10 Gbps port, 4 GB for the OS, and at least 10 GB of headroom for OSD rebalancing.
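That sizing rule can be written as a small helper; the function name and defaults are illustrative, not anything from PetaSAN itself:

```python
def node_requirements(osds, ports_10g, other_services=0):
    """Per-node sizing per the rule above: 1 core per OSD, 1 per service
    (2 for iSCSI), 1 for the OS; 4 GB RAM per OSD, 2 GB per 10 Gbps port,
    4 GB for the OS, plus 10 GB rebalancing headroom."""
    cores = osds + other_services + 2 + 1       # +2 for iSCSI, +1 for the OS
    ram_gb = 4 * osds + 2 * ports_10g + 4 + 10
    return cores, ram_gb

# A node from this thread: 5 OSDs, dual 10 Gbps ports.
cores, ram = node_requirements(osds=5, ports_10g=2)
```

By this rule a fully populated node in this thread wants 8 cores and 38 GB of RAM, which suggests the 4-core/16 GB boxes are undersized once all five drives per node are in place.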
Split your network across two separate switches: one for iSCSI #1 and management, the other for iSCSI #2 and the backend. This will improve your network's ability to cope with the data surge that happens during periodic rebalancing, or when you make changes to the cluster (adding drives or nodes).
Your hardware does not have to be homogeneous (we run a mix of AMD and Intel CPUs and server types), as long as the number of network ports is the same as or more than the current number in your cluster.