PetaSAN 2.2 Released!
admin
2,930 Posts
November 7, 2018, 6:31 pm
Happy to announce PetaSAN 2.2 with the following main additions:
- Erasure Coded Pools for lower storage overhead.
- VLAN tagging support.
- SMART disk health monitoring and notifications.
The installer allows upgrade from version 2.1.
Last edited on November 8, 2018, 8:49 am by admin · #1
kokojamba
15 Posts
November 8, 2018, 9:15 am
Great news!
Kurti2k
16 Posts
November 8, 2018, 9:54 am
Nice - so I can reinstall my broken/misconfigured cluster 🙂
Last edited on November 8, 2018, 10:03 am by Kurti2k · #3
msalem
87 Posts
November 9, 2018, 11:47 am
Thanks for the new release.
I am installing the cluster now. I do not see the replica option, only 2 and 3. Is there something I am missing?
Thanks
admin
2,930 Posts
November 9, 2018, 12:57 pm
During initial deployment/cluster creation, we create a default replicated pool named "rbd"; you can choose it to be either 2x or 3x replicas.
Once the cluster is built, you can go to the Pools page in the management interface and add/delete/edit both replicated and EC pools. You can delete the default pool or increase its replica size if you wish.
Last edited on November 9, 2018, 1:00 pm by admin · #5
msalem
87 Posts
November 9, 2018, 3:34 pm
Hello Admin,
I have finished creating the cluster. https://ibb.co/cEoBOV
Can you please let me know which of these options will give us EC, and whether there is a calculator that would give me proper sizing? Right now I have 3 nodes and the size is 218.30 TB (0.01%), which is raw. With EC, what would it be? I will also be adding 3 more nodes (total of six); let me know if that would change the size or the number of nodes we can lose without losing the cluster.
Thanks
admin
2,930 Posts
November 9, 2018, 4:14 pm
This is the edit pool page; you cannot change the pool type of an existing pool. What you need is to add a new pool: on the Pools page there is an Add button. You will also first need to create an EC rule from the Rules page; you can pick from existing templates.
For six nodes you should use the k=4 m=2 profile, which is a recommended profile. Its overhead is 6/4 = 1.5, so if you write 100 GB it will use 150 GB raw. If you use compression, you will get lower usage. This setup allows 2 nodes with their disks to be destroyed without loss of data.
You also need a replicated pool in addition to the EC pool to create rbd images/iSCSI disks; the former stores the metadata. So with the above profile, your replicated pool should be 3x to also sustain 2 node losses.
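The overhead arithmetic above, (k+m)/k, is easy to sketch in a few lines of Python. This is only an illustration (the helper name is mine, not a PetaSAN API), using the 218.30 TB raw figure from the screenshot and ignoring Ceph full ratios and metadata overhead:

```python
def ec_usable_capacity(raw_tb, k, m):
    """Approximate usable capacity of an erasure-coded pool.

    The overhead factor is (k + m) / k, so usable = raw * k / (k + m).
    Ignores Ceph full ratios and metadata/journal overhead.
    """
    return raw_tb * k / (k + m)

# k=4, m=2: overhead (4+2)/4 = 1.5x, so writing 100 GB uses 150 GB raw.
print(ec_usable_capacity(218.30, 4, 2))  # roughly 145.5 TB usable of 218.30 TB raw
print(ec_usable_capacity(218.30, 0, 3) if False else 218.30 / 3)  # 3x replication for comparison: ~72.8 TB
```

For comparison, the same raw capacity under 3x replication yields only about a third of the raw space, which is why EC is attractive for capacity-oriented workloads.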
Last edited on November 9, 2018, 4:19 pm by admin · #7
msalem
87 Posts
November 9, 2018, 4:29 pm
Thanks for the reply.
Can I start with the same setup with 3 nodes and just add the other nodes later? I just want to test it before I add the other nodes.
Thanks
admin
2,930 Posts
November 9, 2018, 5:02 pm
You need k+m nodes/racks to run EC.
For 3 nodes you can use the k=2 m=1 profile for testing, but this is not recommended for real production. For testing you can set the min pool size to 2 (we set it to k+1 = 3) so your I/O will still be active if 1 node fails. Again, this is not safe and not recommended, as you would be writing to a pool that now has no redundancy.
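The placement and min-size rules described above can be sketched as a small checker. This is my own illustrative helper, not part of PetaSAN; it assumes one failure domain per node and the Ceph default of min_size = k + 1:

```python
def ec_pool_check(nodes, k, m, min_size=None):
    """Check an EC profile against a node count (one failure domain per node).

    Placing shards requires at least k + m nodes. With the default
    min_size = k + 1, I/O pauses once fewer than min_size shards remain.
    """
    if min_size is None:
        min_size = k + 1          # Ceph default for EC pools
    size = k + m
    if nodes < size:
        return f"Need {size} nodes for k={k} m={m}, have {nodes}"
    return {
        "size": size,
        "min_size": min_size,
        "data_safe_failures": m,              # failures without data loss
        "io_active_failures": size - min_size, # failures with I/O still active
    }

# k=2 m=1 on 3 nodes: with default min_size=3, any single node failure pauses I/O.
print(ec_pool_check(3, 2, 1))
# Lowering min_size to 2 keeps I/O active after one failure, but as noted above,
# you would then be writing with no redundancy.
print(ec_pool_check(3, 2, 1, min_size=2))
```

This mirrors the point in the post: k=4 m=2 needs six nodes, while k=2 m=1 fits three nodes only by trading away write redundancy during a failure.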
Last edited on November 9, 2018, 5:04 pm by admin · #9
msalem
87 Posts
November 10, 2018, 9:24 am
Thanks Admin,
Do you have any docs or notes you can share with us? Just to give scenarios and different types of setups, in addition to a description of k, m, etc.
Thanks