How to create an ec pool
alienn
37 Posts
January 11, 2019, 2:41 pm
Hi,
As written in the other section, I created a new PetaSAN cluster (24 OSDs, 3 journal drives). I wanted to use an EC pool, but I'm failing to create one.
What did I do:
- create a crush rule "ec-by-host-hdd" from template
- create a crush rule "ec-by-host-ssd" from template
- create an ec profile "ec-84-profile" (K8, M4; plugin: jerasure; technique: reed_sol_van)
- add a pool (the existing pool has 3 replicas and 512 placement groups):
- name: ceph-ec-84
- Pool type: EC
- EC Profile: ec-84-profile
- Number of PGs: 512
- Min Size: 9
- Compression: disabled
- Rule Name: ec-by-host-hdd
The pool gets created but never leaves the "inactive" state. When I investigated a little, I found that all the new PGs are stale and never get distributed to any OSDs.
Can anyone tell me what I did wrong? I have already rebuilt the whole cluster twice, but I always ran into the same problem.
A replicated pool, on the other hand, is created and usable without any issues.
Cheers,
alienn
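For reference, the GUI steps above roughly correspond to these plain Ceph commands (an illustrative sketch only; PetaSAN generates its own CRUSH rules, and the names here simply mirror the ones used in the post):

```shell
# Create the EC profile (K=8, M=4, jerasure/reed_sol_van, host failure domain)
ceph osd erasure-code-profile set ec-84-profile \
    k=8 m=4 plugin=jerasure technique=reed_sol_van \
    crush-failure-domain=host

# Create the EC pool with 512 PGs using that profile
ceph osd pool create ceph-ec-84 512 512 erasure ec-84-profile

# Match the Min Size setting from the post
ceph osd pool set ceph-ec-84 min_size 9
```

These commands require a running Ceph cluster with admin access, so they are shown here only to make the GUI steps concrete.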
admin
2,930 Posts
January 11, 2019, 5:16 pm
The steps are correct, but you do not have enough nodes.
You chose a large EC profile, "ec-84-profile" (K=8, M=4), which produces 12 chunks per object. The rule "ec-by-host-hdd" places each chunk on a different host, so this setup requires 12 hosts.
To use EC pools, we recommend a minimum of 5 nodes with a 3+2 profile, or better still, 6 nodes with a 4+2 profile.
For 3 nodes you can use a 2+1 profile, but it is not recommended.
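The chunk arithmetic behind this can be sketched as follows (a hypothetical helper, not PetaSAN code; the profile names are just labels):

```python
def hosts_required(k: int, m: int) -> int:
    """Minimum hosts for an EC pool whose CRUSH failure domain is host:
    each object is split into k data chunks plus m coding chunks, and a
    by-host rule must place every chunk on a distinct host."""
    return k + m

# Profiles mentioned in the thread:
for name, (k, m) in {"8+4": (8, 4), "4+2": (4, 2),
                     "3+2": (3, 2), "2+1": (2, 1)}.items():
    print(f"{name}: needs {hosts_required(k, m)} hosts, "
          f"tolerates {m} host failure(s)")
```

This makes the mismatch in the original post visible: the 8+4 profile needs 12 hosts, while the described cluster only has a few nodes.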
alienn
37 Posts
January 22, 2019, 4:30 pm
Thanks for this clarification. I thought that the chunks would be spread out over the OSDs and not over the hosts.
All in all it makes sense, but I had a severe misunderstanding here. 🙂