Unable to create file system with erasure coding
cperg
4 Posts
March 30, 2021, 2:51 pm
Hello...
I am trying to create a file system for CIFS and/or NFS export on PetaSAN 2.7.3, using a data pool with 3+2 erasure coding, but PetaSAN only shows replicated pools for selection in the "Add File System" dialog.
According to this
https://docs.ceph.com/en/latest/cephfs/createfs/
using erasure-coded pools for CephFS should be possible, as long as they have "overwrites" enabled. I had no idea what that was, but I tried to activate it with this command, as suggested in the Ceph documentation:
ceph osd pool set my_ec_pool allow_ec_overwrites true
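For what it's worth, the flag can be double-checked afterwards with the matching get command (my_ec_pool being my pool name from above; as far as I can tell from the Ceph docs, get accepts the same key):
ceph osd pool get my_ec_pool allow_ec_overwrites
which should report whether overwrites are now enabled on the pool.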
But no change. PetaSAN still will not let me select the erasure-coded data pool when adding a file system...
Suggestions, anyone?
admin
2,930 Posts
March 30, 2021, 3:53 pm
As per the link you posted:
If erasure-coded pools are planned for the file system, it is usually better to use a replicated pool for the default data pool to improve small-object write and read performance for updating backtraces. Separately, another erasure-coded data pool can be added (see also Erasure code) that can be used on an entire hierarchy of directories and files (see also File layouts).
So after you create a file system with a replicated data pool, you can create a file layout in PetaSAN that uses an EC pool. Later, when you use CIFS/NFS to create shares, you specify that layout to store the share data.
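On the Ceph level, a file layout corresponds roughly to pointing a directory at a different data pool. Just as a sketch, assuming the EC pool has already been added as a data pool of the file system (ceph fs add_data_pool) and CephFS is mounted at /mnt/cephfs, with names and paths as placeholders:
# store everything created under this directory in the EC pool
setfattr -n ceph.dir.layout.pool -v my_ec_pool /mnt/cephfs/ec_share
The file system metadata still stays on the replicated metadata pool.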
cperg
4 Posts
March 30, 2021, 7:53 pm
If only it were that easy... I could create a file system using a replicated pool. But when I click the "+ Add Layout" button, I only get a big red banner saying: "Alert! Cannot open the Add Layout page."
admin
2,930 Posts
March 30, 2021, 8:17 pm
Actually, it should be that easy 🙂
Is the cluster health OK? Can you show the output of ceph status?
cperg
4 Posts
March 30, 2021, 9:09 pm
root@psan1-nd1:~# ceph status
  cluster:
    id:     4c9b0226-31dd-44fd-b869-86b58cccc078
    health: HEALTH_OK

  services:
    mon: 3 daemons, quorum psan1-nd3,psan1-nd1,psan1-nd2 (age 21h)
    mgr: psan1-nd3(active, since 21h), standbys: psan1-nd2, psan1-nd1
    mds: NFS:1 {0=psan1-nd1=up:active} 2 up:standby
    osd: 5 osds: 5 up (since 20h), 5 in (since 20h)

  task status:
    scrub status:
      mds.psan1-nd1: idle

  data:
    pools:   2 pools, 192 pgs
    objects: 22 objects, 18 KiB
    usage:   5.1 GiB used, 55 TiB / 55 TiB avail
    pgs:     192 active+clean
admin
2,930 Posts
March 30, 2021, 9:55 pm
Health looks good. What is strange is that at this stage you should have 3 pools, not 2. As I understand it, you should create 2 replicated pools from the Pools page with their usage set to CephFS, then create a file system using these 2 pools: one for data and the other for metadata (many Ceph users name these pools cephfs_data and cephfs_metadata). After this, you create a new EC pool, also with CephFS as usage, then you add a new layout to the existing file system and specify the EC pool you just created.
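For reference, the equivalent steps with the plain Ceph tools look roughly like this (just a sketch; the pool names, PG counts and EC profile are placeholders, and on PetaSAN you would normally do all of this from the UI):
# replicated pools for the file system, one for metadata and one for data
ceph osd pool create cephfs_metadata 32
ceph osd pool create cephfs_data 128
ceph fs new myfs cephfs_metadata cephfs_data
# EC pool with overwrites enabled, added as an extra data pool
ceph osd pool create ec_data 128 128 erasure my_ec_profile
ceph osd pool set ec_data allow_ec_overwrites true
ceph fs add_data_pool myfs ec_data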
cperg
4 Posts
April 2, 2021, 10:36 am
OK, sorry for making a beginner's mistake here. I did not know that it is not allowed to use the same pool for metadata and data. After creating and using separate pools, I could indeed add layouts as intended, including one with erasure coding.
Unfortunately, the problems did not end there. First off, I was unable to connect to the NFS exports from Proxmox (apparently no response to the showmount command). However, I could connect successfully using CIFS. So everything is fine? Not quite... because performance was utterly mediocre. And I don't mean just a little slow - I mean slow to the point of uselessness. What is the point of using erasure coding anyway? Isn't it for someone who has to store a huge amount of data (a bunch of terabytes) and wants to save some money on hard drives compared to replication? I am sure that person would not want to wait weeks until their data is copied to the cluster - unless they are using ultra-high-performance hardware like 100% SSDs and top-notch CPUs, possibly combined with huge amounts of RAM, which would result not in saving money but in spending even more...
So the bottom line for me is: save yourself the headache and use replication. It's less complicated, and you get decent performance out of standard hardware. Yes, you have to buy a few more drives, but that's obviously worth it.
admin
2,930 Posts
April 2, 2021, 5:47 pm
Hard to say, as it depends on what hardware you have, what performance you get, and what workload and client you use to test. We have many installations using EC and they work very well. When doing writes with a large block size you can saturate your network. Generally, EC is good for backups and large-file copy tools, but it is poor for small-block-size applications like virtualisation and databases; in the latter case it would be approximately 2 times slower even with decent hardware.
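If you want to see the difference on your own setup, a quick fio comparison on a mounted share illustrates it (just a sketch; the path and sizes are placeholders):
# large sequential writes - the EC-friendly case
fio --name=seqwrite --directory=/mnt/share --rw=write --bs=4M --size=4G --end_fsync=1
# small random writes - where EC falls behind replication
fio --name=randwrite --directory=/mnt/share --rw=randwrite --bs=4k --size=1G --end_fsync=1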
Not sure why NFS is not working; we do have a bug when using a custom NFS gateway, but otherwise things work really well. Check your mount command, and try running it from other clients, such as other PetaSAN nodes, to see whether it is client-related. I assume your cluster is healthy and your NFS Status page shows the service is up.
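For example, from another client or from one of the PetaSAN nodes, something along these lines (the service IP and export name are placeholders, take them from your NFS settings):
# list the exports published on the NFS service IP
showmount -e 10.0.0.100
# try a plain NFS mount of one export (add -o vers=4.1 to force NFSv4 if needed)
mkdir -p /mnt/nfs_test
mount -t nfs 10.0.0.100:/export_name /mnt/nfs_test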