About PetaSAN and Ceph and iSCSI
shadowlin
67 Posts
December 8, 2017, 10:22 am
Is it possible to make PetaSAN work with an existing Ceph cluster? I have read some threads about that and learned that PetaSAN aims to be a plug-and-play solution, but I am testing an ARM-based Ceph cluster, so it is impossible to install PetaSAN on my ARM servers. :<
I read that lrbd is the way to use iSCSI with an existing Ceph cluster, in this thread: http://www.petasan.org/forums/?view=thread&id=69
I have tested exporting iSCSI from Ceph with SCST, tgt, and LIO+tcmu. The performance is not good. tgt is the slowest; SCST and LIO+tcmu are better but still not good enough (SCST with fileio gives the best performance). I get a throughput of around 500 MB/s with just an rbd image mapped directly. If I export it over iSCSI with SCST in blockio mode I only get about 100 MB/s, and with LIO+tcmu about 200 MB/s.
I haven't tested lrbd. Would it perform better? Is PetaSAN also based on lrbd? Does it support HA multipath?
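For reference, here is a minimal sketch of the kind of raw-RBD write-throughput baseline mentioned above, using the Ceph Python bindings (librbd) rather than a mapped krbd device, so it only approximates that path; the pool name "rbd" and image name "testimage" are hypothetical:

# A rough librbd write-throughput test; destructive to the image contents.
import time
import rados
import rbd

BLOCK = 4 * 1024 * 1024        # 4 MB per write
TOTAL = 1024 * 1024 * 1024     # 1 GB written in total
data = b"\0" * BLOCK

cluster = rados.Rados(conffile="/etc/ceph/ceph.conf")
cluster.connect()
ioctx = cluster.open_ioctx("rbd")                # hypothetical pool name
with rbd.Image(ioctx, "testimage") as image:     # hypothetical image name
    start = time.time()
    offset = 0
    while offset < TOTAL:
        image.write(data, offset)
        offset += BLOCK
    image.flush()
    elapsed = time.time() - start
print("throughput: %.1f MB/s" % (TOTAL / elapsed / 1e6))
ioctx.close()
cluster.shutdown()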
admin
2,930 Posts
December 8, 2017, 11:53 am
Yes, we target PetaSAN as a plug-and-play appliance. I am not sure about the performance of the other targets on ARM, but I would recommend testing lrbd with a SUSE SLE 12 SP3 kernel. PetaSAN uses this kernel, and I believe they do have an ARM build.
lrbd supports HA at the node level, so if you have 2 MPIO paths to 2 nodes and a node fails, the other path will still be working. It is not as flexible as moving paths between nodes, but it is still effective in providing HA.
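To make the node-level HA setup concrete, here is a sketch of roughly what a two-gateway lrbd configuration looks like, built as a Python dict and serialized to the JSON that lrbd stores in the cluster. The IQN, hostnames, addresses, and image name are all hypothetical, and the exact schema should be checked against the SUSE SES documentation:

# Approximate lrbd configuration: one target exported from two gateway
# hosts, each with its own portal, giving two MPIO paths for HA.
import json

IQN = "iqn.2017-12.org.example:gateway"    # hypothetical target IQN

config = {
    "pools": [{
        "pool": "rbd",
        "gateways": [{"target": IQN, "tpg": [{"image": "demo"}]}],
    }],
    "portals": [
        {"name": "portal-a", "addresses": ["192.168.10.1"]},
        {"name": "portal-b", "addresses": ["192.168.10.2"]},
    ],
    "targets": [{
        "target": IQN,
        "hosts": [
            {"host": "igw1", "portal": "portal-a"},
            {"host": "igw2", "portal": "portal-b"},
        ],
    }],
    "auth": [{"target": IQN, "authentication": "none"}],
}
print(json.dumps(config, indent=2))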
shadowlin
67 Posts
December 10, 2017, 4:26 am
Thanks for your quick and informative reply.
I didn't make it clear in my last post: we currently run all the OSDs on ARM, but the iSCSI gateways and Ceph monitors run on x86 servers.
So I am interested in the possibility of migrating my currently running Ceph cluster into a PetaSAN system.
About iSCSI performance, I want to know whether the iSCSI gateway would be the bottleneck of the whole system. Is there any performance penalty in exporting an rbd image over iSCSI? For example, if the same workload gets a throughput of 500 MB/s in krbd mode, what would the iSCSI performance be (with a 10Gb NIC)?
Thanks
admin
2,930 Posts
December 10, 2017, 7:29 am
Hi,
The iSCSI gateway is an extra layer between the client and Ceph, so it will affect performance. This varies quite a lot with your hardware and your IO pattern, so it is much better to try it yourself.
Having said this, in our experience, if your client's IO block size is large (1M or more) you will get close to native performance. At around 64K with fast SSDs, the latency of CPU and network starts to kick in and the extra gateway hop adds to it; we get about 65% of native. At very small sizes (4K or less) with SSDs we get around 40% of native; in version 1.5 we have made some kernel improvements that give us over 50% at such block sizes with SSDs. Again, it is much better if you test yourself.
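A quick way to reproduce this block-size dependency is to sweep write sizes against the iSCSI-attached device with O_DIRECT, so the page cache does not hide the gateway overhead. A minimal sketch, assuming a hypothetical device path /dev/sdx (warning: it overwrites data on that device):

# Sequential O_DIRECT writes at several block sizes; Linux only, and
# destructive to whatever is stored on DEVICE.
import mmap
import os
import time

DEVICE = "/dev/sdx"            # hypothetical iSCSI-mapped block device
TOTAL = 256 * 1024 * 1024      # bytes written per block size

def write_mbps(block_size):
    # O_DIRECT requires an aligned buffer; anonymous mmap is page-aligned.
    buf = mmap.mmap(-1, block_size)
    buf.write(b"x" * block_size)
    fd = os.open(DEVICE, os.O_WRONLY | os.O_DIRECT | os.O_SYNC)
    start = time.time()
    written = 0
    while written < TOTAL:
        os.write(fd, buf)
        written += block_size
    elapsed = time.time() - start
    os.close(fd)
    return TOTAL / elapsed / 1e6

for bs in (4 * 1024, 64 * 1024, 1024 * 1024):
    print("%8d bytes: %.1f MB/s" % (bs, write_mbps(bs)))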
shadowlin
67 Posts
December 10, 2017, 12:01 pm
Thanks for the information.
I will try it myself and compare against your data as a baseline.
shadowlin
67 Posts
December 20, 2017, 12:44 pm
Quote from admin on December 8, 2017, 11:53 am
Yes, we target PetaSAN as a plug-and-play appliance. I am not sure about the performance of the other targets on ARM, but I would recommend testing lrbd with a SUSE SLE 12 SP3 kernel. PetaSAN uses this kernel, and I believe they do have an ARM build.
lrbd supports HA at the node level, so if you have 2 MPIO paths to 2 nodes and a node fails, the other path will still be working. It is not as flexible as moving paths between nodes, but it is still effective in providing HA.
I am studying lrbd and I am confused about how it works.
It looks like lrbd still uses krbd to map the rbd image on the gateway as a block device and then exports it over iSCSI. Is that true?
admin
2,930 Posts
December 20, 2017, 1:44 pm
For regular kernels, yes, but SUSE kernels have a LIO rbd backstore module, target_core_rbd. It still makes use of krbd, but uses it directly, bypassing the block layer (and it adds iSCSI-specific operations like persistent reservations and VMware VAAI).
In the case of a SUSE kernel, lrbd will set up target_core_rbd as the backstore; otherwise it will use a regular iblock backstore mapping an rbd block device. PetaSAN does use the SUSE kernel and target_core_rbd, but we configure them directly without lrbd.
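For the regular-kernel path described above (krbd map plus an iblock backstore), the LIO side can be scripted with the rtslib-fb Python bindings. A minimal sketch, assuming the image has already been mapped to a hypothetical /dev/rbd0 and using a made-up IQN; this shows the generic iblock route, not PetaSAN's target_core_rbd setup:

# Export an already-mapped rbd device over iSCSI via LIO's iblock backstore.
from rtslib_fb import (BlockStorageObject, FabricModule, LUN,
                       NetworkPortal, Target, TPG)

# Backstore on the krbd-mapped device (e.g. created with: rbd map rbd/demo).
so = BlockStorageObject("demo", dev="/dev/rbd0")

iscsi = FabricModule("iscsi")
target = Target(iscsi, "iqn.2017-12.org.example:demo")  # hypothetical IQN
tpg = TPG(target, 1)
tpg.enable = True
# Demo mode: no per-initiator ACLs; do not use as-is in production.
tpg.set_attribute("authentication", "0")
tpg.set_attribute("generate_node_acls", "1")
tpg.set_attribute("demo_mode_write_protect", "0")

LUN(tpg, storage_object=so)
NetworkPortal(tpg, "192.168.10.1", 3260)   # hypothetical portal address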
shadowlin
67 Posts
December 20, 2017, 2:24 pm
Quote from admin on December 20, 2017, 1:44 pm
For regular kernels, yes, but SUSE kernels have a LIO rbd backstore module, target_core_rbd. It still makes use of krbd, but uses it directly, bypassing the block layer (and it adds iSCSI-specific operations like persistent reservations and VMware VAAI).
In the case of a SUSE kernel, lrbd will set up target_core_rbd as the backstore; otherwise it will use a regular iblock backstore mapping an rbd block device. PetaSAN does use the SUSE kernel and target_core_rbd, but we configure them directly without lrbd.
I see. Does target_core_rbd support all the rbd features, like object map etc.?
admin
2,930 Posts
December 20, 2017, 3:14 pm
It uses krbd, so it supports the rbd features provided by krbd. SLE is based on the 4.4 kernel, so object map is not supported in the kernel. It does support layering.
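On that point, images created with the default feature set often cannot be mapped by an older krbd at all. A minimal sketch, assuming the Ceph Python bindings and a hypothetical image named "demo", of disabling the feature bits a 4.4-era krbd typically rejects (exactly which features a given kernel accepts depends on its backports, so treat the list as an assumption):

# Strip features a 4.4-era krbd typically cannot map, leaving layering.
import rados
import rbd

cluster = rados.Rados(conffile="/etc/ceph/ceph.conf")
cluster.connect()
ioctx = cluster.open_ioctx("rbd")              # hypothetical pool name
with rbd.Image(ioctx, "demo") as image:        # hypothetical image name
    features = image.features()
    # Disable in dependency order: fast-diff needs object-map, which in
    # turn needs exclusive-lock.
    for flag in (rbd.RBD_FEATURE_FAST_DIFF,
                 rbd.RBD_FEATURE_OBJECT_MAP,
                 rbd.RBD_FEATURE_EXCLUSIVE_LOCK):
        if features & flag:
            image.update_features(flag, False)
ioctx.close()
cluster.shutdown()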
shadowlin
67 Posts
December 21, 2017, 3:47 am
Quote from admin on December 20, 2017, 3:14 pm
It uses krbd, so it supports the rbd features provided by krbd. SLE is based on the 4.4 kernel, so object map is not supported in the kernel. It does support layering.
I see. Have you tried Intel's SPDK iSCSI target? I learned that SPDK supports an rbd backend via librbd and librados.