PetaSAN against existing cluster
bdeetz
2 Posts
April 21, 2017, 4:46 am
We already have a sizable Ceph Jewel cluster that we'd like to add iSCSI to. Is it possible in PetaSAN to simply join a new node to the cluster and expose iSCSI for RBD images living in a pool?
Do we have to use the PetaSAN nodes to store data, or can they just act as a gateway to our existing Ceph infrastructure? The quick start guide implies that it is a self-hosted solution only.
admin
2,930 Posts
April 21, 2017, 11:41 am
The current version is self-hosted.
If you need a generic iSCSI gateway solution that works with an existing cluster you manage, I would recommend SUSE Linux Enterprise, which supports n-node active/active iSCSI using a custom rbd kernel module and the lrbd configuration tool:
https://github.com/SUSE/lrbd/wiki
This will not give you HA at the path level (no dynamic IPs that move from node to node), but you can still achieve HA by configuring multiple nodes per iSCSI disk. You could also set up the same solution by installing PetaSAN on nodes without deploying them, and using SSH to access the lrbd tool, which is included. Frankly, though, that would not add any value over the SUSE solution we are based on, unless you need to connect from Windows, in which case we have made small changes that work better.
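For reference, an lrbd configuration is a JSON document that maps gateway hosts, portal IPs, and RBD images to iSCSI targets, which is how you get multiple gateways (and therefore multiple client paths) per disk. The sketch below is only illustrative; the host names, IQN, pool, image, and addresses are placeholders, and the lrbd wiki linked above is the authoritative syntax reference:

```json
{
  "auth": [
    { "target": "iqn.2017-04.com.example.igw:sn.demo", "authentication": "none" }
  ],
  "targets": [
    { "target": "iqn.2017-04.com.example.igw:sn.demo",
      "hosts": [
        { "host": "igw1", "portal": "portal1" },
        { "host": "igw2", "portal": "portal2" }
      ] }
  ],
  "portals": [
    { "name": "portal1", "addresses": [ "192.168.10.101" ] },
    { "name": "portal2", "addresses": [ "192.168.10.102" ] }
  ],
  "pools": [
    { "pool": "rbd",
      "gateways": [
        { "target": "iqn.2017-04.com.example.igw:sn.demo",
          "tpg": [ { "image": "demo-image" } ] }
      ] }
  ]
}
```

With a layout along these lines, each gateway host serves its own portal IP for the same target/LUN, so a client using multipath keeps working through the surviving portals if one gateway goes down.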
Last edited on April 21, 2017, 11:56 am · #2
bdeetz
2 Posts
April 22, 2017, 7:23 am
I appreciate the feedback. For HA, could that be achieved with corosync/Pacemaker, or is there more customization/magic to it?
admin
2,930 Posts
April 22, 2017, 8:31 pm
Yes, you can add Pacemaker to the solution; in fact, the early design of PetaSAN relied on it. Just some points to consider:
- With lrbd, you can set up many gateways per iSCSI LUN, with each gateway handling some of the IPs/paths. If a gateway fails, the client loses some paths for the LUN but still has other paths running. I would recommend this approach since it works without much extra effort.
- If you want to use Pacemaker to fail over the IPs to another host, note that lrbd is designed to handle the LIO configuration for an entire gateway host itself. So you can use Pacemaker just to fail over to an entire backup node, but if you need more fine-grained resource assignment via Pacemaker LIO agents, you will probably need to do non-trivial customization.
- We had initially based the PetaSAN design on Pacemaker, but early stress tests showed it did not scale to the levels we wanted, so we switched to Consul. If you have a large number of targets/LUNs/paths, you may want to test whether Pacemaker can handle that many resources.
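As a rough illustration of the IP-failover variant (not something PetaSAN ships): with corosync/Pacemaker you would typically model each portal IP as an `ocf:heartbeat:IPaddr2` resource so the cluster can move it to a surviving node. The resource name, IP, and netmask below are placeholders:

```shell
# Hypothetical sketch: have Pacemaker move an iSCSI portal IP between
# gateway hosts (crmsh syntax, as used on SUSE). All values are placeholders.
crm configure primitive p_iscsi_vip ocf:heartbeat:IPaddr2 \
    params ip=192.168.10.100 cidr_netmask=24 \
    op monitor interval=10s
```

Note that this only moves the IP. As mentioned above, lrbd expects to own the LIO configuration on each gateway host, so the target must already be configured on the node that receives the IP, or you are back to writing custom resource agents.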
Last edited on April 22, 2017, 8:35 pm · #4