Multiple iSCSI disks utilizing a single IP per subnet
dws88
7 Posts
May 12, 2017, 2:54 pm
In FreeNAS you can add multiple "extents" to a single storage pool utilizing one portal IP. I would like to know if Ceph has that capability, and if not, whether it is a feature that might be implemented. It would be nice to have in order to avoid chewing through address space for each iSCSI disk I create.
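For context, the layout the question describes, several LUNs ("extents") exported through one portal IP, can be sketched in stock Linux LIO using the rtslib-fb Python bindings. This is a minimal illustration only: the IQN, IP address, and device paths are made-up placeholders, it assumes root access with configfs mounted, and it is not how PetaSAN configures its targets.

```python
# Minimal sketch: several block-backed LUNs ("extents") behind a single
# iSCSI portal IP using the rtslib-fb bindings for the Linux LIO target.
# IQN, IP, and device paths are illustrative placeholders.
from rtslib_fb import BlockStorageObject, Target, TPG, LUN, NetworkPortal
from rtslib_fb.fabric import ISCSIFabricModule

iscsi = ISCSIFabricModule()
target = Target(iscsi, "iqn.2017-05.com.example:store1")
tpg = TPG(target, 1)
tpg.enable = True

# One portal IP serves every LUN defined below it.
NetworkPortal(tpg, "192.168.10.10", 3260)

# Each backing device becomes one LUN ("extent") under the same portal.
for i, dev in enumerate(["/dev/rbd0", "/dev/rbd1", "/dev/rbd2"]):
    so = BlockStorageObject(name="disk{}".format(i), dev=dev)
    LUN(tpg, storage_object=so)
```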
admin
2,930 Posts
May 12, 2017, 5:54 pm
Actually, Ceph does not support iSCSI itself; PetaSAN provides it by layering the Linux kernel LIO target on top of Ceph and using Consul for load balancing.
To your issue: we do use one IP per active disk path, so yes, you are correct that we chew through a lot of IPs 🙂
In return, however, the system is highly scalable: a single iSCSI disk has many active paths that are load balanced across many nodes. There is also finer granularity in HA; when a node fails, we have much more control over how its IPs are reassigned to other nodes, and in the future we will be able to reassign individual paths dynamically at run time based on load statistics. Since these are internal backend IPs, in most cases you should have no trouble allocating large pools of them.
Also, since you can scale your disks, you can typically create larger disks that serve more clients concurrently, for example acting as a datastore for many Hyper-V or VMware clients. This in turn leads to a smaller total number of disks being used.
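To make the trade-off concrete, here is a small, purely illustrative Python sketch (not PetaSAN's code) of the idea described above: each disk exposes several path IPs, those IPs are spread across the cluster nodes, and when a node fails its paths are handed to the surviving nodes one path at a time. The disk names, node names, and path count are hypothetical.

```python
# Illustrative sketch only: spreading per-disk path IPs across nodes and
# reassigning them path by path when a node fails. This mirrors the idea
# described above, not PetaSAN's actual implementation.
from itertools import cycle

def assign_paths(disks, ips_per_disk, nodes):
    """Round-robin each disk's path IPs across the available nodes."""
    node_cycle = cycle(nodes)
    assignment = {}  # (disk, path_index) -> node
    for disk in disks:
        for path in range(ips_per_disk):
            assignment[(disk, path)] = next(node_cycle)
    return assignment

def fail_node(assignment, failed, surviving):
    """Hand the failed node's paths to the surviving nodes, one at a time."""
    node_cycle = cycle(surviving)
    for key, node in assignment.items():
        if node == failed:
            assignment[key] = next(node_cycle)
    return assignment

assignment = assign_paths(disks=["image-00001", "image-00002"],
                          ips_per_disk=4,
                          nodes=["node1", "node2", "node3"])
fail_node(assignment, failed="node3", surviving=["node1", "node2"])
for (disk, path), node in sorted(assignment.items()):
    print("{} path {} -> {}".format(disk, path, node))
```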
Last edited on May 12, 2017, 6:21 pm · #2