
IOPS Control.

Hello there.

 

I managed to install all 3 nodes and get the cluster running. I would like to say the iSCSI dynamic IP is a sweet touch: I can fail over LUNs and the IP will be taken over by other nodes, which is good for systems that do not support multipathing.

I would like to ask:

Is there a way to control the IOPS on the iSCSI side? For example, I would like to restrict IOPS or even read/write throughput.

There is no current support. There is ongoing work in Ceph to add QoS support, but it is still under development. We do plan to add this once it is stable.
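In the meantime, if you only need a rough cap, one workaround (not a PetaSAN or Ceph feature) is to throttle on the initiator side with the Linux cgroup v1 blkio controller. Below is a minimal sketch, assuming cgroup v1 is mounted at /sys/fs/cgroup/blkio, root privileges, and an example device number; the cgroup name is hypothetical, and the process doing the I/O must be placed in the cgroup for the limits to apply.

```python
# Workaround sketch, not a PetaSAN/Ceph feature: cap IOPS for the iSCSI block
# device on the initiator (client) side with the Linux cgroup v1 blkio
# controller. Assumes cgroup v1 mounted at /sys/fs/cgroup/blkio, root
# privileges, and an example device number (8:16 = /dev/sdb); the cgroup
# name is hypothetical.
import os

CGROUP = "/sys/fs/cgroup/blkio/petasan-limit"

def throttle(dev: str, read_iops: int, write_iops: int) -> None:
    """Cap read/write IOPS for one block device, given as "major:minor"."""
    os.makedirs(CGROUP, exist_ok=True)
    with open(os.path.join(CGROUP, "blkio.throttle.read_iops_device"), "w") as f:
        f.write(f"{dev} {read_iops}\n")
    with open(os.path.join(CGROUP, "blkio.throttle.write_iops_device"), "w") as f:
        f.write(f"{dev} {write_iops}\n")

def add_task(pid: int) -> None:
    """Place a process in the cgroup so the limits apply to its I/O."""
    with open(os.path.join(CGROUP, "cgroup.procs"), "w") as f:
        f.write(f"{pid}\n")

if __name__ == "__main__":
    throttle("8:16", read_iops=500, write_iops=200)
    add_task(os.getpid())  # in practice, add the PID of the app doing the I/O
```

Note this only limits clients you control; it does not protect the cluster from other initiators.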

Glad you liked the IP takeover. Note that we still recommend you also use multipath; the two are not mutually exclusive. Most current implementations are one of the following:

Use active/passive multipath, where the passive path is on a backup node.

Use active/active multipath to the same node; you get better performance, and if the node dies its paths are taken over via an IP failover to another node.

The above are the most common SAN setups. PetaSAN improves on them by providing multiple active/active paths on different nodes, so you get true scale-out at the iSCSI layer.
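If you want a quick sanity check that multipath really does have more than one path per LUN, you can count iSCSI sessions per target on the initiator. This is just a minimal sketch, not part of PetaSAN; it assumes open-iscsi's iscsiadm is installed, and the IQN shown in the comment is only an example.

```python
# Quick initiator-side sanity check (not part of PetaSAN): count iSCSI sessions
# per target IQN to confirm multipath has more than one path per LUN.
# Assumes open-iscsi's iscsiadm is installed.
import subprocess
from collections import Counter

def sessions_per_target() -> Counter:
    try:
        out = subprocess.run(["iscsiadm", "-m", "session"],
                             capture_output=True, text=True, check=True).stdout
    except subprocess.CalledProcessError:
        return Counter()  # iscsiadm exits non-zero when there are no sessions
    counts = Counter()
    # Typical line: "tcp: [1] 10.0.0.11:3260,1 iqn.2016-05.com.petasan:vol1 (non-flash)"
    for line in out.splitlines():
        parts = line.split()
        if len(parts) >= 4:
            counts[parts[3]] += 1
    return counts

if __name__ == "__main__":
    for iqn, n in sessions_per_target().items():
        print(f"{iqn}: {n} path(s)")
```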

Thank you for the reply.

So far everything looks good. There are a couple of things I just want to confirm are, or will be, part of the feature set.

1 - Can I put a node in maintenance mode, say for updates or hardware maintenance?

2 - Is there a way to drop OSDs in the GUI? For example, if a disk fails, I disable that disk and replace it.

3 - Is there API access to the GUI?

 

Thanks

1 - Can I put a node in maintenance mode, say for updates or hardware maintenance?

Yes, this is done via the maintenance tab. For example, you can turn off fencing or tell Ceph not to treat the OSDs as down and start recovery. Just be careful if you update software yourself: do not update the kernel or major OS components, since this could affect PetaSAN. Wait for new PetaSAN releases for major component updates.
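For reference, if you ever manage maintenance outside the UI, the equivalent at the Ceph level is setting the noout flag for the duration of the work and clearing it afterwards. A minimal sketch, assuming the ceph CLI and an admin keyring are available on the node; the PetaSAN maintenance tab may do more than this.

```python
# Sketch of the Ceph-level equivalent of a maintenance window: set "noout" so
# OSDs on the node are not marked out and no recovery is triggered while you
# work, then clear it. Assumes the ceph CLI and an admin keyring.
import subprocess
from contextlib import contextmanager

def ceph(*args: str) -> None:
    subprocess.run(["ceph", *args], check=True)

@contextmanager
def maintenance_window():
    ceph("osd", "set", "noout")        # don't auto-mark OSDs out during the work
    try:
        yield
    finally:
        ceph("osd", "unset", "noout")  # restore normal recovery behaviour

if __name__ == "__main__":
    with maintenance_window():
        input("Node in maintenance; press Enter when the work is done...")
```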

2 - Is there a way to drop OSDs in the GUI? For example, if a disk fails, I disable that disk and replace it.

When a disk fails, the UI will show an error on the disk and allow you to delete it from the cluster. Ceph has recovery built in, so whether you remove the disk or not, Ceph will start recovery to make sure all 3 replicas of all data are stored on the remaining active disks, and will report its status as OK when done; the failure is treated as if the cluster had shrunk. When you add a new disk, Ceph treats it as a cluster expansion; it does not differentiate between a brand-new disk and a replacement for a previous failure. Of course, you should delete the failed OSD from the UI so it does not show as down forever, and so the next OSD you add reuses the same ID, which minimizes the amount of data being moved.
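For anyone curious what the UI's delete does underneath, removing a failed OSD in plain Ceph amounts to roughly the following sequence. This is only a sketch, assuming the ceph CLI with an admin keyring; the OSD id is just an example.

```python
# Rough sketch of removing a failed OSD at the plain Ceph level (the PetaSAN
# UI does the equivalent for you). Assumes the ceph CLI with an admin keyring;
# OSD id 7 is just an example.
import subprocess

def ceph(*args: str) -> None:
    subprocess.run(["ceph", *args], check=True)

def remove_failed_osd(osd_id: int) -> None:
    ceph("osd", "out", str(osd_id))                  # stop mapping data to it
    ceph("osd", "crush", "remove", f"osd.{osd_id}")  # drop it from the CRUSH map
    ceph("auth", "del", f"osd.{osd_id}")             # remove its authentication key
    ceph("osd", "rm", str(osd_id))                   # free the id so a new OSD can reuse it

if __name__ == "__main__":
    remove_failed_osd(7)
```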

3 - Is there API access to the GUI?

There is no published API, but you can look at our core and backend Python layers and call them directly. We offer custom software development on a case-by-case basis; let us know if you need this service.