
PetaSAN 2.6 Released!

Happy to announce our latest release 2.6 with the following main features:

  • Scale-out active/active and highly available NFS 4 using NFS Ganesha.
  • Custom CephFS file layouts for multi-pool filesystem support.
  • Ability to change network interfaces for iSCSI/CIFS/NFS services post-deployment from the user interface.
  • General enhancements and bug fixes.

Note: For online upgrades, refer to the Online Upgrade Guide.

Do you have some info on setting up NFS shares?

 

We are updating the Quick Start and Admin guides; they should be done by tomorrow.

Thanks for the update!

Couple questions:

  • I see there is no configuration to limit clients by hostname/IP for NFS exports - is this planned for upcoming versions?
  • Say I wanted to limit the clients for a specific export now; I suppose I would need to get into the Ganesha containers and change /etc/ganesha/ganesha.conf? I assume it would get overwritten by any changes in the GUI...
  • How will HA work for NFS shares? I see that Ganesha daemons are started on multiple nodes with multiple IPs - how do I get the client to be aware of failover IPs?

All PetaSAN services are scale-out, active/active and highly available. This is achieved with the Consul service mesh layer, which distributes services and IPs among the nodes (specifically the nodes assigned the service role). The IPs themselves are virtual IPs not tied to a node. This is the case for iSCSI, CIFS/SMB and NFS.
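
As a rough illustration (the interface name below is a placeholder, not something PetaSAN requires), you can check which virtual IPs are currently assigned to a node by listing the addresses on the interface that serves the NFS subnet:

    # On a node assigned the NFS service role; replace eth1 with the
    # interface you chose for the NFS subnet.
    ip -4 addr show dev eth1
    # Each secondary address listed is one of the floating virtual IPs;
    # after a failover these addresses reappear on the surviving nodes.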

Currently we do not set client limits on an NFS export. Since we are scale-out and active/active, all exports are accessible from all servers, so if the number of clients increases and the load on your servers increases, just add more servers. If there are needs other than load to put maximum connection limits in place, I suppose we can, but we have not done it for iSCSI or CIFS. Our future thinking is more on load distribution and moving connections dynamically across the servers based on the load statistics we collect.

The client is not aware of the failover. At the lower protocol layer in its OS/kernel, at the connection level, it will be asked to re-transmit some session data, but it is unaware it is now talking to a different server; its I/O will resume where it left off.

Assume you have 1000 client hosts/applications, 5 servers and 50 allocated virtual IPs, so each server has 10 IPs. You can either manually distribute the 50 IPs across your clients so each client/application gets one IP, or better, use round-robin DNS to do this automatically and have all clients connect using a single NFS service name. If a server fails, its 10 IPs will be automatically moved/distributed to the 4 other servers, so they end up with 12-13 IPs each. This is transparent to the client/application: I/O will experience a short pause then resume where it left off. If the Quick Start does not clarify this, let us know.
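
For example, assuming a hypothetical service name nfs.petasan.local resolved round-robin by your DNS server to the pool of virtual IPs (the name, addresses and pseudo path below are made up for illustration), every client mounts the same way:

    # Round-robin DNS: multiple A records for a single service name, e.g.
    #   nfs.petasan.local.  IN A  10.0.0.101
    #   nfs.petasan.local.  IN A  10.0.0.102
    #   (one record per virtual IP)
    # Client mount using the service name instead of a fixed IP (NFS 4):
    mount -t nfs4 nfs.petasan.local:/testnfs /mnt/testnfs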

I think I did not phrase my question clearly....

I meant to ask if you plan to support specifying which clients have access to which shares. My understanding is that Ganesha supports it in the CLIENT section of the EXPORT block, kind of like below. I can probably do it by manually updating the configs in the containers, but it would be nicer to be able to do it through the GUI.

 

EXPORT
{
    Export_ID = 1;
    Path = /nfs/testnfs/;
    Pseudo = /testnfs/;
    Protocols = 4;
    Transports = TCP;
    Squash = No_root_squash;
    Attr_Expiration_Time = 0;

    FSAL
    {
        Name = CEPH;
        Filesystem = testfs;   # currently supports cephfs only
    }

    CLIENT
    {
        Clients = 192.169.30.182;
        Squash = No_Root_Squash;
        Access_Type = RW;
    }
}

Oh, sorry, I misunderstood; I thought this was a load limit on the number of clients.

If it is security blocking, yes, we do have plans. It could be by simple IP and/or via Active Directory/Kerberos as we have done with CIFS; the latter type is used by VMware with NFS 4. We may add both.
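
For what it is worth, if you edit the config by hand in the meantime, a client restriction like the one you posted can also be combined with Kerberos security using standard Ganesha export options; the values below are only an illustrative sketch, not what PetaSAN generates:

    CLIENT
    {
        Clients = 192.169.30.0/24;       # example subnet, adjust to your environment
        Access_Type = RW;
        SecType = krb5, krb5i, krb5p;    # require Kerberos instead of AUTH_SYS
    }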