
Failed to attach NFS SR to XCP-ng


It's not trying to mount as v3. In fact, the rpcinfo call specifies version 4 as well.

From Proxmox's NFSPlugin.pm:

# nfsv4 uses a pseudo-filesystem always beginning with /
# no exports are listed
$cmd = ['/usr/sbin/rpcinfo', '-T', $transport, $ip, 'nfs', '4'];

Which would expand to 'rpcinfo -T tcp 10.20.3.20 nfs 4' for me.

Response:

# rpcinfo -T tcp 10.20.3.20 nfs 4
rpcinfo: RPC: Program not registered

So I believe we should be getting a response from that command, and it isn't coming. I'm open to ideas, but I have two hypervisors with the same issue.
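
One thing that might be worth testing (I have not verified it against PetaSAN yet): rpcinfo can skip the rpcbind lookup and send the NULL call straight to port 2049 with its -n/-t form, so it should get an answer even if the server never registers with rpcbind:

rpcinfo -n 2049 -t 10.20.3.20 nfs 4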

But your previous post was about mountd and showmount, which are not supported in NFS 4. I can look into rpcinfo; is this what Proxmox uses?

Yes. I dug further into the issue and found where the check happens in the code. It uses the exact command above to check that the server is responsive before it will attempt the mount.

If there's a better way to get the status, I'm willing to test.

Thanks for the info, but NFS 4 does not use rpcbind/portmap/showmount/mountd.

In VMware/ESXi you just give it the IP and export name to mount an NFS 4 datastore; it does not discover any exports itself. Can you do that with Proxmox?

In fact, the NFS 4 protocol does not provide a way to get a list of exports the way NFS 3 does. Part of the issue is that most NFS servers export over both NFS 3 and NFS 4, which can lead to sloppy clients that use NFS 3 to discover shares but then mount them as NFS 4. Most NFS servers are either single-host or active/passive, so they can export using both protocols. PetaSAN uses scale-out active/active NFS, which is not compatible with NFS 3; we cannot allow an NFS 3 client to connect to a multinode active/active setup, as it will cause problems. Maybe it would be worth adding NFS 3 support as an option in the NFS settings, which would run the NFS cluster in active/passive failover mode? Would that be a useful feature?
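
For reference, a client that pins the protocol version at mount time does no NFS 3 discovery at all, e.g. something like the following (using the IP from earlier in this thread; the export path and mount point are just placeholders):

mount -t nfs -o vers=4.2 10.20.3.20:/export/share /mnt/test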

It checks whether NFSv4 is specified, which can be set in both the GUI and the pvesm CLI, and if it is, it uses rpcinfo only. The version can be specified as 4, 4.1 or 4.2, but all of them run the exact same rpcinfo command to test (rpcinfo -T tcp host nfs 4). The export name then has to be added manually. I can confirm that if I short-circuit the rpcinfo check, the mount works as expected in both the GUI and the console. I'm not sure it's worth building out everything it would take to fully support NFSv3; the support for v4 is there, we're just not getting the reply it expects. I suspect it works fine with some NFSv4 implementations.
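
For anyone hitting the same thing, the storage definition on the pvesm side looks roughly like the following; the storage name and export path here are only examples, not my exact config:

pvesm add nfs petasan-nfs --server 10.20.3.20 --export /share --options vers=4.1 --content images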

Expected output (I think). I enabled NFSv4 on TrueNAS to test.

$ rpcinfo -T tcp 10.79.1.7 nfs 4
program 100003 version 4 is ready and waiting

Code snippet:

if (defined($opts) && $opts =~ /vers=4.*/) {
    < snip >
    # nfsv4 uses a pseudo-filesystem always beginning with /
    # no exports are listed
    $cmd = ['/usr/sbin/rpcinfo', '-T', $transport, $ip, 'nfs', '4'];
} else {
    $cmd = ['/sbin/showmount', '--no-headers', '--exports', $server];
}
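
For comparison, the else branch above is the plain NFS 3 path; with $server filled in it runs

showmount --no-headers --exports 10.20.3.20

which is exactly the mountd-based export discovery that an NFSv4-only server won't answer.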

I am not sure what Proxmox is trying to do; why does it not just accept the IP and export name instead of fiddling with RPC?

NFS 4 does not rely on rpcbind/portmap:

https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/4/html/reference_guide/ch-nfs

I need to look more into this.
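
In the meantime, a quick way to confirm the NFS 4 service itself is reachable, without involving rpc at all, is a plain TCP check against port 2049, for example (syntax depends on the netcat variant installed):

nc -z -w 2 10.20.3.20 2049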
