
Failed to attach NFS SR to XCP-ng


Hi, I have been trying to add an NFS SR to XCP-ng, but it keeps saying there is a problem when trying to mount from my PetaSAN NFS.

Jan 29 00:14:04 xenserver-01 SM: [17831] sr_create {'sr_uuid': '1d248f42-98b0-ae86-8c69-dea957a8a18a', 'subtask_of': 'DummyRef:|8de0328c-8dae-49e8-855b-15dd96f3c973|SR.create', 'args': ['0'], 'host_ref': 'OpaqueRef:c17315c5-4d6e-52fb-3f28-c4624f1daf82', 'session_ref': 'OpaqueRef:4d7fc894-62bd-4bed-b437-706486c25dc3', 'device_config': {'server': '10.11.2.10', 'options': 'proto=tcp', 'SRmaster': 'true', 'serverpath': '/XenStore01', 'nfsversion': '4.1'}, 'command': 'sr_create', 'sr_ref': 'OpaqueRef:5c8102cd-642e-425b-b293-3749acf83cfa', 'local_cache_sr': 'c9a732c1-d847-b593-0363-18985c09b070'}

Jan 29 00:14:04 xenserver-01 SM: [17831] _testHost: Testing host/port: 10.11.2.10,2049

Jan 29 00:14:04 xenserver-01 SM: [17831] ['/usr/sbin/rpcinfo', '-p', '10.11.2.10']

Jan 29 00:14:04 xenserver-01 SM: [17831]   pread SUCCESS

Jan 29 00:14:04 xenserver-01 SM: [17831] ['/usr/sbin/rpcinfo', '-s', '10.11.2.10']

Jan 29 00:14:04 xenserver-01 SM: [17831]   pread SUCCESS

Jan 29 00:14:04 xenserver-01 SM: [17831] NFS service not ready on server 10.11.2.10

Jan 29 00:14:34 xenserver-01 SM: [17831] ['/usr/sbin/rpcinfo', '-s', '10.11.2.10']

Jan 29 00:14:34 xenserver-01 SM: [17831]   pread SUCCESS

Jan 29 00:14:34 xenserver-01 SM: [17831] NFS service not ready on server 10.11.2.10

Jan 29 00:15:04 xenserver-01 SM: [17831] ['/usr/sbin/rpcinfo', '-s', '10.11.2.10']

Jan 29 00:15:04 xenserver-01 SM: [17831]   pread SUCCESS

Jan 29 00:15:04 xenserver-01 SM: [17831] NFS service not ready on server 10.11.2.10

Jan 29 00:15:35 xenserver-01 SM: [17831] ['/usr/sbin/rpcinfo', '-s', '10.11.2.10']

Jan 29 00:15:35 xenserver-01 SM: [17831]   pread SUCCESS

Jan 29 00:15:35 xenserver-01 SM: [17831] NFS service not ready on server 10.11.2.10

Jan 29 00:16:05 xenserver-01 SM: [17831] ['/usr/sbin/rpcinfo', '-s', '10.11.2.10']

Jan 29 00:16:05 xenserver-01 SM: [17831]   pread SUCCESS

Jan 29 00:16:05 xenserver-01 SM: [17831] NFS service not ready on server 10.11.2.10

Jan 29 00:16:35 xenserver-01 SM: [17831] ['/usr/sbin/rpcinfo', '-s', '10.11.2.10']

Jan 29 00:16:35 xenserver-01 SM: [17831]   pread SUCCESS

Jan 29 00:16:35 xenserver-01 SM: [17831] NFS service not ready on server 10.11.2.10

Jan 29 00:16:35 xenserver-01 SM: [17831] Raising exception [73, NFS mount error [opterr=Failed to detect NFS service on server 10.11.2.10]]

Jan 29 00:16:35 xenserver-01 SM: [17831] lock: released /var/lock/sm/1d248f42-98b0-ae86-8c69-dea957a8a18a/sr

Jan 29 00:16:35 xenserver-01 SM: [17831] ***** generic exception: sr_create: EXCEPTION <class 'SR.SROSError'>, NFS mount error [opterr=Failed to detect NFS service on server 10.11.2.10]

Jan 29 00:16:35 xenserver-01 SM: [17831]   File "/opt/xensource/sm/SRCommand.py", line 110, in run

Jan 29 00:16:35 xenserver-01 SM: [17831]     return self._run_locked(sr)

Jan 29 00:16:35 xenserver-01 SM: [17831]   File "/opt/xensource/sm/SRCommand.py", line 159, in _run_locked

Jan 29 00:16:35 xenserver-01 SM: [17831]     rv = self._run(sr, target)

Jan 29 00:16:35 xenserver-01 SM: [17831]   File "/opt/xensource/sm/SRCommand.py", line 323, in _run

Jan 29 00:16:35 xenserver-01 SM: [17831]     return sr.create(self.params['sr_uuid'], long(self.params['args'][0]))

Jan 29 00:16:35 xenserver-01 SM: [17831]   File "/opt/xensource/sm/NFSSR", line 213, in create

Jan 29 00:16:35 xenserver-01 SM: [17831]     raise exn

Jan 29 00:16:35 xenserver-01 SM: [17831]

Jan 29 00:16:35 xenserver-01 SM: [17831] ***** NFS VHD: EXCEPTION <class 'SR.SROSError'>, NFS mount error [opterr=Failed to detect NFS service on server 10.11.2.10]

Jan 29 00:16:35 xenserver-01 SM: [17831]   File "/opt/xensource/sm/SRCommand.py", line 378, in run

Jan 29 00:16:35 xenserver-01 SM: [17831]     ret = cmd.run(sr)

Jan 29 00:16:35 xenserver-01 SM: [17831]   File "/opt/xensource/sm/SRCommand.py", line 110, in run

Jan 29 00:16:35 xenserver-01 SM: [17831]     return self._run_locked(sr)

Jan 29 00:16:35 xenserver-01 SM: [17831]   File "/opt/xensource/sm/SRCommand.py", line 159, in _run_locked

Jan 29 00:16:35 xenserver-01 SM: [17831]     rv = self._run(sr, target)

Jan 29 00:16:35 xenserver-01 SM: [17831]   File "/opt/xensource/sm/SRCommand.py", line 323, in _run

Jan 29 00:16:35 xenserver-01 SM: [17831]     return sr.create(self.params['sr_uuid'], long(self.params['args'][0]))

Jan 29 00:16:35 xenserver-01 SM: [17831]   File "/opt/xensource/sm/NFSSR", line 213, in create

Jan 29 00:16:35 xenserver-01 SM: [17831]     raise exn

Jan 29 00:16:35 xenserver-01 SM: [17831]

I tried mounting the shared folder manually and it worked fine. Is this related to the showmount -e command? When I run showmount -e against the IP it returns an error:

[00:22 xenserver-01 ~]# showmount -e 10.11.2.10

clnt_create: RPC: Program not registered

Please help me.

Try a test mount of the share from a Linux client machine.
Note we support NFS versions 4.1 and 4.2, which are required for multi-node active/active NFS servers.

Example mount:
mount -t nfs -o nfsvers=4.1,proto=tcp 10.0.1.50:/export1 /mnt/export1/

I can mount without a problem with mount -t nfs -o nfsvers=4.1,proto=tcp 10.11.2.10:/XenStore01 /mnt/XenStore01

but showmount -e still does not give any reliable output.

Searching, I found this:

The showmount command returns information maintained by the mountd daemon. Because NFS Version 4 does not use the mountd daemon, showmount will not return information about version 4 mounts.
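In practice that means a v4-only server has no separate export list to query: NFSv4 exposes all exports under a single pseudo-filesystem, so the closest substitute for showmount -e is mounting the server's root and listing it. A rough sketch only (needs root, and depends on how the server configures its pseudo-root; 10.11.2.10 is the PetaSAN address from this thread):

```shell
# NFSv4 publishes all exports under one pseudo-root ("/"), so listing a
# mount of "/" stands in for `showmount -e` on a v4-only server.
mkdir -p /tmp/nfs4root
mount -t nfs -o nfsvers=4.1,proto=tcp 10.11.2.10:/ /tmp/nfs4root
ls /tmp/nfs4root      # each entry corresponds to an export, e.g. XenStore01
umount /tmp/nfs4root
rmdir /tmp/nfs4root
```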

I suspected that too. Looking at my FreeNAS setup I see a lot of other rpcbind services, but for PetaSAN there is only the portmapper.

This is the output of rpcinfo -p <address>.
FreeNAS:

[08:22 xenserver-01 ~]# rpcinfo -p 192.168.10.110

   program vers proto   port  service
    100000    4   tcp    111  portmapper
    100000    3   tcp    111  portmapper
    100000    2   tcp    111  portmapper
    100000    4   udp    111  portmapper
    100000    3   udp    111  portmapper
    100000    2   udp    111  portmapper
    100000    4     7    111  portmapper
    100000    3     7    111  portmapper
    100000    2     7    111  portmapper
    100005    1   udp    891  mountd
    100005    3   udp    891  mountd
    100005    1   tcp    891  mountd
    100005    3   tcp    891  mountd
    100003    2   tcp   2049  nfs
    100003    3   tcp   2049  nfs
    100024    1   udp    644  status
    100024    1   tcp    644  status
    100021    0   udp   1000  nlockmgr
    100021    0   tcp    926  nlockmgr
    100021    1   udp   1000  nlockmgr
    100021    1   tcp    926  nlockmgr
    100021    3   udp   1000  nlockmgr
    100021    3   tcp    926  nlockmgr
    100021    4   udp   1000  nlockmgr
    100021    4   tcp    926  nlockmgr

PetaSAN:

[08:23 xenserver-01 ~]# rpcinfo -p 10.11.2.10

   program vers proto   port  service
    100000    4   tcp    111  portmapper
    100000    3   tcp    111  portmapper
    100000    2   tcp    111  portmapper
    100000    4   udp    111  portmapper
    100000    3   udp    111  portmapper
    100000    2   udp    111  portmapper

So there is no solution for this?

Many NFS server setups export using both NFS 3 and 4, so showmount will show the exports.

PetaSAN only supports NFS 4.1 and 4.2, which are required for multi-node active/active NFS.

I still don't have a solution for this. The only way I can do it is by creating a local file SR and mounting the NFS manually. That is a bad way of doing it, since the server can easily disconnect and XCP-ng will have problems. Or am I better off with iSCSI?

I am not sure I understand the issue. You need to mount the NFS 4 export from the client as per the mount command shown earlier. If the connection breaks due to a node going down, it will resume thanks to the system's high availability. Or are you looking for a mechanism so that, if a major problem occurs and the connection is lost, the export is re-mounted from the client side, like periodic mount attempts until it is back? Did I understand correctly?
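One client-side way to get the "periodic mount attempts until it is back" behaviour mentioned above is a systemd automount unit, which re-establishes the mount on the next access after an outage. This is only a sketch using the 10.11.2.10:/XenStore01 export from earlier in the thread; note that systemd requires the unit file names to match the mount path:

```ini
# /etc/systemd/system/mnt-XenStore01.mount
[Unit]
Description=PetaSAN NFS 4.1 export (sketch)

[Mount]
What=10.11.2.10:/XenStore01
Where=/mnt/XenStore01
Type=nfs
Options=nfsvers=4.1,proto=tcp,soft

# /etc/systemd/system/mnt-XenStore01.automount
[Unit]
Description=Automount for the PetaSAN export

[Automount]
Where=/mnt/XenStore01

[Install]
WantedBy=multi-user.target
```

Enable with: systemctl enable --now mnt-XenStore01.automount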


I have the same issue with Proxmox. Without mountd, the hypervisor cannot/will not mount; it simply reports that the server is down. It needs to see a list of exports, which showmount -e would show on a 'normal' NFS server running mountd.

root@prox1:/etc/pve# pvesm add nfs PetaSAN --server 10.20.3.20 --export /export --options soft,vers=4.2,mountproto=tcp
create storage failed: storage 'PetaSAN' is not online
root@prox1:/etc/pve# pvesm add nfs PetaSAN --server 10.20.3.20 --export /export --options soft,vers=4.1,mountproto=tcp
create storage failed: storage 'PetaSAN' is not online
root@prox1:/etc/pve# rpcinfo -p 10.20.3.20
program vers proto   port  service
100000    4   tcp    111  portmapper
100000    3   tcp    111  portmapper
100000    2   tcp    111  portmapper
100000    4   udp    111  portmapper
100000    3   udp    111  portmapper
100000    2   udp    111  portmapper

root@prox1:/etc/pve# showmount -e 10.20.3.20
clnt_create: RPC: Program not registered

root@prox1:/etc/pve# mount -t nfs -o soft,tcp,vers=4.2 10.20.3.20:/export /mnt/pve/PetaSAN/
root@prox1:/etc/pve#

root@prox1:/etc/pve# df -h | egrep '(Filesystem|10.20.3.)'
Filesystem                        Size  Used Avail Use% Mounted on
10.20.3.20:/export                9.6T     0  9.6T   0% /mnt/pve/PetaSAN

root@prox1:/etc/pve# touch /mnt/pve/PetaSAN/testfile

root@prox1:/etc/pve# ls -l /mnt/pve/PetaSAN/
total 0
-rw-r--r-- 1 nobody 4294967294 0 Mar 30 12:38 testfile


Result on a FreeNAS NFS server that does work:

root@prox1:/etc/pve# rpcinfo -p 172.16.14.58
program vers proto   port  service
100000    4   tcp    111  portmapper
100000    3   tcp    111  portmapper
100000    2   tcp    111  portmapper
100000    4   udp    111  portmapper
100000    3   udp    111  portmapper
100000    2   udp    111  portmapper
100000    4     7    111  portmapper
100000    3     7    111  portmapper
100000    2     7    111  portmapper
100005    1   udp    884  mountd
100005    3   udp    884  mountd
100005    1   tcp    884  mountd
100005    3   tcp    884  mountd
100003    2   tcp   2049  nfs
100003    3   tcp   2049  nfs
100024    1   udp    702  status
100024    1   tcp    702  status
100021    0   udp    726  nlockmgr
100021    0   tcp   1008  nlockmgr
100021    1   udp    726  nlockmgr
100021    1   tcp   1008  nlockmgr
100021    3   udp    726  nlockmgr
100021    3   tcp   1008  nlockmgr
100021    4   udp    726  nlockmgr
100021    4   tcp   1008  nlockmgr
root@prox1:/etc/pve# showmount -e 172.16.14.58
Export list for 172.16.14.58:
/mnt/pool1 (everyone)
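Since the manual mount works, the workaround I am considering (a sketch, not an official fix) is to keep the mount in /etc/fstab and point a plain 'dir' storage at it, so Proxmox never runs its showmount-based online check. This assumes the dir storage's is_mountpoint option behaves as documented, i.e. the storage is treated as offline unless the path is actually mounted:

```shell
# /etc/fstab entry (addresses/paths are the ones from this post):
# 10.20.3.20:/export  /mnt/pve/PetaSAN  nfs  soft,vers=4.2,proto=tcp  0  0
mount /mnt/pve/PetaSAN

# /etc/pve/storage.cfg fragment, registering the path as directory storage:
# dir: PetaSAN
#         path /mnt/pve/PetaSAN
#         is_mountpoint 1
#         content images,iso
```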

As per the previous reply:

The showmount command returns information maintained by the mountd daemon. Because NFS Version 4 does not use the mountd daemon, showmount will not return information about version 4 mounts.

PetaSAN only supports NFS 4.1 and 4.2, which are required for multi-node active/active NFS.

FreeNAS can support NFS 3 because it does not support multi-node active/active.

This probably needs to be fixed on the client side, to support NFS 4. Do you think otherwise? Note we have done a lot of testing with VMware, and it works very well with PetaSAN NFS 4.
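For what it's worth, a client-side liveness check that works for a v4-only server like PetaSAN can simply test whether TCP port 2049 answers, since NFSv4 needs neither the portmapper nor mountd registrations that the hypervisors' checks look for. A sketch using bash's /dev/tcp (10.11.2.10 is the PetaSAN address from this thread; this is not part of any hypervisor's storage driver):

```shell
# Liveness probe for an NFSv4-only server: instead of `showmount -e`
# or rpcinfo, just check that TCP port 2049 (the fixed NFS port) answers.
check_nfs4() {
    # $1 = host, $2 = port (defaults to the standard NFS port 2049)
    timeout 3 bash -c "exec 3<>/dev/tcp/$1/${2:-2049}" 2>/dev/null
}

if check_nfs4 10.11.2.10; then
    echo "NFS 4.x endpoint reachable"
else
    echo "port 2049 closed or filtered"
fi
```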

