
Custom route NFS issue?

Hi,

Testing out PetaSAN at present. Config:

eth0 - mgmt - default gateway defined.
eth1 - backend - flat subnet.
eth2 through eth4 - CIFS and iSCSI interfaces.
eth5 - NFS - custom gateway defined.

From the cluster nodes I can ping the gateway on eth5.
tcpdump on the firewall and the node I'm testing shows TCP 2049 allowed through and reaching the node. No return traffic is observed on either the mgmt NIC or the NFS NIC, even though the traffic hits the NIC on the cluster node. The NFS export is configured with restricted access set to no. NFS status is healthy.
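For reference, the capture I ran on the node was roughly along these lines (interface name is from my config above, filtering on the NFS port):

tcpdump -nni eth5 tcp port 2049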

version: 2.7.0

Telnet on port 2049 between nodes in the cluster on the local subnet works fine. The external route/gateway configuration doesn't appear to commit: netstat -rn on the nodes does not display the route added under the custom NFS settings.


This appears to be a bug, but I'm not 100% sure. It could be a misconfiguration, although everything appears healthy.


It is something we tested and it works here.

Note the public NFS to/from IPs are the dynamic (highly available) IPs assigned to the NFS services (there could be more than 1 IP per node); together with the subnet masks they determine the local NFS subnet. First make sure you can access NFS exports from an NFS client within this NFS subnet, i.e. without routing.
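For example, from a client sitting on the NFS subnet, something along these lines (the service IP and export path here are just placeholders, use one of the NFS IPs shown on your NFS status page and your own export name):

mount -t nfs NFS_SERVICE_IP:/EXPORT_NAME /mnt/test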

If accessing via a client within the NFS subnet works, then it is a routing issue. I would expect a router setup issue, but to double check can you show the output of:

ip route show table all | grep petasan
ip rule show | grep petasan

Thanks for replying. I have confirmed it is a routing issue, as implementing source NAT on the upstream firewall instantly resolved it. The PetaSAN appliance doesn't appear to return traffic when the source is outside of the local subnet. NFS definitely works if source NAT is applied on the firewall, and when clients are in the local subnet.
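For context, the workaround is equivalent to something like the following on a Linux-based firewall (my firewall uses its own syntax, this is only to illustrate the NAT behaviour, and the outgoing interface name is a placeholder):

iptables -t nat -A POSTROUTING -o NFS_FACING_IF -d 10.100.243.0/24 -p tcp --dport 2049 -j MASQUERADE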

Route output from one of the nodes:

root@testsannode3:~# ip route show table all | grep petasan
root@testsannode3:~# ip rule show | grep petasan
32765:    from 10.100.243.0/24 lookup petasan_nfs

The NFS subnet is VLAN 44, netmask 255.255.255.0, network 10.100.243.0/24. The NFS IPs assigned are 10.100.243.100 - 10.100.243.110. It is a three node cluster. NFS health shows the IPs assigned across the three nodes and healthy. The custom gateway added [10.100.243.1] doesn't appear to be taking. I'm wondering if there are logs that may help diagnose what is happening in terms of route install, to triage this better?
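As a quick manual test I could also try adding the route by hand and checking whether it sticks, something like the following (assuming the VLAN sub-interface is named eth5.44, which may not match what PetaSAN actually creates):

ip route add default via 10.100.243.1 dev eth5.44 table petasan_nfs
ip route show table petasan_nfs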

In terms of topology, it's mail server > | firewall | > NFS server, where both networks are directly connected from the central firewall's point of view, i.e. a missing route on the firewall won't explain this behaviour, and there are no upstream L3 devices. I can confirm that from a cluster node perspective there is an ARP entry for the NFS gateway (10.100.243.1), verifying L2 is OK end to end. I can also verify that, at least from an NFS client perspective, traffic passes the firewall and reaches the SAN node, but no response traffic is seen on the upstream firewall (NFS interface side) or observed reaching the client. The TCP handshake doesn't get past SYN - no SYN/ACK.
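The ARP check on the node was simply along these lines:

ip neigh show | grep 10.100.243.1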

I know it isn't optimal to have storage passed through a firewall, but this is primarily for testing purposes. NFS is working through the firewall with source NAT in place as an initial workaround, so that verifies a return route issue?


Thanks for the detailed feedback. It should work without source NAT. Can you try to test it without VLAN tagging please?

Removing the VLAN tag results in the route appearing in the "ip route show table all | grep petasan" output.

Looks like the custom default route isn't committing if it's a VLAN sub-interface off eth5.

root@testsan3:~# ip route show table all | grep petasan
default via 10.100.243.1 dev eth5 table petasan_nfs
root@testsan3:~# ip rule show | grep petasan
32765:    from 10.100.243.0/24 lookup petasan_nfs
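For comparison, with the VLAN tag in place I would have expected to see a line roughly like the following (sub-interface name assumed), but the table stays empty:

default via 10.100.243.1 dev eth5.44 table petasan_nfs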