
2.0 Cluster setup will not proceed in Hyper-V VM if Jumbo Packets enabled

Good morning,

We've played with versions 1.4 and 1.5, and are now trying to test 2.0 and tune for maximum performance on our hardware.

We are running PetaSAN on 3x Hyper-V 2016 hosts: Dell PowerEdge R630s with dual 8-core 2.8 GHz Xeons, 128 GB RAM, and a quad-port gigabit NIC plus dual 10 GbE. Management is on gigabit; iSCSI and the backend are on 10 GbE.

Building with standard MTUs completes without issue (it's up and running now).

If I enable Jumbo Packets (9000 MTU), the setup page will not proceed past the Network Configuration section (I've waited for over 10 minutes after providing the backend IP addresses and clicking Next).

 

Here's the troubleshooting I've done so far....

 

1) Verified Jumbo Packet support on the network and hypervisors. I can successfully pass packets up to 8972 bytes between the Hyper-V hosts on the relevant NICs (a Linux-side version of this check is sketched after this list).

2) Attempted to modify the Python node setup script prior to configuration, lowering the packet size to 8900. No change; configuration still sticks at the backend network configuration step.

3) Attempted to switch to jumbo frames (by updating both cluster_config.json and the Python node start script) after building the cluster on the standard MTU. After doing this and restarting, the nodes will not complete starting the PetaSAN services, and I'm unable to log in to the VM (it seems to go unresponsive).
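For anyone reproducing this, the Linux-side version of the check in step 1 (the same idea applies later inside the VMs) looks roughly like this; the peer address is just an example:

    # Send 9000-byte frames (8972-byte ICMP payload + 28 bytes of IP/ICMP
    # headers) with the Don't Fragment bit set; replace 10.0.2.12 with the
    # peer's address on the jumbo-enabled network.
    ping -M do -s 8972 -c 3 10.0.2.12

    # If any hop's MTU is below 9000, the pings fail with "Message too long"
    # or "Frag needed" instead of getting replies.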

 

What am I missing? Is this a bug?

We do not currently support virtualized setups; we simply do not test them enough to know what does not work. There could be several factors that may lead to this, including the switch settings (real + virtual). I would recommend you do not set jumbo frames in this setup. If you want to investigate further yourself, I would suggest you add an extra interface to the VMs, not used by PetaSAN, and try to set up jumbo frames on it manually to see if it works. If it does not, then it is probably a configuration issue outside the VMs; if it does work, then we can look into it.
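Something along these lines should be enough for the manual test (eth3 stands for whatever name the extra interface gets, and the addresses are just examples):

    # Give the spare interface a test address and raise its MTU
    ip addr add 10.0.9.11/24 dev eth3
    ip link set dev eth3 mtu 9000 up

    # Confirm the MTU change actually took effect
    ip link show eth3

    # Then run a don't-fragment ping against the other node's test address
    ping -M do -s 8972 -c 3 10.0.9.12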

Thank you for the clarification.

I did as you suggested. It appears that any attempt to give an interface a jumbo MTU (in my case I was targeting 8900) causes the network stack to become unstable inside the VM.

In my case, I added an additional NIC on one node, booted, manually assigned an MTU of 8900 via ifconfig, and the VM hung. After a forced reboot, it would hang again on restart.
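For reference, the command was roughly the following (eth3 stands in for the extra NIC's actual name):

    # Raise the MTU on the extra interface with legacy ifconfig
    ifconfig eth3 mtu 8900 up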

 

I guess I'll hang up my spurs on this one (unless you have other suggestions).

This means it is either a configuration issue outside of the VM or an issue with the Hyper-V virtual NIC kernel driver in the VM (hv_netvsc.ko). I did a quick search on this driver and it should have no issue supporting jumbo frames.
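If you want to confirm the interface really is backed by that driver and capture any driver messages around the MTU change, something like the following should help (the interface name is an example; run it from a second console before the change, or check the logs after a reboot):

    # Which kernel driver backs the interface (eth1 is an example name)
    ethtool -i eth1

    # Details of the Hyper-V netvsc module
    modinfo hv_netvsc

    # Kernel messages from the driver around the time of the MTU change
    dmesg | grep -i netvsc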

I will schedule this for testing, but it will not be in the coming days since our test cluster is in use. If you want to pursue this further at your end, I would suggest you double-check your setup (specifically the virtual switches). For the driver issue, you can try a stock SUSE SLE 12 SP3 / openSUSE Leap 42.3, since it is the same kernel we use.