
petaSAN vm

Our environment is ESXi 6.5 hosts, each with a VM running PetaSAN; I have tried both 1.3 and 1.3.1. From reading here I wanted to create the VMs using the "VMware Paravirtual" SCSI adapter, but with that PetaSAN never installs properly. With 1.3.1 the installation completes, but afterwards I can never connect to :5001 to configure a PetaSAN cluster. When I connect to the console and check it out, although the VM has 5 Ethernet adapters in total, only eth0 is listed; it has all of the correct IP settings but can't connect to any network and is unable to ping its gateway. The only way I can get PetaSAN to install as a VM on ESXi 6.5 hosts is with the default LSI Logic Parallel SCSI adapter. Your notes here indicate this setup is inferior to using "VMware Paravirtual", but have you ever tried this with ESXi 6.5? Thanks.

In our tests we used ESXi 6. We used the VMXNET3 paravirtualized network adapter as well as the PVSCSI paravirtualized SCSI adapter (for storage) without problems.

It is not clear from your post: do your network interfaces work when you choose the LSI Logic Parallel SCSI storage adapter and fail only when you use the PVSCSI paravirtualized SCSI adapter? I may have read it wrong, but it is not clear.

Yes, this has nothing to do with the network adapter at all. The issue is that PetaSAN won't install properly in a VM on ESXi 6.5 if the VM's SCSI adapter is VMware Paravirtual; I've tried both 1.3 and 1.3.1. First of all, ifconfig doesn't even show the additional 4 network adapters; it only shows eth0, which has all the proper settings but cannot even ping the gateway. So for all intents and purposes the one Ethernet adapter that comes up after the PetaSAN ISO install is useless, and you can never go on to the next steps to create a cluster. I have to use the LSI SCSI adapter to get things working.

Interesting. Give us a couple of days to verify this, since we are currently using our test cluster for the 1.4 release.

But from this link it could very well be that PVSCSI is not supported for major Linux distros under ESXi 6.5:

https://kb.vmware.com/selfservice/microsites/search.do?language=en_US&cmd=displayKC&externalId=1010398

If this is true, it sounds strange to me that there is no backward compatibility for supporting older PVSCSI drivers.

I will also look into whether there is a way to update the kernel driver to support 6.5.
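One way to check this from inside the guest is to see whether the kernel even ships the PVSCSI module. This is a minimal sketch; the module name vmw_pvscsi is an assumption based on the mainline Linux kernel, not something confirmed in this thread:

```shell
# Ask the kernel for the PVSCSI module and its metadata (vmw_pvscsi is
# the mainline Linux name for VMware's paravirtual SCSI driver)
modinfo vmw_pvscsi 2>/dev/null || echo "vmw_pvscsi module not found"

# If the VM actually booted from a PVSCSI disk, the module should be loaded
lsmod | grep vmw_pvscsi || echo "vmw_pvscsi not loaded"
```

If the module is missing entirely, the PVSCSI install failure would be a guest-driver problem rather than an ESXi 6.5 configuration problem.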

The fact that you see only 1 eth in ifconfig is normal, as the other interfaces have not been configured yet. I suspect that a bad disk driver has left the VMs in a messed-up state, which could lead to eth0 not pinging.
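To confirm the other NICs are actually present and merely unconfigured, a quick check from the console (a minimal sketch, assuming PetaSAN's standard Linux shell with iproute2 available) would be:

```shell
# ifconfig without -a hides interfaces that are down; -a lists every NIC
# the kernel detected, configured or not
ifconfig -a

# Equivalent with iproute2: print just the interface names
ip -o link show | awk -F': ' '{print $2}'
```

If all 5 adapters show up here, the VMXNET3 side is fine and the problem is confined to the storage driver leaving the system in a bad state.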


This makes sense, especially since these VMs use RDM (raw device mapping) to connect the VMs to the SSD drives.

FYI, I'm working on rebuilding the cluster using clean 1.4 installs (because after I upgraded 1.3.0 to 1.3.1 to 1.4.0 the benchmarks didn't work) and just want to share one observation: still unable to use VMware Paravirtual for the PetaSAN 1.4 VMs running on ESXi 6.5. This is very frustrating when you hear that the paravirtual controller is fastest, but I guess it's a driver issue, so there's nothing we can do about it. Stuck with LSI Logic Parallel.

I understand how you feel. The 6.5 paravirtualized driver is not supported for major Linux distributions yet.

Another way, instead of using RDM mapping, is to use PCI passthrough; this lets PetaSAN see the PCI controller directly without going through the VMware disk I/O path.
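After enabling passthrough for the storage controller in the ESXi host settings and adding it as a PCI device to the VM, the guest should see the controller as native hardware. A rough way to verify this from inside PetaSAN, via sysfs so it works even without the pciutils package installed (this is a sketch, not a PetaSAN-specific tool):

```shell
# A passed-through HBA appears to the guest as a native PCI device.
# PCI class codes starting with 0x01 are mass-storage controllers.
for dev in /sys/bus/pci/devices/*; do
    class=$(cat "$dev/class" 2>/dev/null)
    case "$class" in
        0x01*) printf '%s storage controller (class %s)\n' "${dev##*/}" "$class" ;;
    esac
done
```

With passthrough in place, Ceph talks to the disks through the controller's own driver rather than VMware's emulated SCSI path, sidestepping the PVSCSI driver issue entirely.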

You can find some useful links on how to set up FreeNAS under VMware using PCI passthrough that could apply here as well. Look for newer posts, as some of the older ones recommend against hyper-convergence. This is still uncharted territory for Ceph.

example: