
Performance Enhancement for PETA SAN cluster

Hello Team,

I have been trying to set up a new cluster on ESXi. Here are the details of my setup:

3 ESXi hosts.

1 VM per ESXi host, with all SSDs assigned to the VM via RDM.

I am looking for any settings that can enhance performance, as with this setup we are observing latency on the VMs running on the PetaSAN cluster.

Thanks

-Nitin.

Unfortunately we have not yet tested PetaSAN in a hyper-converged setup under ESXi, but it is being worked on.

Please update this thread once you do that kind of setup. However, in my standard setup I am facing the below error while adding the 3rd node:

 

Disk /dev/sdb prepare failure on node PS03

 

This is an SSD which I have mapped as a raw device to the VM, but somehow my cluster formation is not going through.

Can you please advise?

We did use RDM in ESXi 6 with the paravirtual SCSI driver and were able to install PetaSAN.

The error you get means Ceph is trying to format the disk but is unable to. It could be an RDM configuration issue. I suggest you give it another try, and maybe also install another Linux distro on the VM to see if it can access the disk.
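
If it helps, below is a minimal sketch, assuming the node VM still sees the RDM disk as /dev/sdb (the device named in the error above) and that it is run as root, to check whether the device exists, is not held busy, and is readable:

```python
#!/usr/bin/env python3
# Quick sanity check for an RDM-mapped disk (run as root inside the node VM).
# Assumes the device path /dev/sdb from the error message above.
import os
import sys

DEV = "/dev/sdb"

if not os.path.exists(DEV):
    sys.exit(f"{DEV} does not exist - the RDM mapping is not visible to the guest")

try:
    # On Linux, O_EXCL on a block device fails with EBUSY if the device
    # is already in use by the system (for example, mounted).
    fd = os.open(DEV, os.O_RDONLY | os.O_EXCL)
except OSError as e:
    sys.exit(f"Cannot open {DEV} exclusively: {e}")

try:
    data = os.read(fd, 4096)               # read the first 4 KiB
    print(f"Read {len(data)} bytes from {DEV}")
    # A leftover partition-table signature can also get in the way of a clean prepare.
    if data[510:512] == b"\x55\xaa":
        print("Note: device carries an existing MBR/protective-GPT signature")
finally:
    os.close(fd)
```

If the exclusive open fails with "device busy" or the read errors out, the problem sits below Ceph (the RDM mapping or the virtual SCSI controller) rather than in the PetaSAN installer.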

 

Thanks!

I will try that and let you know.

However, is there a log file for such things? In the Ceph logs I don't see a file that records cluster logs.

-Nitin.

I have a few more questions here (sorry if these sound like dumb ones):

* Do we need to keep all the ESXi hosts in a single vMotion cluster to create the PetaSAN environment?

* Should the SSDs we use be RAIDed?

 

Thanks

-Nitin.

PetaSAN has a log at /opt/petasan/log/PetaSAN.log; you can also access it from the UI on the node page.

Ceph logs to /var/log/ceph/, but by default it logs very little since verbose logging affects performance; you can raise the log levels if you need to troubleshoot issues.
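
As a quick way to pull problem lines out of those files, here is a minimal sketch using the log locations mentioned above; the keyword list is only an example and can be adjusted:

```python
#!/usr/bin/env python3
# Scan the PetaSAN and Ceph logs for lines that look like problems.
# Log locations are the ones mentioned above; the keywords are illustrative.
import glob

LOGS = ["/opt/petasan/log/PetaSAN.log"] + glob.glob("/var/log/ceph/*.log")
KEYWORDS = ("error", "fail", "exception", "traceback")

for path in LOGS:
    try:
        with open(path, errors="replace") as f:
            hits = [line.rstrip() for line in f
                    if any(k in line.lower() for k in KEYWORDS)]
    except OSError as e:
        print(f"{path}: cannot read ({e})")
        continue
    print(f"== {path}: {len(hits)} suspicious line(s) ==")
    for line in hits[-20:]:                 # logs are chronological, show the latest hits
        print(line)
```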

Re SSD / RAID: in most cases it is better not to RAID the disks and to let Ceph handle them itself. If you have a RAID card you should either configure it in JBOD mode or as single-disk RAID0. In a hyper-converged setup you could try to use PCI passthrough and let the VM own the controller; this should give the best results, but we have not tested it yet.
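
To confirm how the disks actually appear inside the PetaSAN VM (each SSD as its own device rather than one RAID volume), a small sketch like the following can help; the skip list and the HDD/SSD heuristic are only illustrative:

```python
#!/usr/bin/env python3
# List the block devices the VM sees, with model and size, so you can
# confirm each SSD shows up individually instead of as one RAID volume.
import os

SECTOR = 512  # /sys/block sizes are reported in 512-byte sectors

def read(path, default="?"):
    try:
        with open(path) as f:
            return f.read().strip()
    except OSError:
        return default

for dev in sorted(os.listdir("/sys/block")):
    if dev.startswith(("loop", "ram", "dm-")):
        continue  # skip virtual/mapper devices
    model = read(f"/sys/block/{dev}/device/model")
    size_gib = int(read(f"/sys/block/{dev}/size", "0")) * SECTOR / 2**30
    rotational = read(f"/sys/block/{dev}/queue/rotational")
    kind = "HDD" if rotational == "1" else "SSD/virtual"
    print(f"{dev:8s} {model:24s} {size_gib:8.1f} GiB  {kind}")
```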

Re vMotion: note that the PetaSAN VMs use local storage themselves, so they will not be subject to HA or vMotion. Other VMs (outside of PetaSAN) running on the same ESXi hosts will still be able to use HA/vMotion, since they see the storage provided by PetaSAN as shared rather than local. PetaSAN's HA is already designed into Ceph.