Performance Enhancement for PetaSAN cluster
nitin.mehta
5 Posts
August 1, 2017, 5:58 am
Hello Team,
I have been trying to set up a new cluster on ESXi. Here are the details of my setup:
3 ESXi hosts.
1 VM per ESXi host, with all SSDs assigned to the VM via RDM.
Are there any settings that could improve performance? With this setup we are observing latency on the VMs running on the PetaSAN cluster.
Thanks
-Nitin.
admin
2,930 Posts
August 1, 2017, 6:57 pm
Unfortunately we have not tested PetaSAN in a hyper-converged setup under ESXi, but it is being worked on.
Last edited on August 1, 2017, 6:57 pm · #2
nitin.mehta
5 Posts
August 2, 2017, 12:21 pm
Please update this thread once you have tested that kind of setup. Meanwhile, in my standard setup I am facing the error below while adding the 3rd node:
Disk /dev/sdb prepare failure on node PS03
This is an SSD that I have mapped as a raw device to the VM, but somehow my cluster formation is not going through.
Can you please advise?
admin
2,930 Posts
August 2, 2017, 4:30 pm
We did use RDM on ESXi 6 with the paravirtual SCSI driver and were able to install PetaSAN.
The error you get means Ceph tried to format the disk but was unable to; it could be an RDM configuration issue. I suggest you give it another try, and perhaps also install another Linux distro besides PetaSAN to see if it can access the disk.
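To check whether the VM can actually reach the RDM-mapped disk before Ceph tries to prepare it, something like the sketch below can help. It is an illustrative script, not a PetaSAN tool: `wipefs` and `dd` are standard util-linux/coreutils commands, and the device path is just the `/dev/sdb` from your error message. Note the `dd` write is destructive, so only run it on a disk that holds no data.

```shell
#!/bin/sh
# Sanity-check that a raw device is visible and writable from
# inside the VM. Illustrative only -- substitute the device that
# failed to prepare (here /dev/sdb on node PS03).

check_disk() {
    disk="$1"
    # Report stale partition tables or filesystem signatures left
    # over from an earlier attempt, a common cause of prepare
    # failures (--no-act only reports, it does not erase).
    wipefs --no-act "$disk" || return 1
    # Write a few megabytes of zeros to confirm the RDM mapping
    # allows I/O. DESTRUCTIVE: only on a disk with no data.
    dd if=/dev/zero of="$disk" bs=1M count=4 conv=notrunc || return 1
}

# Example (destructive): check_disk /dev/sdb
```

If `wipefs` reports old signatures and the disk is disposable, `wipefs --all` on the device before retrying the cluster join may clear the prepare failure.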
nitin.mehta
5 Posts
August 3, 2017, 6:19 am
Thanks!
I will try that and let you know.
Also, is there a log file for such things? In the Ceph logs I don't see a file that records cluster logs.
-Nitin.
nitin.mehta
5 Posts
August 3, 2017, 7:01 am
I have a few more questions (sorry if these sound dumb):
* Do we need to keep all the ESXi hosts in one single vMotion cluster to create a PetaSAN environment?
* Should the SSDs we use be RAIDed?
Thanks
-Nitin.
admin
2,930 Posts
August 3, 2017, 12:20 pm
PetaSAN has a log in /opt/petasan/log/PetaSAN.log; you can also access it from the UI on the node page.
Ceph logs to /var/log/ceph/, but by default it logs very little since verbose logging affects performance; you can raise the log levels if you need to troubleshoot issues.
Re SSDs and RAID: in most cases it is better not to RAID the disks and to let Ceph handle them itself. If you have a RAID card, configure it in JBOD mode or as single-disk RAID0. In a hyper-converged setup you could try PCI passthrough so the VM owns the controller; this should give the best result, but we have not tested it yet.
Re vMotion: note that the PetaSAN VMs use local storage themselves, so they are not subject to HA or vMotion. Other VMs (outside of PetaSAN) running under the same ESXi hosts can still use HA/vMotion, since they see the storage provided by PetaSAN as shared rather than local. PetaSAN's HA is already designed into Ceph.
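For reference, raising the Ceph log levels is done in the cluster's configuration file. The snippet below is only a sketch: the subsystem names (`debug osd`, `debug filestore`, `debug ms`) are standard Ceph debug options, but the numeric levels shown are illustrative, and verbose levels should be reverted after troubleshooting since they hurt performance.

```ini
; /etc/ceph/ceph.conf -- illustrative debug levels for troubleshooting only
[global]
debug ms = 1

[osd]
debug osd = 10
debug filestore = 10
```

After editing, restart the affected daemons (or inject the settings at runtime with `ceph tell`) for the new levels to take effect; the extra detail will appear under /var/log/ceph/.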