
High CPU usage on GlusterFS


Another question I have is whether DNS resolution is being used at all in any of the services, or just the local hosts file?

I think I may have some issues at the DNS level...

 

We use a local hosts file that is synced via Consul, so if a node makes a change, all nodes will get it. Also, most core services, like the Ceph monitors, talk via straight IPs.
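For illustration, pushing a hosts entry through Consul's KV store and having a watch handler rebuild the hosts file could look roughly like this (the key prefix and handler script are placeholder names to show the idea, not the exact mechanism we use):

consul kv put cluster/hosts/node3 "10.0.1.13 node3"
consul watch -type=keyprefix -prefix=cluster/hosts/ /usr/local/bin/rebuild-hosts.sh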

DNS is set up on the management interface only, during the ISO installer. It is currently only used internally if you set up an external internet time server, and it can be used by the admin for apt-get, for example.
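If you want to check which source is actually resolving a node name, something like

getent hosts node1

(with node1 as a placeholder hostname) goes through nsswitch.conf, so on the default "files dns" order it will answer from /etc/hosts before ever touching DNS.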

One more thing to try is turning off the scrub function:

ceph osd set nodeep-scrub --cluster CLUSTER_NAME
ceph osd set noscrub --cluster CLUSTER_NAME
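Once the load settles down, scrubbing can be turned back on with the matching unset commands:

ceph osd unset nodeep-scrub --cluster CLUSTER_NAME
ceph osd unset noscrub --cluster CLUSTER_NAME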

Just to provide an update...

I have reinstalled all 3 PetaSAN VM nodes using 1.5 without changing any hardware and using pretty much the same settings.

Getting much better performance from inside PetaSAN-hosted VMs and no drive utilization spikes as of yet...

Another quick update... With 1.5, I am now reaching between 4k and 5k IOPS during a storage vMotion of a VM.

On PetaSAN 2.0, I was barely reaching 1.5k IOPS.
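For anyone wanting to compare numbers, a short random-write fio run from inside a guest VM gives a rough IOPS figure. The file path, size and runtime below are just placeholders:

fio --name=randwrite-test --filename=/path/to/testfile --size=1G --bs=4k --rw=randwrite --ioengine=libaio --iodepth=32 --direct=1 --runtime=60 --time_based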

Glad you are happy with v 1.5.

If you add a 128 GB SSD as a journal for every 4 HDDs, you will get a 2x to 4x increase in performance (2x is the bare minimum, but it will likely be higher). From an IOPS/$ view this is a must-have. You can also add a controller with write-back cache, which will speed things up by 4-5x. Of course, our hardware recommendation is for an all-flash solution, which gives totally different performance.
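For context, outside of PetaSAN's own disk management, the Filestore-era way of creating an OSD with its journal on an SSD would look roughly like the line below, where /dev/sdd is the data HDD and /dev/sdb the journal SSD (the device names are placeholders, and ceph-disk was later replaced by ceph-volume in newer releases):

ceph-disk prepare --cluster CLUSTER_NAME /dev/sdd /dev/sdb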

Bluestore (PetaSAN v 2.0) is better for an all-flash solution: it avoids the double writes of Filestore (v 1.5), at the expense of more I/O operations for database access. For slow systems with HDDs, that extra latency may be too much.

We are currently using 3 x 256 GB Samsung Pro SSDs as journal drives with 3 x 4TB SATA OSD drives spinning at 7200 rpm.

We plan on adding another 3 x 4TB SATA OSD drives...
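If my math is right, that would put us at two HDDs per journal SSD once the new drives are in (6 OSD drives across the 3 SSDs), comfortably inside the one-SSD-per-four-HDDs guideline above.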

It's been running for 12 hours without a hitch, and we're getting way better performance than we were without PetaSAN on mirrored local drives...

Our focus is redundancy for high availability as we are not running many VMs but need them to be always available...

***Correction: the OSD drives are SAS, not SATA.
