High CPU usage on GlusterFS
mcharlebois
11 Posts
April 9, 2018, 8:00 pm
Another question I have: is DNS resolution used at all by any of the services, or only the local hosts file?
I think I may have some issues at the DNS level...
admin
2,930 Posts
April 9, 2018, 8:29 pm
We use a local hosts file that is synced via Consul, so if a node makes a change, all nodes will get it. Also, most core services like the Ceph monitors talk via straight IPs.
DNS is set up on the management interface only during the ISO installer. It is currently only used internally if you set up an external internet time server, and it can be used by the admin for apt-get, for example.
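If you want to rule out a DNS problem on a node, the hosts file and cluster membership can be checked directly. A minimal sketch, assuming the Consul agent is running locally as on a default PetaSAN install; "node1" is just a placeholder hostname:
# hosts file that Consul keeps in sync across nodes
cat /etc/hosts
# list the nodes the local Consul agent can see
consul members
# resolve a node name; this should be answered from /etc/hosts, not DNS
getent hosts node1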
admin
2,930 Posts
April 9, 2018, 8:35 pm
One more thing to try is to turn off the scrub function:
ceph osd set nodeep-scrub --cluster CLUSTER_NAME
ceph osd set noscrub --cluster CLUSTER_NAME
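Scrubbing should be re-enabled once the load issue is sorted out, since it is what verifies data integrity in the background. The unset form mirrors the set commands above:
ceph osd unset nodeep-scrub --cluster CLUSTER_NAME
ceph osd unset noscrub --cluster CLUSTER_NAME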
mcharlebois
11 Posts
April 9, 2018, 10:27 pm
Just to provide an update...
I have reinstalled all 3 PetaSAN VM nodes using 1.5 without changing any hardware and using pretty much the same settings.
Getting much better performance from inside PetaSAN-hosted VMs and no drive utilization spikes as of yet...
mcharlebois
11 Posts
April 10, 2018, 12:23 am
Another quick update... With 1.5, I am now reaching between 4k and 5k IOPS during a storage vMotion of a VM.
On PetaSAN 2.0, I was barely reaching 1.5k IOPS.
admin
2,930 Posts
April 10, 2018, 12:21 pm
Glad you are happy with v 1.5.
If you add a 128 GB SSD as a journal for every 4 HDDs, you will get a 2x to 4x increase in performance (2x is the bare minimum, but it will likely be higher). From an IOPS/$ view this is a must-have. You can also add a controller with write-back cache, which will speed things up by 4-5x. Of course, our hardware recommendation is an all-flash solution, which gives a totally different level of performance.
Bluestore (PetaSAN v 2.0) is better for an all-flash solution: it avoids the double writes of Filestore (v 1.5) at the expense of more IO operations for DB access. For slow systems with HDDs, this may be too much latency.
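If in doubt about which backend a given OSD is actually running, it can be queried from the cluster. A quick check, not PetaSAN-specific; OSD id 0 is just an example, and the key name may vary slightly by Ceph version:
# report the object store backend (filestore or bluestore) for OSD 0
ceph osd metadata 0 --cluster CLUSTER_NAME | grep osd_objectstore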
mcharlebois
11 Posts
April 10, 2018, 12:40 pm
We are currently using 3 x 256 GB Samsung Pro SSDs as journal drives with 3 x 4TB SATA OSD drives spinning at 7,200 rpm.
We plan on adding another 3 x 4TB SATA OSD drives...
It's been running for 12 hours without a hitch and getting way better performance than we were without PetaSAN on mirrored local drives...
Our focus is redundancy for high availability, as we are not running many VMs but need them to be always available...
Last edited on April 10, 2018, 12:41 pm by mcharlebois · #17
mcharlebois
11 Posts
April 10, 2018, 1:12 pm
***Correction
OSD drives are SAS, not SATA.
Last edited on April 10, 2018, 1:13 pm by mcharlebois · #18