Seek maximum efficiency and ratio
richard.su
6 Posts
November 18, 2022, 9:36 am
Seek maximum efficiency and ratio
My setup is hyper-converged with PVE; the hardware configuration is three physical servers forming a PVE virtualization cluster.
-----------------Per Physical Server Hardware-------------
DELL R730xd
PERC H730P Mini (Integrated RAID Controller) with 2GB Cache+Battery
CPU: 2 sockets, (14 cores / 28 threads) × 2 CPUs
Memory: 512GB DDR4 2400
Disk:
480GB SATA SSD ×2 (hardware RAID1, vd0, PVE system)
16TB SAS HDD×6 (7200rpm) non-RAID
1TB SAS HDD×3 (7200rpm) non-RAID
3.2TB NVMe PCIe HHHL ×1
4×10GE network card
----------The following is the PVE configuration------------
PVE Version: 7.2
Disk allocation: directory storage, formatted EXT4 (the most widely used, and it supports snapshots)
network:
bond0 (PVE cluster synchronization) 2×10GE (OVS bridge, vmbr0, active-backup, mtu:9000)
bond1 (workload) 2×10GE (OVS bridge, vmbr1, active-backup, mtu:9000)
-------------------------------------------------- ------
I have tested at least 3 ratios, and the CIFS transfer speed is 30-40 MB/s.
How should I configure this to maximize speed and performance? The constraints: the physical network topology cannot be changed, the 4×10GE NICs cannot be changed, and only 2 of the 10GE ports can be used in the end.
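For reference, the bond/bridge setup above corresponds roughly to this /etc/network/interfaces fragment on each node (the physical NIC names below are placeholders, not my exact ports; bond1/vmbr1 are defined the same way on the other two ports):

auto bond0
iface bond0 inet manual
    ovs_type OVSBond
    ovs_bridge vmbr0
    ovs_bonds eno1 eno2
    ovs_options bond_mode=active-backup
    mtu 9000

auto vmbr0
iface vmbr0 inet manual
    ovs_type OVSBridge
    ovs_ports bond0
    mtu 9000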
admin
2,930 Posts
November 18, 2022, 10:57 pm
How are you testing to get the 30-40 MB/s? A file copy? dd? fio? diskspd? CrystalDiskMark? With what parameters?
What do you mean by "tested 3 ratios"?
Do you have an SSD pool, an HDD pool, or a pool with all drive types mixed? Have you set a crush rule on your pool, or are you using the default?
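For example, an fio run along these lines (the parameters and the path are only a suggestion; point --filename at a file on your mounted share) gives more repeatable numbers than a plain file copy:

fio --name=seqwrite --filename=/mnt/cifs-share/testfile \
    --rw=write --bs=4M --size=8G --numjobs=1 \
    --ioengine=libaio --direct=1 --group_reporting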
Last edited on November 18, 2022, 11:01 pm by admin · #2
richard.su
6 Posts
November 19, 2022, 9:07 am
Hi Admin,
https://photos.app.goo.gl/JUBy8Rq1DHtxWRWu6
The latest image is the speed test.
richard.su
6 Posts
November 19, 2022, 9:14 am
With the existing hardware, how should I configure it for the best results?
richard.su
6 Posts
November 19, 2022, 9:15 am
"Have you set a crush rule on your pool, or are you using the default?"
By default.
richard.su
6 Posts
November 19, 2022, 9:35 am
PERC H730P Mini (integrated RAID controller) with 2GB cache + battery
Should I use the physical RAID card to put each server's 6×16TB hard drives into RAID5?
admin
2,930 Posts
November 20, 2022, 12:05 pm
To get good performance, first get the cluster health in the dashboard to OK; if it is reporting issues, you need to get those solved first. You also should not mix SSD and HDD OSDs in the same pool; in your case you can test with only one type. In bare-metal setups you would create 2 pools and use 2 crush rules for them rather than the default rule. In a virtualized setup it is more complicated, because the system does not detect the drive types; you could handle this by setting device classes manually, but for a simple virtual setup like yours I would add only HDD or only SSD as OSDs and not mix them. Obviously SSD will give better performance.
Having a large number of nodes and OSDs helps the system scale to many client connections, but (for CIFS) it will not increase the performance of a single client. If your test adds more connections with parallel copy operations running at the same time, the total throughput will go up.
We do not support virtual environments, but some tips: it is not good to add 2 virtual OSDs that store on the same physical drive; it will have a negative impact. Running many virtualized nodes on the same host can also cause resource conflicts. Some virtualisation systems like VMware put throttle caps on VM i/o at various queue levels (you can search for these parameters), so you may need to increase the limits on those queues. You may also experiment with i/o passthrough or raw device mappings. Information on virtualising other storage systems, such as FreeNAS, may be applicable.
As for RAID, do not use it; it is better to have many (real) OSDs than fewer OSDs backed by RAID.
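If you do go the manual device-class route, the commands are roughly as follows (illustrative only; replace the OSD ids and pool name with yours):

ceph osd crush rm-device-class osd.0
ceph osd crush set-device-class ssd osd.0
ceph osd crush rule create-replicated rule-ssd default host ssd
ceph osd pool set <your-pool> crush_rule rule-ssd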
Last edited on November 20, 2022, 5:24 pm by admin · #7