Very slow on HDD. I will close my project
admin
2,930 Posts
July 1, 2021, 2:30 pm
Is the cluster already loaded with other workloads? Do you see any load in the node stats charts: mem/disk/cpu % utilization?
If not loaded, can you run a UI benchmark: 4k IOPS, 1 client, 1 thread, 1 min?
Is the copy from a VM or directly from Windows?
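Before running the benchmark, it can help to know what a reasonable result would even look like. A minimal sketch of the back-of-the-envelope math, using assumed rule-of-thumb numbers (roughly 75-100 random 4k IOPS per 7200 rpm HDD, 3x replication) rather than anything measured on this cluster:

```python
# Rough sanity check for an all-HDD pool. The per-disk IOPS figure and the
# replica count below are illustrative assumptions, not measured values.

def expected_random_write_iops(num_disks, iops_per_disk=80, replica_count=3):
    """Estimate aggregate client-visible random-write IOPS for an HDD pool.

    With N-way replication, each client write turns into ~N backend writes,
    so the client-visible ceiling is the backend total divided by N.
    """
    backend_iops = num_disks * iops_per_disk
    return backend_iops // replica_count

# Example: 12 HDDs with 3x replication
print(expected_random_write_iops(12))  # 320
```

If the UI benchmark lands in this ballpark, the HDDs themselves are the bottleneck; a result far below it points at network, configuration, or client-side issues instead.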
Shiori
86 Posts
July 5, 2021, 7:48 pm
I have an all-HDD 7-node cluster with around 30 disks, and it outperforms yours while serving close to 50 VMs from a XenServer cluster over a 40Gbps IPoIB network; even on 1Gbps Ethernet it gets close to your level of performance. Have you set the TCP window in Windows to anything larger than the default? Even a 10Gbps network needs OS tuning to take advantage of the bandwidth with single streams.
You also didn't mention the memory installed per HDD; you need a lot of RAM with any Ceph+Gluster setup, and your iSCSI nodes should not be handling storage, just networking and monitoring (I use the monitor nodes for this and it works very well). Having, say, 8 storage drives per storage node will require 2 SSDs for journaling to see any real improvement, and this assumes SAS2 or SATA2 drives on good-quality dedicated PCIe 2.0 x8 cards, not hamstrung onboard chips. The quality of your hardware is almost more important than the drives connected to the server.
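The TCP window point above comes down to the bandwidth-delay product: a single stream can never move more than one window per round trip. A small sketch of that arithmetic, with illustrative link speed and RTT values (not measurements from either cluster):

```python
# Bandwidth-delay product: the TCP window a single stream needs to keep a
# link full. Link speed and RTT below are assumed example values.

def tcp_window_bytes(link_gbps, rtt_ms):
    """Minimum TCP window (bytes) to saturate a link at the given RTT."""
    return int(link_gbps * 1e9 / 8 * rtt_ms / 1000)

def max_single_stream_mbps(window_bytes, rtt_ms):
    """Throughput ceiling (Mbps) imposed by a fixed window over the given RTT."""
    return window_bytes / (rtt_ms / 1000) * 8 / 1e6

# 10 Gbps link at 0.5 ms RTT: window needed to fill the pipe
print(tcp_window_bytes(10, 0.5))           # 625000 bytes (~610 KiB)

# A legacy 64 KiB window at the same RTT caps one stream around 1 Gbps,
# no matter how fast the link is:
print(max_single_stream_mbps(65536, 0.5))  # ~1048.6 Mbps
```

This is why an untuned Windows client can show near-identical single-stream numbers on 1Gbps and 10Gbps networks: the window, not the wire, is the limit.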
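On the memory-per-HDD point, a minimal sizing sketch. It assumes the commonly cited Ceph guideline of roughly 4 GB of memory per OSD daemon plus OS overhead; the exact figures vary by version and configuration, so treat these as placeholders and check the documentation for your release:

```python
# Rough RAM budget for an HDD storage node. The per-OSD and base figures
# are assumed rule-of-thumb values, not requirements from any vendor.

def node_ram_gb(num_osds, per_osd_gb=4, base_os_gb=8):
    """Suggested minimum RAM (GB) for a node running num_osds OSD daemons."""
    return num_osds * per_osd_gb + base_os_gb

# The 8-drive-per-node example from the post
print(node_ram_gb(8))  # 40
```

Undersized RAM pushes OSDs into cache thrashing, which hurts an HDD cluster far more than raw disk speed does.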