Ceph performance is very bad with Dell R730 PowerEdge servers
Srmvel
10 Posts
November 12, 2021, 1:15 pm
Hi Team,
We would like to evaluate a storage solution built on Ceph.
As this is the initial phase, we tried out plain Red Hat Ceph with the iSCSI gateway. The intention is to use the Ceph storage in different virtual environments (VMware, Hyper-V) over the iSCSI protocol.
Hardware configuration:
Dell PowerEdge servers --- 3 storage servers
Hardware : Dell PowerEdge R730xd
CPU : 2 x Intel(R) Xeon(R) CPU E5-2690 v4 @ 2.60GHz
Memory : 128 GB
RAID controller : Internal PERC H730 (used in non-RAID / HBA passthrough mode for the OSD disks)
Embedded NIC : 2 x 10GbE ---> For management
NIC on PCIe : Mellanox ConnectX, 2 x 40GbE ---> For cluster replication
Disks used : OS disks : 2 x 220 GB in RAID-1
OSD disks : 2 x 960 GB Micron 5300 SSDs per server
===================================================
Configuration :
Operating system : Debian 11
Ceph version : 16.2.6 Pacific (stable)
1 node --> acting as Ceph monitor and Ceph iSCSI gateway
3 Dell nodes --> acting as OSD nodes --> OSD disks are not configured with a separate WAL/DB device
Total of 6 OSDs.
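For reference, this is the kind of raw-disk baseline we plan to run on one of the Micron 5300 SSDs before blaming Ceph itself. This is only a sketch: /dev/sdX is a placeholder, and the test overwrites the device, so it must only be run on a disk that does not yet hold OSD data.

# Single 4k synchronous write stream, similar to BlueStore's WAL/journal workload
fio --name=ssd-sync-write --filename=/dev/sdX --rw=write --bs=4k \
    --iodepth=1 --numjobs=1 --direct=1 --sync=1 --runtime=60 --time_based

If this baseline is already poor, the bottleneck is probably below Ceph (controller pass-through mode, firmware, write-cache settings) rather than in the cluster configuration.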
We tested with both ESXi and Hyper-V VMs, but the performance is very bad, and we would like to know whether any other tunings or recommendations are required.
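To separate the Ceph cluster itself from the iSCSI/hypervisor path, we also plan to benchmark at the RADOS level directly from a cluster node. Again just a sketch: the pool name rbd is a placeholder for whatever pool actually backs the iSCSI images.

# 60-second 4 MB object write test with 16 concurrent operations, keeping the objects for the read test
rados bench -p rbd 60 write -b 4M -t 16 --no-cleanup
# Sequential read test over the objects written above, then remove the benchmark objects
rados bench -p rbd 60 seq -t 16
rados -p rbd cleanup

If these numbers look reasonable but the VMs are still slow, the problem is more likely in the iSCSI gateway layer than in the OSDs.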
Before testing with PetaSAN Ceph, we would like to know what changes are really required to take this forward. As we are targeting this Ceph storage for production environments, we would like to know where the problem is.
It seems all the above hardware is compatible, and we are using enterprise SSDs only.
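One more sanity check we intend to run is the 40GbE cluster links themselves, since all replication traffic rides on them. Sketch only; the address is a placeholder for the cluster-network IP of the other node.

# On one OSD node, start an iperf3 server
iperf3 -s
# From a second OSD node, push 4 parallel streams for 30 seconds across the cluster network
iperf3 -c <cluster-ip-of-first-node> -P 4 -t 30

If the links do not come close to line rate, MTU and NIC settings would be worth checking before any Ceph-level tuning.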
Thanks,
Manivel RR
Last edited on November 12, 2021, 1:50 pm by Srmvel · #1
admin
2,930 Posts
November 12, 2021, 2:47 pm
Cannot comment on direct Red Hat Ceph, but it is fairly quick to set up PetaSAN and try things out.
Srmvel
10 Posts
November 12, 2021, 2:58 pm
Hi Admin,
Are the given hardware specs good enough to test it out with PetaSAN?
admin
2,930 Posts
November 12, 2021, 3:42 pm
The system can work on a wide range of hardware. You can start with the existing config, run some benchmarks from the UI, and extend if needed.
Srmvel
10 Posts