
Performance gap between rbd and rbd over iscsi

I am using PetaSAN as an iSCSI gateway for my Ceph cluster by connecting a PetaSAN v1.5 node to the cluster and using lrbd and targetcli to export RBD images over iSCSI.

I used fio to test 4M sequential write throughput:

[4m]
direct=1
ioengine=libaio
directory=/test/
numjobs=58
iodepth=4
group_reporting
rw=write
bs=4M
size=200G
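For reference, the job file above is equivalent to this one-line invocation (a sketch; it assumes /test/ is a filesystem mounted on the device under test):

```shell
# Same parameters as the [4m] job file: 58 jobs, queue depth 4,
# 4 MB sequential writes, direct I/O bypassing the page cache.
fio --name=4m --direct=1 --ioengine=libaio --directory=/test/ \
    --numjobs=58 --iodepth=4 --group_reporting --rw=write \
    --bs=4M --size=200G
```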

With native RBD I can get around 1 GB/s throughput, but with iSCSI I can only get around 300 MB/s.

The NICs for the Ceph cluster and for iSCSI are both 10GbE.

Is there anything I can do to make RBD over iSCSI reach the same performance as native RBD?

I have done the same test on a smaller cluster, and there the performance of RBD and RBD over iSCSI was almost the same.

Why do you use lrbd?

What is the load on the system: CPU / network?

I use lrbd because I can't use PetaSAN to manage my pre-existing Ceph cluster, so I used a PetaSAN node to act as an iSCSI gateway.

The load on the Ceph cluster looks OK; nothing seems wrong.

 

The load on the PetaSAN iSCSI node:

Network for Ceph cluster:

TX: 2.58 Gb/s

RX: 9.66 Mb/s

Network for iSCSI:

TX: 4.4 Mb/s

RX: 2.59 Gb/s

 

CPU utilization (8 cores) is 37%, and only on one core.

The top 3 processes are:

iscsi_trx

kworker/0:1

iscsi_ttx

 

 

 

Where is the client and how is it configured? Via open-iscsi? What is the load on the client?

If you add more clients hitting the same disk, do you get higher output ?

If you have more than 1 target disk running, what output do you get ?

The client is another server in the same subnet as the PetaSAN gateway node, with a 10GbE NIC. The client runs Linux with open-iscsi. The CPU usage on the client is low and the network usage matches the throughput.

I haven't tested adding more clients to the same disk yet (I am not sure if you mean more clients or more threads, because I don't think it is a good idea to mount the same iSCSI disk on multiple clients).

I have tested running 2 targets; the throughput is about 180% of a single target, and I found 2 CPU cores in use on the PetaSAN gateway node. Does that mean each target can only use one CPU core?

 

And I have another question about multipath.

If I am using targetcli to manage the iSCSI target on my PetaSAN gateway node, how should I configure multipath? Can I just create two targets with the same IQN backed by the same RBD image to provide multipath? How do the two target gateways coordinate to do multipath?

Is the OSD write-same patch related to multipath? My existing Ceph cluster doesn't have that patch.

Thank you.

Set up a target with 4 to 8 paths in active/active and test from a single client machine. Make sure you have an active/active setup in the client multipath configuration. Each path should be handled by a separate core on the target side.
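As an illustration, an active/active client-side multipath configuration might look like the sketch below for /etc/multipath.conf. The "LIO-ORG" vendor string is what LIO-based targets typically report; verify it with `multipath -ll` on your system, and note that available policy names can vary with your multipath-tools version:

```
defaults {
    user_friendly_names yes
}

devices {
    device {
        vendor  "LIO-ORG"               # vendor string reported by LIO targets
        product "*"
        path_grouping_policy multibus   # all paths in one group = active/active
        path_selector "round-robin 0"   # spread I/O across all paths
        failback immediate
    }
}
```

With `multibus`, every path carries I/O concurrently, so on the target side the load should spread across multiple cores rather than pinning a single one.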

No need to test from different client machines, since it does look like the client resources are OK. Aside from this, it is possible for multiple client machines to actively use a single disk: at the block layer this is perfectly fine; at the filesystem layer it depends on whether you use a clustered filesystem. VMware and Hyper-V do this; multiple nodes access the same target disk concurrently. If you are using a non-clustered filesystem like XFS, then no, you cannot mount it on 2 different machines. In testing, however, you can use fio directly on the block device without mounting a filesystem.
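Running fio against the raw device would look something like this (a sketch; /dev/mapper/mpatha is an assumed multipath device name, and writing to it destroys any data on the disk):

```shell
# DESTRUCTIVE: writes directly to the block device, no filesystem needed.
fio --name=4m-raw --filename=/dev/mapper/mpatha \
    --direct=1 --ioengine=libaio --rw=write --bs=4M \
    --numjobs=8 --iodepth=4 --group_reporting --size=200G
```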

On the target side: lrbd allows you to create one target with several active paths; see the docs for how to do this. You can also use the PetaSAN UI to do this. On the client side, you need to set up open-iscsi and multipath to talk to your target in an active/active configuration.
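On the client, the open-iscsi side of that setup is roughly as follows (a sketch; 192.168.1.10 is a placeholder for one of your gateway's portal IPs):

```shell
# Discover the target through one portal; the target should advertise
# all of its portals in the discovery response.
iscsiadm -m discovery -t sendtargets -p 192.168.1.10

# Log in to all discovered portals so each becomes a separate path.
iscsiadm -m node -L all

# Verify that multipath has grouped the paths and all are active.
multipath -ll
```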

You do not need the write-same OSD patch unless you need to support VMware.

Thank you for your explanation.

So is the performance (300-400 MB/s) expected for a single-path target?

It depends on your hardware, but in general your paths will be distributed over your cores.