
Expected Transfer Length (TARGET_CORE)

Hi,

After some testing with 1.5, we got a lot of these messages on 3 of our 4 nodes:

Feb  9 15:49:15 ps-cl01-node03 kernel: [682476.076949] TARGET_CORE[iSCSI]: Expected Transfer Length: 2048 does not match SCSI CDB Length: 16 for SAM Opcode: 0xa0
Feb  9 15:49:15 ps-cl01-node03 kernel: [682476.077128] TARGET_CORE[iSCSI]: Expected Transfer Length: 2048 does not match SCSI CDB Length: 16 for SAM Opcode: 0xa0
Feb  9 15:49:15 ps-cl01-node03 kernel: [682476.080760] TARGET_CORE[iSCSI]: Expected Transfer Length: 2048 does not match SCSI CDB Length: 16 for SAM Opcode: 0xa0
Feb  9 15:49:15 ps-cl01-node03 kernel: [682476.080946] TARGET_CORE[iSCSI]: Expected Transfer Length: 2048 does not match SCSI CDB Length: 16 for SAM Opcode: 0xa0
Feb  9 15:49:16 ps-cl01-node03 kernel: [682476.271366] TARGET_CORE[iSCSI]: Expected Transfer Length: 2048 does not match SCSI CDB Length: 16 for SAM Opcode: 0xa0
Feb  9 15:49:16 ps-cl01-node03 kernel: [682476.271535] TARGET_CORE[iSCSI]: Expected Transfer Length: 2048 does not match SCSI CDB Length: 16 for SAM Opcode: 0xa0
Feb  9 15:50:18 ps-cl01-node03 kernel: [682539.090201] TARGET_CORE[iSCSI]: Expected Transfer Length: 2048 does not match SCSI CDB Length: 16 for SAM Opcode: 0xa0
Feb  9 15:50:18 ps-cl01-node03 kernel: [682539.090395] TARGET_CORE[iSCSI]: Expected Transfer Length: 2048 does not match SCSI CDB Length: 16 for SAM Opcode: 0xa0
Feb  9 15:50:18 ps-cl01-node03 kernel: [682539.093892] TARGET_CORE[iSCSI]: Expected Transfer Length: 2048 does not match SCSI CDB Length: 16 for SAM Opcode: 0xa0
Feb  9 15:50:18 ps-cl01-node03 kernel: [682539.094065] TARGET_CORE[iSCSI]: Expected Transfer Length: 2048 does not match SCSI CDB Length: 16 for SAM Opcode: 0xa0
Feb  9 15:50:19 ps-cl01-node03 kernel: [682539.281013] TARGET_CORE[iSCSI]: Expected Transfer Length: 2048 does not match SCSI CDB Length: 16 for SAM Opcode: 0xa0
Feb  9 15:50:19 ps-cl01-node03 kernel: [682539.281189] TARGET_CORE[iSCSI]: Expected Transfer Length: 2048 does not match SCSI CDB Length: 16 for SAM Opcode: 0xa0

Do we need to worry about this?

Hi,

What client OS and initiator are you using?

We are using VMware 6.0 with an HP Emulex OneConnect iSCSI HBA.

Can you try using the VMware software iSCSI HBA and see if you still get these logs?

It seems related to this:

http://article.gmane.org/gmane.linux.scsi.target.devel/1579

Hi,

After some more investigation, it looks like this only happens when we restart a host. Because of stress testing the new ESX cluster and storage, we have restarted a lot of ESX servers; since we stopped restarting the hosts, the errors are gone.

But what we do see is a maximum write speed of about 250 MB/s per VM using Storage vMotion; with 2 VMs the result is approximately 500 MB/s (the cluster stress test gave 1000 MB/s write and 2800 MB/s read). Any idea why this happens?

For the warning logs: if it does happen again, we can send you an iSCSI kernel module that will add more debug logging. It is most probably due to how the Emulex initiator retries a command on a new host when a host fails, but it would be better to double-check this. Also, I would still recommend trying the software HBA and seeing if you still get any warnings.
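In the meantime, if you want to get a feel for how often the warning fires and whether other opcodes are involved, here is a minimal Python sketch that counts the warnings per opcode from syslog. The log path and the exact message pattern are assumptions based on the lines you posted, not something shipped with the product:

import re
from collections import Counter

# Assumed log location; on some distributions it is /var/log/messages instead.
LOG_FILE = "/var/log/syslog"

# Pattern based on the warning lines posted above.
PATTERN = re.compile(
    r"TARGET_CORE\[iSCSI\]: Expected Transfer Length: (\d+) "
    r"does not match SCSI CDB Length: (\d+) for SAM Opcode: (0x[0-9a-fA-F]+)"
)

counts = Counter()
with open(LOG_FILE, errors="replace") as log:
    for line in log:
        match = PATTERN.search(line)
        if match:
            expected, cdb_len, opcode = match.groups()
            counts[(opcode, expected, cdb_len)] += 1

for (opcode, expected, cdb_len), n in counts.most_common():
    print(f"{n} warnings for opcode {opcode} (expected length {expected}, CDB length {cdb_len})")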

For the performance point, there could be several factors:

  • The cluster benchmark is the low-level Ceph 'rados' test. It shows the maximum the Ceph cluster is able to deliver and helps in tuning the hardware for best performance. When testing iSCSI performance there is extra CPU and network load if you run the iSCSI service on the same host, as well as extra latency due to the extra hop.
  • There is another factor, which I believe is what you are seeing, and it comes from the VMware side: VMware puts limits on how much a VM can use from the SAN, so that no single VM can take all the bandwidth and starve the others. I believe this is done by limiting the queue depth to 32 per VM; there is also a total limit per ESX host so that no host can starve the others. I suspect this is what you are seeing, since the bandwidth doubled when you added a second VM (see the rough sketch after this list). The way we test here is to run a stress tool with many workers/threads per VM, run several such VMs per ESX host, and use several ESX hosts. If you want, there are configuration options in VMware that let you remove or increase the per-VM limits.
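As a rough illustration of the second point, here is a back-of-envelope Python sketch of how a fixed per-VM queue depth caps throughput. The I/O size and latency values are hypothetical, chosen only to show the shape of the limit; they are not measured values from your cluster or from VMware:

# Little's-law style estimate of the throughput ceiling from a fixed queue depth.
def max_throughput_mb_s(queue_depth, io_size_kb, latency_ms):
    # At most queue_depth I/Os in flight, each taking latency_ms and moving io_size_kb.
    ios_per_second = queue_depth / (latency_ms / 1000.0)
    return ios_per_second * io_size_kb / 1024.0

# One VM limited to 32 outstanding I/Os (hypothetical 64 KB I/Os, 8 ms latency):
print(max_throughput_mb_s(queue_depth=32, io_size_kb=64, latency_ms=8))   # 250.0
# A second VM doubles the outstanding I/Os against the same datastore:
print(max_throughput_mb_s(queue_depth=64, io_size_kb=64, latency_ms=8))   # 500.0

With those hypothetical numbers the ceiling lands at roughly 250 MB/s per VM and doubles when a second VM adds its own queue, which matches the pattern you describe; the real I/O size and latency on your setup may well differ.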

We are also seeing these warnings... the URL above is not working either...

Are you using the VMware software iSCSI HBA adapter or something else? What version do you use? Do you see this all the time?

This is a link for the warning:

https://www.spinics.net/lists/target-devel/msg02190.html