[dpdk-dev] [PATCH v3 6/8] driver/virtio:enqueue vhost TX offload

Xu, Qian Q qian.q.xu at intel.com
Fri Nov 6 09:24:06 CET 2015


Tested-by: Qian Xu <qian.q.xu at intel.com>

- Test Commit: c4d404d7c1257465176deb5bb8c84e627d2d5eee
- OS/Kernel: Fedora 21/4.1.8
- GCC: gcc (GCC) 4.9.2 20141101 (Red Hat 4.9.2-1)
- CPU: Intel(R) Xeon(R) CPU E5-2699 v3 @ 2.30GHz
- NIC: Intel Corporation 82599ES 10-Gigabit SFI/SFP+ Network Connection (rev 01)
- Target: Intel Corporation 82599ES 10-Gigabit SFI/SFP+ Network Connection (rev 01)
- Total: 1 case, 1 passed, 0 failed. Legacy vhost + virtio-pmd works well with TSO.

Test Case 1: test_legacy_vhost + virtio-pmd tso
===============================================

On host:

1. Start VM with legacy-vhost as backend::

    taskset -c 4-6 /home/qxu10/qemu-2.2.0/x86_64-softmmu/qemu-system-x86_64 -object memory-backend-file,id=mem,size=2048M,mem-path=/mnt/huge,share=on -numa node,memdev=mem -mem-prealloc \
    -enable-kvm -m 2048 -smp 4 -cpu host -name dpdk1-vm1 \
    -drive file=/home/img/dpdk1-vm1.img \
    -netdev tap,id=vhost3,ifname=tap_vhost3,vhost=on,script=no \
    -device virtio-net-pci,netdev=vhost3,mac=52:54:00:00:00:01,id=net3 \
    -netdev tap,id=ipvm1,ifname=tap3,script=/etc/qemu-ifup -device rtl8139,netdev=ipvm1,id=net0,mac=00:00:00:00:00:01 \
    -localtime -nographic
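
Optionally, verify on the host that the tap interfaces from the -netdev options above were created (a quick sanity check; names as configured in the qemu command line):

ip link show tap_vhost3
ip link show tap3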

2.  Set up the bridge on host: 

brctl addbr br1
brctl addif br1 ens260f0 # The interface is 85:00.0 connected to ixia card3 port9
brctl addif br1 tap0
brctl addif br1 tap1

ifconfig ens260f0 up
ifconfig ens260f0 promisc
ifconfig tap0 up
ifconfig tap1 up
ifconfig tap0 promisc
ifconfig tap1 promisc
brctl stp br1 off
ifconfig br1 up
brctl show
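
On newer distributions where brctl is deprecated, an equivalent setup with iproute2 would look like this (a sketch, using the same interface names as above):

ip link add name br1 type bridge stp_state 0
ip link set ens260f0 master br1
ip link set tap0 master br1
ip link set tap1 master br1
ip link set ens260f0 up promisc on
ip link set tap0 up promisc on
ip link set tap1 up promisc on
ip link set br1 up
bridge link show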

3. Disable the firewall and NetworkManager on the host:

systemctl stop firewalld.service
systemctl disable firewalld.service
systemctl stop ip6tables.service
systemctl disable ip6tables.service
systemctl stop iptables.service
systemctl disable iptables.service
systemctl stop NetworkManager.service
systemctl disable NetworkManager.service
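
The same can be scripted in one loop (a small convenience sketch, same service names as above):

for svc in firewalld ip6tables iptables NetworkManager; do
    systemctl stop $svc.service
    systemctl disable $svc.service
done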

4.  Let br1 learn the MAC 02:00:00:00:00:00. In the VM, the virtio device runs testpmd, which sends packets with 02:00:00:00:00:00 as the destination MAC; once br1 has learned this MAC, it knows those packets can go to the NIC and then back to the traffic generator. So send a packet from IXIA with SRC MAC=02:00:00:00:00:00 and DEST MAC=52:54:00:00:00:01 to teach br1 the MAC (if no IXIA is available, see the injection sketch after the table below). Verify which MACs the bridge knows by running: brctl showmacs br1

port no mac addr                is local?       ageing timer
  3     02:00:00:00:00:00       no                 6.06
  1     42:fa:45:4d:aa:4d       yes                0.00
  1     42:fa:45:4d:aa:4d       yes                0.00
  1     52:54:00:00:00:01       no                 6.06
  2     8e:d7:22:bf:c9:8d       yes                0.00
  2     8e:d7:22:bf:c9:8d       yes                0.00
  3     90:e2:ba:4a:55:1c       yes                0.00
  3     90:e2:ba:4a:55:1c       yes                0.00
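
If no IXIA port is at hand for the learning packet, an equivalent frame can be injected from any machine cabled to the NIC at 85:00.0. Note it must be sent from the generator side, not from the host itself, because the bridge only learns from frames that ingress one of its ports. A hypothetical scapy sketch ("eth1" is an assumed name for the generator port):

python3 - <<'EOF'
from scapy.all import Ether, IP, TCP, sendp
# The SRC MAC teaches br1 where testpmd's peer MAC lives; the DST MAC
# delivers the frame to the VM's virtio device.
pkt = Ether(src="02:00:00:00:00:00", dst="52:54:00:00:00:01") / IP() / TCP()
sendp(pkt, iface="eth1")  # "eth1" is hypothetical; use the port facing 85:00.0
EOF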


On guest:

5. Ensure the DPDK folder is copied to the guest, with the same config file and build process as on the host. Then bind the 2 virtio devices to igb_uio and start testpmd; the steps below are for reference::

    ./<dpdk_folder>/tools/dpdk_nic_bind.py --bind igb_uio 00:03.0 

    ./<dpdk_folder>/x86_64-native-linuxapp-gcc/app/test-pmd/testpmd -c f -n 4 -- -i --txqflags 0x0f00 --max-pkt-len 9000 
    
    testpmd> set fwd csum

    testpmd> tso set 1000 0
    testpmd> tso set 1000 1

    testpmd> start

6.  Send TCP packets of size 5000 to virtio1. The virtio side receives each as 1 packet and hands it to vhost for TSO; vhost in turn lets the NIC do the TSO, so at IXIA we expect 5 packets of ~1K each (matching the 1000-byte TSO segment size set above). Also capture the received packets and check that the checksum is correct; a capture sketch follows below.
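
If the capture side is a Linux box rather than IXIA, tcpdump can check the TCP checksums directly (a sketch; "eth1" is an assumed name for the capture-side port, which must sit on the far end of the wire, since the NIC segments on egress and the ~1K frames are only visible there):

# -vv makes tcpdump print "cksum ... (correct)" or "(incorrect)" per segment
tcpdump -i eth1 -vv -c 10 'tcp and ether dst 02:00:00:00:00:00'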

Result: All the behavior is as expected and the checksum is correct, so the case is PASS.


Thanks
Qian


-----Original Message-----
From: dev [mailto:dev-bounces at dpdk.org] On Behalf Of Xu, Qian Q
Sent: Thursday, November 05, 2015 6:45 PM
To: Thomas Monjalon
Cc: dev at dpdk.org; Michael S. Tsirkin
Subject: Re: [dpdk-dev] [PATCH v3 6/8] driver/virtio:enqueue vhost TX offload

OK, I will check it tomorrow. 
Another comment is that "legacy vhost + virtio-pmd" is not a common use case. Firstly, in this case virtio-pmd has no TCP/IP stack, so TSO is not very meaningful; secondly, we get no performance benefit from this case compared to "legacy vhost + legacy virtio". So I'm afraid no customer would want to try this case, given the fake TSO and poor performance.


Thanks
Qian


-----Original Message-----
From: Thomas Monjalon [mailto:thomas.monjalon at 6wind.com] 
Sent: Thursday, November 05, 2015 5:02 PM
To: Xu, Qian Q
Cc: Liu, Jijiang; dev at dpdk.org; Michael S. Tsirkin
Subject: Re: [dpdk-dev] [PATCH v3 6/8] driver/virtio:enqueue vhost TX offload

2015-11-05 08:49, Xu, Qian Q:
> Test Case 1:  test_dpdk vhost+ virtio-pmd tso 
[...]
> Test Case 2:  test_dpdk vhost+legacy virtio iperf tso
[...]
> Yes please, I'd like to see a test report showing this virtio running with Linux vhost and without vhost.
> We must check that the checksum is well offloaded and the sent packets are valid.
> Thanks

Thanks for doing some tests.
I had no doubt it works with DPDK vhost.
Please could you do some tests without vhost and with kernel vhost?
We need to check that the checksum is not missing in such cases.

