[dpdk-dev] [PATCH v5 0/4] Fix vhost enqueue/dequeue issue

Xu, Qian Q qian.q.xu at intel.com
Wed Jun 3 09:50:43 CEST 2015


Tested-by: Qian Xu <qian.q.xu at intel.com>
Signed-off-by: Qian Xu <qian.q.xu at intel.com>

-Tested commit: 1a1109404e702d3ad1ccc1033df55c59bec1f89a
-Host OS/Kernel: FC21/3.19
-Guest OS/Kernel: FC21/3.19
-NIC: Intel 82599 10G
-Default x86_64-native-linuxapp-gcc configuration
-Total 2 cases, 2 passed.

Test Case 1:  test_perf_vhost_one_vm_dpdk_fwd_vhost-user
========================================================
On host:

1. Start up vhost-switch on the host; --vm2vm 0 means a single VM with no VM-to-VM communication::

    taskset -c 18-20 <dpdk_folder>/examples/vhost/build/vhost-switch -c 0xf -n 4 --huge-dir /mnt/huge --socket-mem 1024,1024 -- -p 1 --mergeable 0 --zero-copy 0 --vm2vm 0 
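
The command above assumes hugepages are already reserved and mounted at /mnt/huge. If not, a minimal sketch for a two-socket host is shown below; the page counts are only an example and must cover both vhost-switch's --socket-mem and the VM's 4096M memory backend::

    mkdir -p /mnt/huge
    mount -t hugetlbfs nodev /mnt/huge
    # 2048 x 2MB pages = 4GB per NUMA node; adjust to your needs
    echo 2048 > /sys/devices/system/node/node0/hugepages/hugepages-2048kB/nr_hugepages
    echo 2048 > /sys/devices/system/node/node1/hugepages/hugepages-2048kB/nr_hugepages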
   

2. Start VM with vhost user as backend::

    taskset -c 22-28 \
    /home/qxu10/qemu-2.2.0/x86_64-softmmu/qemu-system-x86_64 -name us-vhost-vm1 -cpu host \
    -enable-kvm -m 4096 -object memory-backend-file,id=mem,size=4096M,mem-path=/mnt/huge,share=on -numa node,memdev=mem -mem-prealloc \
    -smp cores=20,sockets=1 -drive file=/home/img/fc21-vm1.img \
    -chardev socket,id=char0,path=/home/qxu10/dpdk/vhost-net -netdev type=vhost-user,id=mynet1,chardev=char0,vhostforce=on \
    -device virtio-net-pci,mac=52:54:00:00:00:01,netdev=mynet1 \
    -chardev socket,id=char1,path=/home/qxu10/dpdk/vhost-net -netdev type=vhost-user,id=mynet2,chardev=char1,vhostforce=on \
    -device virtio-net-pci,mac=52:54:00:00:00:02,netdev=mynet2 \
    -netdev tap,id=ipvm1,ifname=tap3,script=/etc/qemu-ifup -device rtl8139,netdev=ipvm1,id=net0,mac=00:00:00:00:00:09 -nographic
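
Before starting the VM, it may help to confirm that vhost-switch has created the vhost-user socket at the path passed to -chardev above::

    ls -l /home/qxu10/dpdk/vhost-net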

On guest:

3. Ensure the DPDK folder is copied to the guest, with the same config file and build process as on the host. Then bind the 2 virtio devices to igb_uio and start testpmd; below are the steps for reference (see also the igb_uio note after this step)::

    ./<dpdk_folder>/tools/dpdk_nic_bind.py --bind igb_uio 00:03.0 00:04.0

    ./<dpdk_folder>/x86_64-native-linuxapp-gcc/app/test-pmd/testpmd -c f -n 4 -- -i --txqflags 0x0f00 --rxq=2 --disable-hw-vlan-filter
    
    testpmd> set fwd mac

    testpmd> start tx_first
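
Note for the binding in step 3: if the igb_uio module is not yet loaded in the guest, load it first; the result can be verified with --status (the kmod path below assumes the default build directory)::

    modprobe uio
    insmod ./<dpdk_folder>/x86_64-native-linuxapp-gcc/kmod/igb_uio.ko
    ./<dpdk_folder>/tools/dpdk_nic_bind.py --status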

4. After typing start tx_first in testpmd, you can see that the 2 virtio devices are registered in vhost-user with their MAC and VLAN IDs; the log is shown in the host's vhost-switch output.

5. Send traffic (30 seconds) to virtio1 and virtio2, with packet sizes from 64 to 1518 bytes, and check the performance in Mpps. The traffic sent to virtio1 should have virtio1's MAC as DEST MAC and virtio1's VLAN ID; the traffic sent to virtio2 should have virtio2's MAC as DEST MAC and virtio2's VLAN ID. The traffic's DEST IP and SRC IP increment continuously (e.g. from 192.168.1.1 to 192.168.1.63), so the packets are spread across different queues via RSS/hash. As the functionality criterion, the received rate should not be zero. As the performance criterion, check it with the developer or the design doc/PRD.

6. Check that the packets have been distributed to different queues in the guest testpmd stats display (see the commands below).
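
For example, the port counters can be read from the testpmd prompt with show port stats all; stopping forwarding also prints forwarding statistics, which typically include per-stream/per-queue counts when multiple queues are in use::

    testpmd> show port stats all
    testpmd> stop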
 
7. Check the packet data integrity. 
    
Test Case 2:  test_perf_virtio_one_vm_linux_fwd_vhost-user
==========================================================
On host:

Same steps as in Test Case 1.

On guest:   
  
1. Set up routing on the guest (a verification snippet follows the command list)::

    $ systemctl stop firewalld.service
    
    $ systemctl disable firewalld.service
    
    $ systemctl stop ip6tables.service
    
    $ systemctl disable ip6tables.service

    $ systemctl stop iptables.service
    
    $ systemctl disable iptables.service

    $ systemctl stop NetworkManager.service
    
    $ systemctl disable NetworkManager.service
 
    $ echo 1 >/proc/sys/net/ipv4/ip_forward

    $ ip addr add 192.168.1.2/24 dev eth1    # eth1 is virtio1
    
    $ ip neigh add 192.168.1.1 lladdr 00:00:00:00:0a:0a dev eth1
    
    $ ip link set dev eth1 up
    
    $ ip addr add 192.168.2.2/24 dev eth2    # eth2 is virtio2
    
    $ ip neigh add 192.168.2.1 lladdr 00:00:00:00:00:0a  dev eth2
    
    $ ip link set dev eth2 up
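
To verify the routing setup before sending traffic, the addresses, static neighbor entries and forwarding flag can be checked with::

    $ ip addr show eth1

    $ ip neigh show

    $ cat /proc/sys/net/ipv4/ip_forward    # should print 1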

2. Send traffic (30 seconds) to virtio1 and virtio2. According to the above script, traffic sent to virtio1 should have SRC IP (e.g. 192.168.1.1), DEST IP (e.g. 192.168.2.1), DEST MAC as virtio1's MAC, and VLAN ID as virtio1's VLAN. Traffic sent to virtio2 has the corresponding settings: SRC IP (e.g. 192.168.2.1), DEST IP (e.g. 192.168.1.1), and VLAN ID as virtio2's VLAN. Set the packet size from 64 to 1518 bytes, as well as jumbo frames, and check the performance in Mpps. As the functionality criterion, the received rate should not be zero. As the performance criterion, check it with the developer or the design doc/PRD.

3. Check the data integrity of the forwarded packets; ensure no content changes (one way to spot-check this is shown below).
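
One way to spot-check integrity on the guest side, assuming tcpdump is available in the VM, is to capture a few forwarded packets in hex and compare the payload against what the generator sent::

    $ tcpdump -i eth2 -c 10 -xx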

Thanks
Qian


-----Original Message-----
From: dev [mailto:dev-bounces at dpdk.org] On Behalf Of Ouyang Changchun
Sent: Wednesday, June 03, 2015 2:02 PM
To: dev at dpdk.org
Subject: [dpdk-dev] [PATCH v5 0/4] Fix vhost enqueue/dequeue issue

Fix enqueue/dequeue so they can handle chained vring descriptors; remove unnecessary vring descriptor length updates; add support for copying scattered mbufs to the vring.

Changchun Ouyang (4):
  lib_vhost: Fix enqueue/dequeue can't handle chained vring descriptors
  lib_vhost: Refine code style
  lib_vhost: Extract function
  lib_vhost: Remove unnecessary vring descriptor length updating

 lib/librte_vhost/vhost_rxtx.c | 201 +++++++++++++++++++++++-------------------
 1 file changed, 111 insertions(+), 90 deletions(-)

--
1.8.4.2


