[dts] [PATCH V1] add test suite vhost_pmd_xstats

xu,gang gangx.xu at intel.com
Tue Dec 27 04:13:34 CET 2016


Signed-off-by: xu,gang <gangx.xu at intel.com>
---
 framework/ssh_pexpect.py                  |   1 +
 test_plans/vhost_pmd_xstats_test_plan.rst | 306 ++++++++++++++++++++++++++++++
 tests/TestSuite_vhost_pmd_xstats.py       | 245 ++++++++++++++++++++++++
 3 files changed, 552 insertions(+)
 create mode 100644 test_plans/vhost_pmd_xstats_test_plan.rst
 create mode 100644 tests/TestSuite_vhost_pmd_xstats.py

diff --git a/framework/ssh_pexpect.py b/framework/ssh_pexpect.py
index d5b6616..09095a1 100644
--- a/framework/ssh_pexpect.py
+++ b/framework/ssh_pexpect.py
@@ -185,6 +185,7 @@ class SSHPexpect(object):
         if i == 1:
             time.sleep(0.5)
             p.sendline(password)
+            time.sleep(10)
             p.expect("100%", 60)
         if i == 4:
             self.logger.error("SCP TIMEOUT error %d" % i)
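
The added delay gives scp time to start transferring before the "100%" progress marker is expected. For reference, a minimal standalone sketch of this password-then-progress pattern with pexpect (illustrative only, not the framework code; scp_file and its arguments are placeholders):

    import pexpect

    def scp_file(local_path, user, host, remote_path, password):
        """Copy a file with scp, driving the password prompt via pexpect."""
        p = pexpect.spawn("scp %s %s@%s:%s" % (local_path, user, host, remote_path))
        i = p.expect(["assword", pexpect.EOF, pexpect.TIMEOUT], timeout=30)
        if i == 0:
            p.sendline(password)
            # Give the transfer a moment to start, then wait for the
            # progress meter to reach 100% before expecting completion.
            p.expect("100%", timeout=60)
        p.expect(pexpect.EOF, timeout=60)
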
diff --git a/test_plans/vhost_pmd_xstats_test_plan.rst b/test_plans/vhost_pmd_xstats_test_plan.rst
new file mode 100644
index 0000000..4415e41
--- /dev/null
+++ b/test_plans/vhost_pmd_xstats_test_plan.rst
@@ -0,0 +1,306 @@
+.. Copyright (c) <2016>, Intel Corporation
+   All rights reserved.
+
+   Redistribution and use in source and binary forms, with or without
+   modification, are permitted provided that the following conditions
+   are met:
+
+   - Redistributions of source code must retain the above copyright
+     notice, this list of conditions and the following disclaimer.
+
+   - Redistributions in binary form must reproduce the above copyright
+     notice, this list of conditions and the following disclaimer in
+     the documentation and/or other materials provided with the
+     distribution.
+
+   - Neither the name of Intel Corporation nor the names of its
+     contributors may be used to endorse or promote products derived
+     from this software without specific prior written permission.
+
+   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+   "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS
+   FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE
+   COPYRIGHT OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT,
+   INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES
+   (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR
+   SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION)
+   HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT,
+   STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
+   ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED
+   OF THE POSSIBILITY OF SUCH DAMAGE.
+
+==================================
+Vhost PMD xstats test plan
+==================================
+
+This test plan covers the basic vhost PMD xstats cases and will serve as a regression test plan. In this test plan, vhost is used as a PMD port in testpmd.
+
+Test Case1: Vhost PMD xstats based on packet size
+======================================================================
+
+Flow:
+TG-->NIC-->Vhost TX-->Virtio RX-->Virtio TX-->Vhost RX-->NIC-->TG
+
+1. Bind one physical port to igb_uio, then launch testpmd with the command below::
+
+    rm -rf vhost-net*
+    ./x86_64-native-linuxapp-gcc/app/testpmd -c 0x30 -n 4 --socket-mem 1024,0 --vdev 'net_vhost0,iface=vhost-net,queues=1' -- -i --nb-cores=1
+    testpmd>
+
+
+2. Launch VM1 with the command below::
+
+    taskset -c 6-7 \
+    /root/qemu-versions/qemu-2.5.0/x86_64-softmmu/qemu-system-x86_64 -name us-vhost-vm1 \
+     -cpu host -enable-kvm -m 2048 -object memory-backend-file,id=mem,size=2048M,mem-path=/mnt/huge,share=on -numa node,memdev=mem -mem-prealloc \
+     -smp cores=2,sockets=1 -drive file=/home/osimg/ubuntu16.img  \
+     -chardev socket,id=char0,path=./vhost-net \
+     -netdev type=vhost-user,id=mynet1,chardev=char0,vhostforce,queues=1 \
+     -device virtio-net-pci,mac=52:54:00:00:00:01,netdev=mynet1,mrg_rxbuf=off,mq=on \
+     -netdev tap,id=ipvm1,ifname=tap3,script=/etc/qemu-ifup -device rtl8139,netdev=ipvm1,id=net0,mac=00:00:00:00:10:01 -localtime -vnc :10 -daemonize
+
+
+3. On VM1, ensure the same DPDK folder has been copied, then run testpmd with txqflags=0xf01::
+
+    ./x86_64-native-linuxapp-gcc/app/testpmd -c 0x3 -n 4 -- -i --txqflags=0xf01
+    testpmd>start
+
+4. On the host testpmd, set ports to mac forward mode::
+
+    testpmd>set fwd mac
+    testpmd>start tx_first
+
+5. On the VM testpmd, set the port to mac forward mode::
+
+    testpmd>set fwd mac
+    testpmd>start
+
+6. On the host, run "show port xstats all" at least twice to check the packet counters::
+
+    testpmd>show port xstats all
+    rx_q0_size_64_packets: 16861577
+    rx_q0_size_65_to_127_packets: 0
+    rx_q0_size_128_to_255_packets: 0
+    rx_q0_size_256_to_511_packets: 0
+    rx_q0_size_512_to_1023_packets: 0
+    rx_q0_size_1024_to_1522_packets: 0
+    rx_q0_size_1523_to_max_packets: 0
+
+
+7. Let the TG generate packets of different sizes; send 10000 packets for each size (64, 128, 255, 512, 1024, 1523) and check that the counters are correct, as sketched below.
+
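+Below is a hedged scapy sketch of how the TG side could generate these sizes (the interface name and destination MAC are placeholders; the DTS suite itself uses its Packet helper)::
+
+    from scapy.all import Ether, IP, UDP, Raw, sendp
+
+    dut_mac = "00:11:22:33:44:55"        # placeholder: MAC of the DUT port
+    for size in [64, 128, 255, 512, 1024, 1523]:
+        # Pad the frame up to the requested size (Ether+IP+UDP headers are 42 bytes;
+        # adjust by 4 bytes if the size convention includes the CRC).
+        payload = "x" * max(size - 42, 0)
+        pkt = Ether(dst=dut_mac) / IP() / UDP() / Raw(payload)
+        sendp(pkt, iface="tester_eth0", count=10000)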
+  
+
+Test Case2: Vhost PMD xstats based on packet types
+======================================================
+
+Similar to Test Case 1; all steps are the same except steps 6 and 7:
+
+6. On the host, run "show port xstats all" at least twice to check the packet type counters::
+
+    testpmd>show port xstats all
+    rx_q0_broadcast_packets: 0
+    rx_q0_multicast_packets: 0
+    rx_q0_ucast_packets: 45484904
+
+
+
+7. Let the TG generate different types of packets (broadcast, multicast, unicast) and check that the counters are correct; a scapy sketch is shown below.
+
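+A hedged scapy sketch for the three packet types (the interface name is a placeholder and the unicast destination should be the MAC of the DUT port)::
+
+    from scapy.all import Ether, IP, UDP, sendp
+
+    dst_macs = {
+        "broadcast": "ff:ff:ff:ff:ff:ff",
+        "multicast": "01:00:00:33:00:01",
+        "unicast": "00:11:22:33:44:55",   # placeholder: MAC of the DUT port
+    }
+    for ptype, dmac in dst_macs.items():
+        sendp(Ether(dst=dmac) / IP() / UDP(), iface="tester_eth0", count=10000)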
+   
+Test Case3: Performance comparison with xstats on/off with vector path (mergeable off and txqflags=0xf01)
+==============================================================================================================
+
+1. Launch testpmd with the command below; there is no need to bind a NIC to igb_uio::
+
+    rm -rf vhost-net*
+    ./x86_64-native-linuxapp-gcc/app/testpmd -c 0x30 -n 4 --socket-mem 1024,0 --vdev 'eth_vhost0,iface=vhost-net,queues=1' -- -i --nb-cores=1
+    testpmd>
+
+2. Launch VM1 with mrg_rxbuf=off to disable mergeable buffers::
+
+    taskset -c 6-7 \
+    /root/qemu-versions/qemu-2.5.0/x86_64-softmmu/qemu-system-x86_64 -name us-vhost-vm1 \
+     -cpu host -enable-kvm -m 2048 -object memory-backend-file,id=mem,size=2048M,mem-path=/mnt/huge,share=on -numa node,memdev=mem -mem-prealloc \
+     -smp cores=2,sockets=1 -drive file=/home/osimg/ubuntu16.img  \
+     -chardev socket,id=char0,path=./vhost-net \
+     -netdev type=vhost-user,id=mynet1,chardev=char0,vhostforce,queues=1 \
+     -device virtio-net-pci,mac=52:54:00:00:00:01,netdev=mynet1,mrg_rxbuf=off,mq=on \
+     -netdev tap,id=ipvm1,ifname=tap3,script=/etc/qemu-ifup -device rtl8139,netdev=ipvm1,id=net0,mac=00:00:00:00:10:01 -localtime -vnc :10 -daemonize
+
+
+3. On VM1, ensure the same DPDK folder has been copied, then run testpmd with txqflags=0xf01::
+
+    ./x86_64-native-linuxapp-gcc/app/testpmd -c 0x3 -n 4 -- -i --txqflags=0xf01
+    testpmd>start
+
+4. On the host testpmd, set ports to io forward mode with retry::
+
+    testpmd>set fwd io retry
+    testpmd>start tx_first 8
+
+5. On the VM testpmd, start forwarding::
+
+    testpmd>start
+
+6. On the host, check the throughput with "show port stats all"::
+
+    testpmd>show port stats all
+    ...
+    Throughput (since last show)
+    RX-pps:            xxx
+    TX-pps:            xxx
+
+7. The performance drop after turning xstats on should be under 5%; a sketch of the check is shown below.
+
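+A small sketch of the acceptance check, assuming the two RX-pps values are read from "show port stats all" with xstats turned off and on (the numbers are examples only)::
+
+    pps_xstats_off = 7500000.0   # example throughput with xstats off
+    pps_xstats_on = 7300000.0    # example throughput with xstats on
+    drop_pct = (pps_xstats_off - pps_xstats_on) / pps_xstats_off * 100
+    assert drop_pct < 5, "xstats overhead too high: %.2f%%" % drop_pct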
+   
+Test Case4: Performance comparison with xstats on/off when mergeable is on
+================================================================================
+
+1. Launch testpmd with the command below; there is no need to bind a NIC to igb_uio::
+
+    rm -rf vhost-net*
+    ./x86_64-native-linuxapp-gcc/app/testpmd -c 0x30 -n 4 --socket-mem 1024,0 --vdev 'eth_vhost0,iface=vhost-net,queues=1' -- -i --nb-cores=1
+    testpmd>
+
+2. Launch VM1 with mrg_rxbuf=on to enable mergeable buffers::
+
+    taskset -c 6-7 \
+    /root/qemu-versions/qemu-2.5.0/x86_64-softmmu/qemu-system-x86_64 -name us-vhost-vm1 \
+     -cpu host -enable-kvm -m 2048 -object memory-backend-file,id=mem,size=2048M,mem-path=/mnt/huge,share=on -numa node,memdev=mem -mem-prealloc \
+     -smp cores=2,sockets=1 -drive file=/home/osimg/ubuntu16.img  \
+     -chardev socket,id=char0,path=./vhost-net \
+     -netdev type=vhost-user,id=mynet1,chardev=char0,vhostforce,queues=1 \
+     -device virtio-net-pci,mac=52:54:00:00:00:01,netdev=mynet1,mrg_rxbuf=on,mq=on \
+     -netdev tap,id=ipvm1,ifname=tap3,script=/etc/qemu-ifup -device rtl8139,netdev=ipvm1,id=net0,mac=00:00:00:00:10:01 -localtime -vnc :10 -daemonize
+
+
+3. On VM1, ensure the same DPDK folder has been copied, then run testpmd with txqflags=0xf01::
+
+    ./x86_64-native-linuxapp-gcc/app/testpmd -c 0x3 -n 4 -- -i --txqflags=0xf01
+    testpmd>start
+
+4. On the host testpmd, set ports to io forward mode with retry::
+
+    testpmd>set fwd io retry
+    testpmd>start tx_first 8
+
+5. On the VM testpmd, start forwarding::
+
+    testpmd>start
+
+6. On the host, check the throughput with "show port stats all"::
+
+    testpmd>show port stats all
+    ...
+    Throughput (since last show)
+    RX-pps:            xxx
+    TX-pps:            xxx
+
+7. The performance drop after turning xstats on should be under 5%.
+
+Test Case5: Clear Vhost PMD xstats
+======================================================================
+
+Flow:
+TG-->NIC-->Vhost TX-->Virtio RX-->Virtio TX-->Vhost RX-->NIC-->TG
+
+1. Bind one physical port to igb_uio, then launch testpmd with the command below::
+
+    rm -rf vhost-net*
+    ./x86_64-native-linuxapp-gcc/app/testpmd -c 0x30 -n 4 --socket-mem 1024,0 --vdev 'eth_vhost0,iface=vhost-net,queues=1' -- -i --nb-cores=1
+    testpmd>
+
+2. Launch VM1 with mrg_rxbuf=on to enable mergeable buffers::
+
+    taskset -c 6-7 \
+    /root/qemu-versions/qemu-2.5.0/x86_64-softmmu/qemu-system-x86_64 -name us-vhost-vm1 \
+     -cpu host -enable-kvm -m 2048 -object memory-backend-file,id=mem,size=2048M,mem-path=/mnt/huge,share=on -numa node,memdev=mem -mem-prealloc \
+     -smp cores=2,sockets=1 -drive file=/home/osimg/ubuntu16.img  \
+     -chardev socket,id=char0,path=./vhost-net \
+     -netdev type=vhost-user,id=mynet1,chardev=char0,vhostforce,queues=1 \
+     -device virtio-net-pci,mac=52:54:00:00:00:01,netdev=mynet1,mrg_rxbuf=on,mq=on \
+     -netdev tap,id=ipvm1,ifname=tap3,script=/etc/qemu-ifup -device rtl8139,netdev=ipvm1,id=net0,mac=00:00:00:00:10:01 -localtime -vnc :10 -daemonize
+
+
+3. On VM1, ensure the same DPDK folder has been copied, then run testpmd with txqflags=0xf01::
+
+    ./x86_64-native-linuxapp-gcc/app/testpmd -c 0x3 -n 4 -- -i --txqflags=0xf01
+    testpmd>start
+
+4. On the host testpmd, set ports to mac forward mode::
+
+    testpmd>set fwd mac
+    testpmd>start tx_first
+
+5. On the VM testpmd, set the port to mac forward mode::
+
+    testpmd>set fwd mac
+    testpmd>start
+
+6. Let the TG generate packets of different sizes; send 10000 packets for each size (64, 128, 255, 512, 1024, 1523, 3000) and check that the counters are correct.
+
+7. On the host, run "show port xstats all" at least twice to check the packet counters::
+
+    testpmd>show port xstats all
+    rx_q0_size_64_packets: 16861577
+    rx_q0_size_65_to_127_packets: 0
+    rx_q0_size_128_to_255_packets: 0
+    rx_q0_size_256_to_511_packets: 0
+    rx_q0_size_512_to_1023_packets: 0
+    rx_q0_size_1024_to_1522_packets: 0
+    rx_q0_size_1523_to_max_packets: 0
+
+8. On the host, run "clear port xstats all"; all the statistics should then be reset to 0::
+
+    testpmd>clear port xstats all
+    testpmd>show port xstats all
+    rx_q0_size_64_packets: 0
+    rx_q0_size_65_to_127_packets: 0
+    rx_q0_size_128_to_255_packets: 0
+    rx_q0_size_256_to_511_packets: 0
+    rx_q0_size_512_to_1023_packets: 0
+    rx_q0_size_1024_to_1522_packets: 0
+    rx_q0_size_1523_to_max_packets: 0
+
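+A hedged sketch of how the after-clear check can be automated by parsing the xstats output (here "out" is assumed to hold the text of "show port xstats all")::
+
+    import re
+
+    sizes = ["size_64", "size_65_to_127", "size_128_to_255", "size_256_to_511",
+             "size_512_to_1023", "size_1024_to_1522", "size_1523_to_max"]
+    for scope in sizes:
+        match = re.search(r"rx_q0_%s_packets:\s*(\d+)" % scope, out)
+        assert match is not None, "counter rx_q0_%s_packets not found" % scope
+        assert int(match.group(1)) == 0, "counter %s was not cleared" % scope
+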
+Test Case6: Stress test for Vhost PMD xstats
+======================================================
+
+Similar to Test Case 1; all steps are the same except steps 6 and 7:
+
+6. Send 64-byte packets at line speed for 30 minutes.
+
+7. On the host, run "show port xstats all" and check that the packet counters have no significant difference from the TG side::
+
+    testpmd>show port xstats all
+    rx_q0_size_64_packets: 16861577
+
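+A sketch of the comparison against the TG, assuming tg_sent comes from the generator's own counters and rx_count is parsed from the xstats output as above (the 1% tolerance is an assumption, not taken from this plan)::
+
+    tg_sent = 100000000     # example: packets the TG reports having sent
+    rx_count = 99950000     # example: rx_q0_size_64_packets parsed from xstats
+    tolerance = 0.01
+    diff = abs(tg_sent - rx_count)
+    assert diff <= tg_sent * tolerance, "xstats diverge from TG counters by %d packets" % diff
+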
+Test Case7: Long-lasting test for Vhost PMD xstats
+======================================================
+
+1. Launch testpmd with the command below; there is no need to bind any physical port to igb_uio::
+
+    rm -rf vhost-net*
+    ./x86_64-native-linuxapp-gcc/app/testpmd -c 0x7c -n 4 --socket-mem 1024,1024 --vdev 'eth_vhost0,iface=vhost-net,queues=2' -- -i --nb-cores=4 --rxq=2 --txq=2 --rss-ip
+    testpmd>start
+
+2. Launch VM1, set queues=2, vectors=2xqueues+2, mq=on::
+
+    qemu-system-x86_64 -name vm1 -cpu host -enable-kvm \
+    -m 2048 -object memory-backend-file,id=mem,size=2048M,mem-path=/mnt/huge,share=on -numa node,memdev=mem \
+    -mem-prealloc -smp cores=3,sockets=1 -drive file=/home/osimg/ubuntu16.img -chardev socket,id=char0,path=./vhost-net \
+    -netdev type=vhost-user,id=mynet1,chardev=char0,vhostforce,queues=2 \
+    -device virtio-net-pci,mac=52:54:00:00:00:01,netdev=mynet1,mrg_rxbuf=off,mq=on,vectors=6 \
+    -netdev tap,id=ipvm1,ifname=tap3,script=/etc/qemu-ifup -device rtl8139,netdev=ipvm1,id=net0,mac=00:00:00:00:10:01 \
+    -vnc :10 -daemonize
+
+3. On VM1, ensure the same DPDK folder has been copied, then run testpmd with txqflags=0xf01::
+
+    ./x86_64-native-linuxapp-gcc/app/testpmd -c 0x7 -n 4 -- -i --rxq=2 --txq=2 --nb-cores=2 --txqflags=0xf01 --rss-ip
+    testpmd>start
+
+4. On the host testpmd, set ports to io forward mode with retry::
+
+    testpmd>set fwd io retry
+    testpmd>start tx_first 8
+
+5. On the VM testpmd, start forwarding::
+
+    testpmd>start
+
+6. Send packets for 30 minutes and check that the xstats still work correctly::
+
+    testpmd>show port xstats all
+
diff --git a/tests/TestSuite_vhost_pmd_xstats.py b/tests/TestSuite_vhost_pmd_xstats.py
new file mode 100644
index 0000000..35909ad
--- /dev/null
+++ b/tests/TestSuite_vhost_pmd_xstats.py
@@ -0,0 +1,245 @@
+# BSD LICENSE
+#
+# Copyright(c) 2010-2015 Intel Corporation. All rights reserved.
+# All rights reserved.
+#
+# Redistribution and use in source and binary forms, with or without
+# modification, are permitted provided that the following conditions
+# are met:
+#
+#   * Redistributions of source code must retain the above copyright
+#     notice, this list of conditions and the following disclaimer.
+#   * Redistributions in binary form must reproduce the above copyright
+#     notice, this list of conditions and the following disclaimer in
+#     the documentation and/or other materials provided with the
+#     distribution.
+#   * Neither the name of Intel Corporation nor the names of its
+#     contributors may be used to endorse or promote products derived
+#     from this software without specific prior written permission.
+#
+# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+# "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+# LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+# A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+# OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+# SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+# LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+# DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+# THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+# (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+# OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+
+"""
+DPDK Test suite.
+
+vhost pmd xstats test suite.
+"""
+import os
+import dts
+import string
+import re
+import time
+import utils
+import datetime
+from scapy.utils import wrpcap, rdpcap
+from test_case import TestCase
+from exception import VerifyFailure
+from settings import HEADER_SIZE
+from etgen import IxiaPacketGenerator
+from qemu_kvm import QEMUKvm
+from packet import Packet, sniff_packets, load_sniff_packets
+
+
+class TestVhostPmdXstats(TestCase, IxiaPacketGenerator):
+
+    def set_up_all(self):
+        """
+        Run at the start of each test suite.
+        """
+        self.dut_ports = self.dut.get_ports(self.nic)
+        self.verify(len(self.dut_ports) >= 2, "Insufficient ports")
+        cores = self.dut.get_core_list("1S/4C/1T")
+        self.coremask = utils.create_mask(cores)
+       
+        self.dmac = self.dut.get_mac_address(self.dut_ports[0]) 
+        self.virtio1_mac = "52:54:00:00:00:01"
+
+        # build sample app
+        out = self.dut.build_dpdk_apps("./examples/vhost")
+        self.verify("Error" not in out, "compilation error 1")
+        self.verify("No such file" not in out, "compilation error 2")
+        #stop vhost firewalld.service
+        self.dut.send_expect("systemctl stop firewalld.service", "#")
+
+    def set_up(self):
+        """
+        Run before each test case.
+        Clean up any leftover vhost sockets and processes before launching.
+        """
+        self.dut.send_expect("rm -rf ./vhost.out", "#")
+        self.dut.send_expect("rm -rf ./vhost-net*", "#")
+        self.dut.send_expect("killall -s INT vhost-switch", "#")
+        self.dut.send_expect("killall -s INT qemu-system-x86_64", "#")
+
+    def vm_testpmd_start(self):
+        """
+        Start testpmd in vm
+        """
+        self.vm_testpmd = "./x86_64-native-linuxapp-gcc/app/testpmd -c 0x3 -n 4 -- -i --txqflags=0xf01"
+        if self.vm_dut is not None:
+            self.vm_dut.send_expect(self.vm_testpmd, "testpmd>", 60)
+            out = self.vm_dut.send_expect("start", "testpmd>")
+
+    def vm_tx_first_start(self):
+        if self.vm_dut is not None:
+            # Start tx_first
+            self.vm_dut.send_expect("set fwd mac", "testpmd>")
+            self.vm_dut.send_expect("start tx_first", "testpmd>")
+
+    def start_onevm(self):
+        """
+        Start One VM with one virtio device
+        """
+        self.vm_dut = None
+        self.vm = QEMUKvm(self.dut, 'vm0', 'vhost_pmd_xstats')
+        vm_params = {}
+        vm_params['driver'] = 'vhost-user'
+        vm_params['opt_path'] = './vhost-net'
+        vm_params['opt_mac'] = self.virtio1_mac
+        self.vm.set_vm_device(**vm_params)
+        try:
+            self.vm_dut = self.vm.start()
+            if self.vm_dut is None:
+                raise Exception("Set up VM ENV failed")
+        except Exception as e:
+            print utils.RED("Failure for %s" % str(e))
+        return True
+
+    def scapy_send_packet(self, pktsize, dmac, num=1):
+        """
+        Send packets of the given size to the DUT port from the tester.
+        """
+        txport = self.tester.get_local_port(self.dut_ports[0])
+        self.txItf = self.tester.get_interface(txport)
+        pkt = Packet(pkt_type='UDP', pkt_len=pktsize)
+        pkt.config_layer('ether', {'dst': dmac,})
+        pkt.send_pkt(tx_port=self.txItf, count=num)
+
+    def send_verify(self, scope, num):
+        """
+        Check that the xstats counter matching the given scope has
+        received at least the expected number of packets.
+        """
+        out = self.dut.send_expect("show port xstats %s" % self.dut_ports[1], "testpmd>", 60)
+        packet = re.search("rx_%s_packets:\s*(\d*)" % scope, out)
+        sum_packet = packet.group(1)
+        self.verify(int(sum_packet) >= num, "Insufficient packets received")
+
+    def prepare_start(self):
+        """
+        Prepare all of the conditions for the test: launch host testpmd,
+        start the VM and its testpmd, then start forwarding on both sides.
+        """
+        self.dut.send_expect("./x86_64-native-linuxapp-gcc/app/testpmd -c %s -n %s --socket-mem 1024,0 --vdev 'net_vhost0,iface=vhost-net,queues=1' -- -i --nb-cores=1" % (self.coremask,self.dut.get_memory_channels()), "testpmd>", 60)
+        self.start_onevm()
+        self.vm_testpmd_start()
+        self.dut.send_expect("set fwd mac", "testpmd>", 60)
+        self.dut.send_expect("start tx_first", "testpmd>", 60) 
+        self.vm_tx_first_start()
+
+    def vhost_pmd_xstats_based(self):
+        """
+        Verify that packets of each size are received and counted correctly in the vhost PMD xstats.
+        """
+        self.prepare_start()
+        sizes = [64, 65, 128, 256, 513, 1025]
+        scope = ''
+        for pktsize in sizes:
+            if pktsize == 64:
+                scope = 'size_64'
+            elif 65 <= pktsize <= 127:
+                scope = 'size_65_to_127'
+            elif 128 <= pktsize <= 255:
+                scope = 'size_128_to_255'
+            elif 256 <= pktsize <= 511:
+                scope = 'size_256_to_511'
+            elif 512 <= pktsize <= 1023:
+                scope = 'size_512_to_1023'
+            elif 1024 <= pktsize:
+                scope = 'size_1024_to_max'
+
+            self.scapy_send_packet(pktsize, self.dmac, 100)
+            self.send_verify(scope, 100)
+
+    def test_vhost_pmd_xstats_based(self):
+        """
+        Verify that packets are received and counted correctly in the vhost PMD xstats.
+        """
+        self.vhost_pmd_xstats_based()
+
+    def test_vhost_pmd_xstats_based_types(self):
+        """
+        Verify that different types of packets are received and counted correctly in the vhost PMD xstats.
+        """
+        self.prepare_start()
+        types = ['ff:ff:ff:ff:ff:ff','01:00:00:33:00:01']
+        scope = ''
+        for p in types:
+            if p == 'ff:ff:ff:ff:ff:ff':
+                scope = 'broadcast'
+                self.dmac = 'ff:ff:ff:ff:ff:ff'
+            elif p == '01:00:00:33:00:01':
+                scope = 'multicast'
+                self.dmac = '01:00:00:33:00:01'
+            self.scapy_send_packet(64, self.dmac, 100)
+            self.send_verify(scope, 100)
+
+    def test_clear_vhost_pmd_xstats(self):
+        """
+        Verify clearing the vhost PMD xstats.
+        """
+        self.vhost_pmd_xstats_based()
+        self.dut.send_expect("clear port xstats all", "testpmd>", 60)
+        out = self.dut.send_expect("show port xstats all", "testpmd>", 60)
+        size_packets = ['size_64', 'size_65_to_127', 'size_128_to_255', 'size_256_to_511', 'size_512_to_1023', 'size_1024_to_max']
+        for size_packet in size_packets:
+            packet = re.search("rx_%s_packets:\s*(\d*)" % size_packet, out)
+            sum_packet = packet.group(1)
+            self.verify(int(sum_packet) >= 0, "xstats counter %s not found or invalid after clear" % size_packet)
+
+    def test_longlasting_vhost_pmd_xstats(self):
+        """
+        Long-lasting test for vhost PMD xstats.
+        Send packets continuously and check that the xstats still work correctly.
+        """
+        self.dut.send_expect("./x86_64-native-linuxapp-gcc/app/testpmd -c %s -n %s --socket-mem 1024,0 --vdev 'net_vhost0,iface=vhost-net,queues=1' -- -i --nb-cores=1" % (self.coremask,self.dut.get_memory_channels()), "testpmd>", 60)
+        self.start_onevm()
+        self.vm_testpmd_start()
+        self.dut.send_expect("set fwd mac", "testpmd>", 60)
+        self.dut.send_expect("start tx_first", "testpmd>", 60)
+        if self.vm_dut is not None:
+            out = self.vm_dut.send_expect("start", "testpmd>")
+        date_old = datetime.datetime.now()
+        date_new = date_old + datetime.timedelta(minutes=1)
+        while True:
+            date_now = datetime.datetime.now()
+            self.scapy_send_packet(64, self.dmac, 1)
+            if date_now >= date_new:
+                break
+        out_0 = self.dut.send_expect("show port xstats %s" % self.dut_ports[0], "testpmd>", 60)
+        out_1 = self.dut.send_expect("show port xstats %s" % self.dut_ports[1], "testpmd>", 60)
+        rx_packet = re.search("rx_size_64_packets:\s*(\d*)" , out_1)
+        tx_packet = re.search("tx_good_packets:\s*(\d*)" , out_0)
+        tx_packets = tx_packet.group(1)
+        rx_packets = rx_packet.group(1)
+        self.verify(int(rx_packets) >= int(tx_packets), "Insufficient packets received")
+
+    def tear_down(self):
+        """
+        Run after each test case.
+        """
+        self.dut.kill_all()
+        time.sleep(2)
+
+    def tear_down_all(self):
+        """
+        Run after each test suite.
+        """
+        pass
-- 
1.9.3


