[dts] [PATCH] tests: Add VF packet drop and performance test

Liu, Yong yong.liu at intel.com
Mon Feb 1 09:36:25 CET 2016


Hi Yulong,
Some questions for VF performance test plan.
BTW, as I know, the l3fwd case focuses on performance validation
and has been optimized for throughput.


On 01/19/2016 04:07 PM, Yulong Pei wrote:
> 1.vf_perf.cfg: vm setting and qemu parameters.
> 2.vf_perf_test_plan.rst: test plan, describe test cases.
> 3.TestSuite_vf_perf.py: implement test cases according to the test plan.
>
> Signed-off-by: Yulong Pei <yulong.pei at intel.com>
> ---
>   conf/vf_perf.cfg                 | 105 ++++++++++++++++++++
>   test_plans/vf_perf_test_plan.rst | 179 ++++++++++++++++++++++++++++++++++
>   tests/TestSuite_vf_perf.py       | 201 +++++++++++++++++++++++++++++++++++++++
>   3 files changed, 485 insertions(+)
>   create mode 100644 conf/vf_perf.cfg
>   create mode 100644 test_plans/vf_perf_test_plan.rst
>   create mode 100644 tests/TestSuite_vf_perf.py
>
> diff --git a/conf/vf_perf.cfg b/conf/vf_perf.cfg
> new file mode 100644
> index 0000000..986d289
> --- /dev/null
> +++ b/conf/vf_perf.cfg
> @@ -0,0 +1,105 @@
> +# QEMU options
> +# name
> +#       name: vm0
> +#
> +# enable_kvm
> +#       enable: [yes | no]
> +#
> +# cpu
> +#       model: [host | core2duo | ...]
> +#           usage:
> +#               choose model value from the command
> +#                   qemu-system-x86_64 -cpu help
> +#       number: '4' #number of vcpus
> +#       cpupin: '3 4 5 6' # host cpu list
> +#
> +# mem
> +#       size: 1024
> +#
> +# disk
> +#       file: /path/to/image/test.img
> +#
> +# net
> +#        type: [nic | user | tap | bridge | ...]
> +#           nic
> +#               opt_vlan: 0
> +#                   note: Default is 0.
> +#               opt_macaddr: 00:00:00:00:01:01
> +#                   note: if creating a nic, it's better to specify a MAC;
> +#                         otherwise a random one will be generated.
> +#               opt_model:["e1000" | "virtio" | "i82551" | ...]
> +#                   note: Default is e1000.
> +#               opt_name: 'nic1'
> +#               opt_addr: ''
> +#                   note: PCI cards only.
> +#               opt_vectors:
> +#                   note: This option currently only affects virtio cards.
> +#           user
> +#               opt_vlan: 0
> +#                   note: default is 0.
> +#               opt_hostfwd: [tcp|udp]:[hostaddr]:hostport-[guestaddr]:guestport
> +#                   note: If not specified, it will be set automatically.
> +#           tap
> +#               opt_vlan: 0
> +#                   note: default is 0.
> +#               opt_br: br0
> +#                   note: if choosing tap, need to specify bridge name,
> +#                         else it will be br0.
> +#               opt_script: QEMU_IFUP_PATH
> +#                   note: if not specified, default is self.QEMU_IFUP_PATH.
> +#               opt_downscript: QEMU_IFDOWN_PATH
> +#                   note: if not specified, default is self.QEMU_IFDOWN_PATH.
> +#
> +# device
> +#       driver: [pci-assign | virtio-net-pci | ...]
> +#           pci-assign
> +#               prop_host: 08:00.0
> +#               prop_addr: 00:00:00:00:01:02
> +#           virtio-net-pci
> +#               prop_netdev: mynet1
> +#               prop_id: net1
> +#               prop_mac: 00:00:00:00:01:03
> +#               prop_bus: pci.0
> +#               prop_addr: 0x3
> +#
> +# monitor
> +#       port: 6061
> +#           note: if adding monitor to vm, need to specify
> +#                 this port, else it will get a free port
> +#                 on the host machine.
> +#
> +# qga
> +#       enable: [yes | no]
> +#
> +# serial_port
> +#       enable: [yes | no]
> +#
> +# vnc
> +#       displayNum: 1
> +#           note: you can choose a number not used on the host.
> +#
> +# daemon
> +#       enable: 'yes'
> +#           note:
> +#               By default the VM starts daemonized.
> +#               Starting it in the foreground (on stdin) is not supported yet.
> +
> +# vm configuration for pmd sriov case
> +[vm0]
> +cpu =
> +    model=host,number=4,cpupin=5 6 7 8;
> +disk =
> +    file=/home/image/sriov-fc20-1.img;
> +login =
> +    user=root,password=tester;
> +net =
> +   type=nic,opt_vlan=0;
> +   type=user,opt_vlan=0;
> +monitor =
> +    port=;
> +qga =
> +    enable=yes;
> +vnc =
> +    displayNum=1;
> +daemon =
> +    enable=yes;
> diff --git a/test_plans/vf_perf_test_plan.rst b/test_plans/vf_perf_test_plan.rst
> new file mode 100644
> index 0000000..059e1bd
> --- /dev/null
> +++ b/test_plans/vf_perf_test_plan.rst
> @@ -0,0 +1,179 @@
> +.. Copyright (c) <2015>, Intel Corporation
> +      All rights reserved.
> +
> +   Redistribution and use in source and binary forms, with or without
> +   modification, are permitted provided that the following conditions
> +   are met:
> +
> +   - Redistributions of source code must retain the above copyright
> +     notice, this list of conditions and the following disclaimer.
> +
> +   - Redistributions in binary form must reproduce the above copyright
> +     notice, this list of conditions and the following disclaimer in
> +     the documentation and/or other materials provided with the
> +     distribution.
> +
> +   - Neither the name of Intel Corporation nor the names of its
> +     contributors may be used to endorse or promote products derived
> +     from this software without specific prior written permission.
> +
> +   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
> +   "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
> +   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS
> +   FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE
> +   COPYRIGHT OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT,
> +   INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES
> +   (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR
> +   SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION)
> +   HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT,
> +   STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
> +   ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED
> +   OF THE POSSIBILITY OF SUCH DAMAGE.
> +
> +Test Case 1: Measure packet loss with kernel PF & dpdk VF
> +==========================================================
> +
> +1. Get the PCI device ID of the DUT, for example:
> +
> +./dpdk_nic_bind.py --st
> +
> +0000:81:00.0 'Ethernet Controller X710 for 10GbE SFP+' if=ens259f0 drv=i40e unused=
> +
> +2. Create 2 VFs from 1 PF:
> +
> +echo 2 > /sys/bus/pci/devices/0000\:81\:00.0/sriov_numvfs
> +./dpdk_nic_bind.py --st
> +
> +0000:81:00.0 'Ethernet Controller X710 for 10GbE SFP+' if=ens259f0 drv=i40e unused=igb_uio
> +0000:81:02.0 'XL710/X710 Virtual Function' unused=
> +0000:81:02.1 'XL710/X710 Virtual Function' unused=
> +
> +3. Detach the VFs from the host and bind them to the pci-stub driver:
> +
> +virsh nodedev-detach pci_0000_81_02_0;
> +virsh nodedev-detach pci_0000_81_02_1;
> +
> +./dpdk_nic_bind.py --st
> +
> +0000:81:02.0 'XL710/X710 Virtual Function' if= drv=pci-stub unused=igb_uio
> +0000:81:02.1 'XL710/X710 Virtual Function' if= drv=pci-stub unused=igb_uio
> +
> +4. Pass through VFs 81:02.0 & 81:02.1 to vm0 and start vm0:
> +
> +/usr/bin/qemu-system-x86_64  -name vm0 -enable-kvm \
> +-cpu host -smp 4 -m 2048 -drive file=/home/image/sriov-fc20-1.img -vnc :1 \
> +-device pci-assign,host=81:02.0,id=pt_0 \
> +-device pci-assign,host=81:02.1,id=pt_1
> +
> +5. Log in to vm0 and get the VF PCI device IDs in vm0 (assume they are 00:06.0 & 00:07.0). Bind them to the igb_uio driver,
> +and then start testpmd in MAC forwarding mode:
> +
> +./tools/dpdk_nic_bind.py --bind=igb_uio 00:06.0 00:07.0
> +./x86_64-native-linuxapp-gcc/app/testpmd -c 0x0f -n 4 -w 00:06.0 -w 00:07.0 -- -i --portmask=0x3 --txqflags=0
> +
> +testpmd> set fwd mac
> +testpmd> start
> +
> +6. Use the Ixia traffic generator to send 64-byte packets to the VF at 10% line rate; verify that the packet loss rate is < 0.0001.
> +
Should we also validate packet sequence and content integrity?
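One more thought on step 2: the VF-creation plumbing is simple enough to capture in a couple of helpers. This is only an illustrative sketch of the sysfs convention the plan relies on; the function names are mine, not DTS APIs:

```python
# Sketch of the sysfs interface step 2 uses to create VFs from a
# kernel PF. Helper names are illustrative, not part of DTS.
def sriov_numvfs_path(pf_pci):
    """Return the sysfs file that controls the VF count for a PF."""
    return "/sys/bus/pci/devices/%s/sriov_numvfs" % pf_pci

def create_vfs_cmd(pf_pci, num_vfs):
    """Return the shell command the test plan runs to create VFs."""
    return "echo %d > %s" % (num_vfs, sriov_numvfs_path(pf_pci))
```

Writing 0 to the same file removes the VFs again, which is what the teardown path ultimately does.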

> +Test Case 2: Measure performance with kernel PF & dpdk VF
> +=========================================================
> +
> +1. Set up the test environment as in Test Case 1, steps 1-5.
> +
> +2. Measure the maximum RFC2544 throughput for the following packet sizes:
> +
> +frameSizes = [64, 128, 256, 512, 1024, 1280, 1518]
> +
> +The output format should be as below, with figures given in Mpps:
> +
We also need to measure performance with multiple queues.

> ++------------+-------+
> +| Size\Cores |  all  |
> ++------------+-------+
> +| 64-byte    |       |
> ++------------+-------+
> +| 128-byte   |       |
> ++------------+-------+
> +| 256-byte   |       |
> ++------------+-------+
> +| 512-byte   |       |
> ++------------+-------+
> +| 1024-byte  |       |
> ++------------+-------+
> +| 1280-byte  |       |
> ++------------+-------+
> +| 1518-byte  |       |
> ++------------+-------+
> +
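On the RFC2544 measurement itself: it is conceptually a zero-loss binary search over the offered rate. A minimal sketch, assuming a `measure(rate_percent)` callback that returns the observed loss fraction at that rate (the callback is my assumption, not a DTS or Ixia API):

```python
# Zero-loss binary search in the spirit of RFC 2544 throughput tests.
# measure(rate) -> loss fraction at that offered rate (assumed hook).
def rfc2544_throughput(measure, tolerance=0.5):
    """Binary-search the highest line-rate percentage with zero loss."""
    lo, hi = 0.0, 100.0
    best = 0.0
    while hi - lo > tolerance:
        rate = (lo + hi) / 2.0
        if measure(rate) == 0.0:   # no loss: try a higher rate
            best, lo = rate, rate
        else:                      # loss observed: back off
            hi = rate
    return best
```

Each frame size in `frameSizes` would get its own search, and the result at the converged rate is what goes into the Mpps table.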
> +
> +Test Case 3: Measure performance with dpdk PF & dpdk VF
> +=======================================================
> +
> +1. Get the PCI device ID of the DUT and bind it to the igb_uio driver, for example:
> +
> +./dpdk_nic_bind.py --st
> +
> +0000:81:00.0 'Ethernet Controller X710 for 10GbE SFP+' if=ens259f0 drv=i40e unused=
> +
> +./dpdk_nic_bind.py --bind=igb_uio 81:00.0
> +
> +2. Create 2 VFs from 1 PF:
> +
> +echo 2 > /sys/bus/pci/devices/0000\:81\:00.0/max_vfs
> +./dpdk_nic_bind.py --st
> +
> +0000:81:00.0 'Ethernet Controller X710 for 10GbE SFP+' if=ens259f0 drv=i40e unused=igb_uio
> +0000:81:02.0 'XL710/X710 Virtual Function' unused=
> +0000:81:02.1 'XL710/X710 Virtual Function' unused=
> +
> +3. Detach the VFs from the host and bind them to the pci-stub driver:
> +
> +./dpdk_nic_bind.py --bind=pci-stub 81:02.0 81:02.1
> +./dpdk_nic_bind.py --st
> +
> +0000:81:02.0 'XL710/X710 Virtual Function' if= drv=pci-stub unused=igb_uio
> +0000:81:02.1 'XL710/X710 Virtual Function' if= drv=pci-stub unused=igb_uio
> +
> +4. Bind PF 81:00.0 to testpmd and start it on the host:
> +
> +./x86_64-native-linuxapp-gcc/app/testpmd -c 0x0f -n 4 -w 81:00.0 -- -i --portmask=0x1 --txqflags=0
> +
> +5. Pass through VFs 81:02.0 & 81:02.1 to vm0 and start vm0:
> +
> +/usr/bin/qemu-system-x86_64  -name vm0 -enable-kvm \
> +-cpu host -smp 4 -m 2048 -drive file=/home/image/sriov-fc20-1.img -vnc :1 \
> +-device pci-assign,host=81:02.0,id=pt_0 \
> +-device pci-assign,host=81:02.1,id=pt_1
> +
> +6. Log in to vm0 and get the VF PCI device IDs in vm0 (assume they are 00:06.0 & 00:07.0). Bind them to the igb_uio driver,
> +and then start testpmd in MAC forwarding mode:
> +
> +./tools/dpdk_nic_bind.py --bind=igb_uio 00:06.0 00:07.0
> +./x86_64-native-linuxapp-gcc/app/testpmd -c 0x0f -n 4 -w 00:06.0 -w 00:07.0 -- -i --portmask=0x3 --txqflags=0
> +
> +testpmd> set fwd mac
> +testpmd> start
> +
> +7. Measure the maximum RFC2544 throughput for the following packet sizes:
> +
> +frameSizes = [64, 128, 256, 512, 1024, 1280, 1518]
> +
> +The output format should be as below, with figures given in Mpps:
> +
> ++------------+-------+
> +| Size\Cores |  all  |
> ++------------+-------+
> +| 64-byte    |       |
> ++------------+-------+
> +| 128-byte   |       |
> ++------------+-------+
> +| 256-byte   |       |
> ++------------+-------+
> +| 512-byte   |       |
> ++------------+-------+
> +| 1024-byte  |       |
> ++------------+-------+
> +| 1280-byte  |       |
> ++------------+-------+
> +| 1518-byte  |       |
> ++------------+-------+
> diff --git a/tests/TestSuite_vf_perf.py b/tests/TestSuite_vf_perf.py
> new file mode 100644
> index 0000000..c95293f
> --- /dev/null
> +++ b/tests/TestSuite_vf_perf.py
> @@ -0,0 +1,201 @@
> +# <COPYRIGHT_TAG>
> +
> +import re
> +import time
> +
> +import dts
> +from qemu_kvm import QEMUKvm
> +from test_case import TestCase
> +from pmd_output import PmdOutput
> +from etgen import IxiaPacketGenerator
> +
> +VM_CORES_MASK = 'all'
> +
> +class TestVfPerf(TestCase, IxiaPacketGenerator):
> +
> +    def set_up_all(self):
> +
> +        self.tester.extend_external_packet_generator(TestVfPerf, self)
> +
> +        self.dut_ports = self.dut.get_ports(self.nic)
> +        self.verify(len(self.dut_ports) >= 1, "Insufficient ports")
> +
> +        self.core_configs = []
> +        self.core_configs.append({'cores': 'all', 'pps': {}})
> +
> +        self.vm0 = None
> +
> +    def set_up(self):
> +
> +        self.setup_2vf_1vm_env_flag = 0
> +
> +    def setup_2vf_1vm_env(self, driver='default'):
> +
> +        self.used_dut_port = self.dut_ports[0]
> +        self.dut.generate_sriov_vfs_by_port(self.used_dut_port, 2, driver=driver)
> +        self.sriov_vfs_port = self.dut.ports_info[self.used_dut_port]['vfs_port']
> +
> +        try:
> +
> +            for port in self.sriov_vfs_port:
> +                print port.pci
> +                port.bind_driver('pci-stub')
> +
> +            time.sleep(1)
> +            vf0_prop = {'opt_host': self.sriov_vfs_port[0].pci}
> +            vf1_prop = {'opt_host': self.sriov_vfs_port[1].pci}
> +
> +            for port_id in self.dut_ports:
> +                if port_id == self.used_dut_port:
> +                    continue
> +                port = self.dut.ports_info[port_id]['port']
> +                port.bind_driver()
> +
> +            if driver == 'igb_uio':
> +                self.host_testpmd = PmdOutput(self.dut)
> +                eal_param = '-b %(vf0)s -b %(vf1)s' % {'vf0': self.sriov_vfs_port[0].pci,
> +                                                       'vf1': self.sriov_vfs_port[1].pci}
> +                self.host_testpmd.start_testpmd("1S/2C/2T", eal_param=eal_param)
> +
> +            # set up VM0 ENV
> +            self.vm0 = QEMUKvm(self.dut, 'vm0', 'vf_perf')
> +            self.vm0.set_vm_device(driver='pci-assign', **vf0_prop)
> +            self.vm0.set_vm_device(driver='pci-assign', **vf1_prop)
> +            self.vm_dut_0 = self.vm0.start()
> +            if self.vm_dut_0 is None:
> +                raise Exception("Set up VM0 ENV failed!")
> +
> +            self.setup_2vf_1vm_env_flag = 1
> +        except Exception as e:
> +            self.destroy_2vf_1vm_env()
> +            raise Exception(e)
> +
> +    def destroy_2vf_1vm_env(self):
> +        if getattr(self, 'vm0', None):
> +            self.vm0_testpmd.execute_cmd('stop')
> +            self.vm0_testpmd.execute_cmd('quit', '# ')
> +            self.vm0_testpmd = None
> +            self.vm0_dut_ports = None
> +            self.vm_dut_0 = None
> +            self.vm0.stop()
> +            self.vm0 = None
> +
> +        if getattr(self, 'host_testpmd', None):
> +            self.host_testpmd.execute_cmd('quit', '# ')
> +            self.host_testpmd = None
> +
> +        if getattr(self, 'used_dut_port', None):
> +            self.dut.destroy_sriov_vfs_by_port(self.used_dut_port)
> +            port = self.dut.ports_info[self.used_dut_port]['port']
> +            port.bind_driver()
> +            self.used_dut_port = None
> +
> +        for port_id in self.dut_ports:
> +            port = self.dut.ports_info[port_id]['port']
> +            port.bind_driver()
> +
> +        self.setup_2vf_1vm_env_flag = 0
> +
> +    def test_perf_kernel_pf_dpdk_vf_packet_loss(self):
> +
> +        self.setup_2vf_1vm_env(driver='')
> +
> +        self.vm0_dut_ports = self.vm_dut_0.get_ports('any')
> +        port_id_0 = 0
> +        self.vm0_testpmd = PmdOutput(self.vm_dut_0)
> +        self.vm0_testpmd.start_testpmd(VM_CORES_MASK)
> +        self.vm0_testpmd.execute_cmd('show port info all')
> +        pmd0_vf0_mac = self.vm0_testpmd.get_port_mac(port_id_0)
> +        self.vm0_testpmd.execute_cmd('set fwd mac')
> +        self.vm0_testpmd.execute_cmd('start')
> +
> +        time.sleep(2)
> +
> +        tx_port = self.tester.get_local_port(self.dut_ports[0])
> +        rx_port = tx_port
> +        dst_mac = pmd0_vf0_mac
> +        src_mac = self.tester.get_mac(tx_port)
> +
> +        self.tester.scapy_append('dmac="%s"' % dst_mac)
> +        self.tester.scapy_append('smac="%s"' % src_mac)
> +        self.tester.scapy_append('flows = [Ether(src=smac, dst=dmac)/IP(len=46)/UDP(len=26)/("X"*18)]')
> +        self.tester.scapy_append('wrpcap("test.pcap", flows)')
> +        self.tester.scapy_execute()
> +
> +        loss, _, _ = self.tester.traffic_generator_loss([(tx_port, rx_port, "test.pcap")], 10, delay=180)
> +
> +        self.verify(loss < 0.0001, "Excessive packet loss when sending 64-byte packets at 10% line rate")
> +
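For reviewers unfamiliar with `traffic_generator_loss`: the bound checked here is a loss *fraction* derived from the generator's TX/RX counters. A minimal sketch of that arithmetic (helper names are mine, not the DTS API):

```python
# Sketch of the loss criterion used in this test case: the fraction of
# transmitted packets that never returned, compared to the 0.0001 bound.
def loss_fraction(tx_pkts, rx_pkts):
    """Fraction of transmitted packets lost in flight."""
    if tx_pkts == 0:
        return 0.0
    return float(tx_pkts - rx_pkts) / tx_pkts

def loss_ok(tx_pkts, rx_pkts, bound=0.0001):
    """True when the loss rate meets the test plan's bound."""
    return loss_fraction(tx_pkts, rx_pkts) < bound
```

So a 10-second run at 10% of 10G line rate with 64-byte frames (roughly 1.5M pps offered) tolerates only on the order of a thousand lost packets.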
> +    def measure_vf_performance(self, driver='default'):
> +
> +        if driver == 'igb_uio':
> +            self.setup_2vf_1vm_env(driver='igb_uio')
> +        else:
> +            self.setup_2vf_1vm_env(driver='')
> +
> +        self.vm0_dut_ports = self.vm_dut_0.get_ports('any')
> +        port_id_0 = 0
> +        self.vm0_testpmd = PmdOutput(self.vm_dut_0)
> +        self.vm0_testpmd.start_testpmd(VM_CORES_MASK)
> +        self.vm0_testpmd.execute_cmd('show port info all')
> +        pmd0_vf0_mac = self.vm0_testpmd.get_port_mac(port_id_0)
> +        self.vm0_testpmd.execute_cmd('set fwd mac')
> +        self.vm0_testpmd.execute_cmd('start')
> +
> +        time.sleep(2)
> +
> +        frameSizes = [64, 128, 256, 512, 1024, 1280, 1518]
> +
> +        for config in self.core_configs:
> +            self.dut.kill_all()
> +            cores = self.dut.get_core_list(config['cores'])
> +
> +            tx_port = self.tester.get_local_port(self.dut_ports[0])
> +            rx_port = tx_port
> +            dst_mac = pmd0_vf0_mac
> +            src_mac = self.tester.get_mac(tx_port)
> +
> +            global size
> +            for size in frameSizes:
> +                self.tester.scapy_append('dmac="%s"' % dst_mac)
> +                self.tester.scapy_append('smac="%s"' % src_mac)
> +                self.tester.scapy_append('flows = [Ether(src=smac, dst=dmac)/("X"*%d)]' % (size - 18))
> +                self.tester.scapy_append('wrpcap("test.pcap", flows)')
> +                self.tester.scapy_execute()
> +                tgenInput = []
> +                tgenInput.append((tx_port, rx_port, "test.pcap"))
> +                _, pps = self.tester.traffic_generator_throughput(tgenInput)
> +                config['pps'][size] = pps
> +
> +        for n in range(len(self.core_configs)):
> +            for size in frameSizes:
> +                self.verify(
> +                    self.core_configs[n]['pps'][size] != 0, "No traffic detected")
> +
> +        # Print results
> +        dts.results_table_add_header(['Frame size'] + [n['cores'] for n in self.core_configs])
> +        for size in frameSizes:
> +            dts.results_table_add_row([size] + [n['pps'][size] for n in self.core_configs])
> +        dts.results_table_print()
> +
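A note on the `"X" * (size - 18)` payload above, since the constant is easy to misread: the 18 bytes are the 14-byte Ethernet header scapy adds plus the 4-byte CRC the NIC appends on the wire, so the payload supplies the remainder of the target frame size. A sketch of that arithmetic:

```python
# Why the suite builds payloads as "X" * (size - 18): header plus CRC
# account for 18 bytes of every on-wire frame.
ETHER_HDR = 14   # dst MAC + src MAC + ethertype
CRC_LEN = 4      # frame check sequence, appended by the NIC

def payload_len(frame_size):
    """Payload bytes needed so the on-wire frame totals frame_size."""
    return frame_size - ETHER_HDR - CRC_LEN
```

This is also why 64-byte frames carry the familiar 46-byte minimum payload.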
> +    def test_perf_kernel_pf_dpdk_vf_performance(self):
> +
> +        self.measure_vf_performance(driver='')
> +
> +    def test_perf_dpdk_pf_dpdk_vf_performance(self):
> +
> +        self.measure_vf_performance(driver='igb_uio')
> +
> +    def tear_down(self):
> +
> +        if self.setup_2vf_1vm_env_flag == 1:
> +            self.destroy_2vf_1vm_env()
> +
> +    def tear_down_all(self):
> +
> +        if getattr(self, 'vm0', None):
> +            self.vm0.stop()
> +
> +        for port_id in self.dut_ports:
> +            self.dut.destroy_sriov_vfs_by_port(port_id)
> +


