[dts] [PATCH] add VEB_test_plan to test_plans

Liu, Yong yong.liu at intel.com
Fri Oct 14 16:04:25 CEST 2016


Thanks Yuan, I can easily understand your test plan. BTW, please restrict each line to fewer than 79 characters.

> -----Original Message-----
> From: dts [mailto:dts-bounces at dpdk.org] On Behalf Of Yuan Peng
> Sent: Friday, October 14, 2016 8:50 AM
> To: dts at dpdk.org
> Cc: Peng, Yuan
> Subject: [dts] [PATCH] add VEB_test_plan to test_plans
> 
> From: pengyuan <yuan.peng at intel.com>
> 
> Signed-off-by: pengyuan <yuan.peng at intel.com>
> 
> diff --git a/test_plans/VEB_test_plan.rst b/test_plans/VEB_test_plan.rst
> new file mode 100644
> index 0000000..6fad452
> --- /dev/null
> +++ b/test_plans/VEB_test_plan.rst
> @@ -0,0 +1,467 @@
> +.. Copyright (c) <2011>, Intel Corporation
> +      All rights reserved.
> +
> +   Redistribution and use in source and binary forms, with or without
> +   modification, are permitted provided that the following conditions
> +   are met:
> +
> +   - Redistributions of source code must retain the above copyright
> +     notice, this list of conditions and the following disclaimer.
> +
> +   - Redistributions in binary form must reproduce the above copyright
> +     notice, this list of conditions and the following disclaimer in
> +     the documentation and/or other materials provided with the
> +     distribution.
> +
> +   - Neither the name of Intel Corporation nor the names of its
> +     contributors may be used to endorse or promote products derived
> +     from this software without specific prior written permission.
> +
> +   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
> +   "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
> +   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS
> +   FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE
> +   COPYRIGHT OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT,
> +   INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES
> +   (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR
> +   SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION)
> +   HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT,
> +   STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
> +   ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED
> +   OF THE POSSIBILITY OF SUCH DAMAGE.
> +
> +=====================================
> +VEB Switch and floating VEB Test Plan
> +=====================================
> +
> +VEB Switching Introduction
> +==========================
> +
> +IEEE EVB tutorial: http://www.ieee802.org/802_tutorials/2009-11/evb-tutorial-draft-20091116_v09.pdf
> +
> +Virtual Ethernet Bridge (VEB) - This is an IEEE EVB term. A VEB is a VLAN
> +Bridge internal to Fortville that bridges the traffic of multiple VSIs over
> +an internal virtual network.
> +
> +Virtual Ethernet Port Aggregator (VEPA) - This is an IEEE EVB term. A VEPA
> +multiplexes the traffic of one or more VSIs onto a single Fortville
> +Ethernet port. The biggest difference between a VEB and a VEPA is that a
> +VEB can switch packets internally between VSIs, whereas a VEPA cannot.
> +
> +Virtual Station Interface (VSI) - This is an IEEE EVB term that defines
> +the properties of a virtual machine's (or a physical machine's) connection
> +to the network. Each downstream v-port on a Fortville VEB or VEPA defines
> +a VSI. A standards-based definition of VSI properties enables network
> +management tools to perform virtual machine migration and associated
> +network re-configuration in a vendor-neutral manner.
> +
> +In short, a VEB is an in-NIC switch (MAC/VLAN based) that supports VF->VF,
> +PF->VF and VF->PF packet forwarding through the NIC's internal switch. It
> +is similar to Niantic's SR-IOV switch.
> +
> +Floating VEB Introduction
> +=========================
> +
> +Floating VEB is based on VEB Switching. It will address 2 problems:
> +
> +Dependency on PF: when the physical port link is down, the VEB/VEPA does
> +not work normally. Even if only data forwarding between the VFs is
> +required, one PF port is wasted just to create the related VEB.
> +
> +Ensure that all traffic from a VF can only be forwarded among the VFs
> +connected to the floating VEB, and cannot be forwarded to the outside
> +world.
> +
> +Prerequisites for VEB testing
> +=============================
> +
> +1. Get the pci device id of DUT, for example::
> +
> +    ./dpdk_nic_bind.py --st
> +
> +    0000:05:00.0 'Ethernet Controller X710 for 10GbE SFP+' if=ens785f0 drv=i40e unused=
> +
> +2.1  Host PF in kernel driver. Create 2 VFs from 1 PF with the kernel
> +     driver, and set the VF MAC addresses at the PF::
> +
> +    echo 2 > /sys/bus/pci/devices/0000\:05\:00.0/sriov_numvfs
> +    ./dpdk_nic_bind.py --st
> +
> +    0000:05:02.0 'XL710/X710 Virtual Function' unused=
> +    0000:05:02.1 'XL710/X710 Virtual Function' unused=
> +
> +    ip link set ens785f0 vf 0 mac 00:11:22:33:44:11
> +    ip link set ens785f0 vf 1 mac 00:11:22:33:44:12
> +
> +2.2  Host PF in DPDK driver. Create 2 VFs from 1 PF with the DPDK driver::
> +
> +    ./dpdk_nic_bind.py -b igb_uio 05:00.0
> +    echo 2 >/sys/bus/pci/devices/0000:05:00.0/max_vfs
> +    ./dpdk_nic_bind.py --st
> +    0000:05:02.0 'XL710/X710 Virtual Function' unused=i40evf,igb_uio
> +    0000:05:02.1 'XL710/X710 Virtual Function' unused=i40evf,igb_uio
> +
> +3. Bind the VFs to dpdk driver::
> +
> +    ./tools/dpdk-devbind.py -b igb_uio 05:02.0 05:02.1
> +
> +4. Reserve huge pages memory(before using DPDK)::
> +
> +     echo 4096 > /sys/devices/system/node/node0/hugepages/hugepages-2048kB/nr_hugepages
> +     mkdir /mnt/huge
> +     mount -t hugetlbfs nodev /mnt/huge
> +
> +
> +Test Case1: VEB Switching Inter VF-VF MAC switch
> +===================================================
> +
> +Summary: kernel PF, then create 2 VFs. The VFs run DPDK testpmd. Send
> +traffic to VF1, which forwards it to VF2's MAC address; check that VF2
> +receives the packets. Check the inter VF-VF MAC switch.
> +
> +Details:
> +
> +1. In VF1, run testpmd::
> +
> +   ./x86_64-native-linuxapp-gcc/app/testpmd -c 0x3 -n 4 \
> +   --socket-mem 1024,1024 -w 05:02.0 --file-prefix=test1 \
> +   -- -i --crc-strip --eth-peer=0,00:11:22:33:44:12
> +   testpmd>set fwd mac
> +   testpmd>set promisc all off
> +   testpmd>start
> +
> +   In VF2, run testpmd::
> +
> +   ./x86_64-native-linuxapp-gcc/app/testpmd -c 0xa -n 4 \
> +   --socket-mem 1024,1024 -w 05:02.1 --file-prefix=test2 -- -i --crc-strip
> +   testpmd>set fwd mac
> +   testpmd>set promisc all off
> +   testpmd>start
> +
> +
> +2. Send 100 packets to VF1's MAC address and check that VF2 receives all
> +   100 packets. Check that the packet content is not corrupted.
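> +
> +   A minimal scapy sketch for this step, run on the tester (the interface
> +   name "ens785f1" and the VF1 MAC 00:11:22:33:44:11 follow the
> +   prerequisite section and may differ on your setup)::
> +
> +       from scapy.all import Ether, IP, Raw, sendp
> +
> +       # send 100 packets to VF1's MAC; VF2 should receive and count them
> +       pkt = Ether(dst="00:11:22:33:44:11")/IP()/Raw('x'*40)
> +       sendp(pkt, iface="ens785f1", count=100)
> +
> +   Packet content can be inspected on the VF2 side with "testpmd>set
> +   verbose 1" issued before "start".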
> +
> +Test Case2: VEB Switching Inter VF-VF MAC/VLAN switch
> +========================================================
> +
> +Summary: kernel PF, then create 2 VFs; assign VF1 with VLAN=1 and VF2 with
> +VLAN=2. The VFs run DPDK testpmd. Send traffic to VF1 with VLAN=1 and let
> +it forward to VF2; this should not work since they are not in the same
> +VLAN. Then set VF2 with VLAN=1 and send traffic to VF1 with VLAN=1 again;
> +now VF2 can receive the packets. Check the inter VF MAC/VLAN switch.
> +
> +Details:
> +
> +1. Set the VLAN id of VF1 and VF2::
> +
> +    ip link set ens785f0 vf 0 vlan 1
> +    ip link set ens785f0 vf 1 vlan 2
> +
> +2. In VF1, run testpmd::
> +
> +   ./testpmd -c 0xf -n 4 --socket-mem 1024,1024 -w 0000:05:02.0 \
> +   --file-prefix=test1 -- -i --crc-strip --eth-peer=0,00:11:22:33:44:12
> +   testpmd>set fwd mac
> +   testpmd>set promisc all off
> +   testpmd>start
> +
> +   In VF2, run testpmd::
> +
> +   ./testpmd -c 0xf0 -n 4 --socket-mem 1024,1024 -w 0000:05:02.1 \
> +   --file-prefix=test2 -- -i --crc-strip
> +   testpmd>set fwd rxonly    //otherwise VF2 also counts forwarded packets
> +   testpmd>set promisc all off
> +   testpmd>start
> +
> +
> +3. Send 100 packets with VF1's MAC address and VLAN=1; check that VF2 does
> +   not get the packets since they are not in the same VLAN.
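> +
> +   A minimal scapy sketch for this negative check, run on the tester (the
> +   interface name and the VF1 MAC follow the prerequisite section)::
> +
> +       from scapy.all import Ether, Dot1Q, IP, Raw, sendp
> +
> +       # 100 packets to VF1's MAC with VLAN=1; VF2 (VLAN=2) should see none
> +       pkt = Ether(dst="00:11:22:33:44:11")/Dot1Q(vlan=1)/IP()/Raw('x'*40)
> +       sendp(pkt, iface="ens785f1", count=100)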
> +
> +4. Change the VLAN id of VF2::
> +
> +    ip link set ens785f0 vf 1 vlan 1
> +
> +5. Send 100 packets with VF1's MAC address and VLAN=1; check that VF2 now
> +   gets all 100 packets since they are in the same VLAN. Check that the
> +   packet content is not corrupted::
> +
> +
> +    sendp([Ether(dst="00:11:22:33:44:11")/Dot1Q(vlan=1)/IP()/Raw('x'*40)],
> +          iface="ens785f1")
> +
> +
> +Test Case3: VEB Switching Inter PF-VF MAC switch
> +===================================================
> +
> +Summary: DPDK PF, then create 1 VF. The PF in the host runs DPDK testpmd.
> +Send traffic from the PF to VF1 and ensure PF->VF1 works (with VF1 in
> +promiscuous mode); send traffic from VF1 to the PF and ensure VF1->PF
> +works.
> +
> +Details:
> +
> +1. vf->pf
> +   In host, launch testpmd::
> +
> +   ./testpmd -c 0x3 -n 4 -- -i
> +   testpmd>set fwd rxonly
> +   testpmd>set promisc all off
> +   testpmd>start
> +
> +   In VM1, run testpmd::
> +
> +   ./testpmd -c 0x3 -n 4 -- -i --eth-peer=0,pf_mac_addr
> +   testpmd>set fwd txonly
> +   testpmd>set promisc all off
> +   testpmd>start
> +
> +2. pf->vf
> +   In host, launch testpmd::
> +
> +   ./testpmd -c 0x3 -n 4 -- -i --eth-peer=0,vf1_mac_addr
> +   testpmd>set fwd txonly
> +   testpmd>set promisc all off
> +   testpmd>start
> +
> +   In VM1, run testpmd::
> +
> +   ./testpmd -c 0x3 -n 4 -- -i
> +   testpmd>mac_addr add 0 vf1_mac_addr
> +   testpmd>set fwd rxonly
> +   testpmd>set promisc all off
> +   testpmd>start
> +
> +3. tester->vf: send packets from the tester to VF1's MAC address (the
> +   check is in step 6).
> +
> +4. Send 100 packets with the PF's MAC address from the VF; check that the
> +   PF receives all 100 packets, so VF1->PF is working. Check that the
> +   packet content is not corrupted.
> +
> +5. Send 100 packets with the VF's MAC address from the PF; check that VF1
> +   receives all 100 packets, so PF->VF1 is working. Check that the packet
> +   content is not corrupted.
> +
> +6. Send 100 packets with the VF's MAC address from the tester; check that
> +   VF1 receives all 100 packets, so tester->VF1 is working. Check that the
> +   packet content is not corrupted.
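> +
> +   A minimal scapy sketch for the tester->VF1 send in step 6, run on the
> +   tester (vf1_mac_addr and the interface name are placeholders; use the
> +   values from your setup)::
> +
> +       from scapy.all import Ether, IP, Raw, sendp
> +
> +       vf1_mac_addr = "00:11:22:33:44:11"   # placeholder VF1 MAC
> +       pkt = Ether(dst=vf1_mac_addr)/IP()/Raw('x'*40)
> +       sendp(pkt, iface="ens785f1", count=100)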
> +
> +
> +Test Case4: VEB Switching Inter-VM PF-VF/VF-VF MAC switch Performance
> +=====================================================================
> +
> +Performance testing: repeat Test Case 1 (VF-VF) and Test Case 3 (PF-VF)
> +and check the performance at different packet sizes (64B to 1518B, plus
> +3000B jumbo frames) while sending traffic at 100% rate.
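> +
> +Sending at 100% rate requires a hardware traffic generator; the scapy
> +sketch below only shows how frames of the listed sizes can be built for a
> +functional sanity check before the performance run (the MAC and interface
> +name below follow the prerequisite section and may need adjusting)::
> +
> +    from scapy.all import Ether, IP, Raw, sendp
> +
> +    for size in (64, 128, 256, 512, 1024, 1518, 3000):
> +        # pad the L2 frame up to the target size (excluding the 4-byte CRC)
> +        base = Ether(dst="00:11:22:33:44:11")/IP()
> +        pad = max(0, size - len(base))
> +        sendp(base/Raw('x'*pad), iface="ens785f1", count=100)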
> +
> +Test Case5: Floating VEB inter VF-VF
> +=======================================================
> +
> +Summary: 1 DPDK PF, then create 2 VFs. The PF in the host runs DPDK
> +testpmd, and the VFs run DPDK testpmd in the VMs. VF0 sends traffic with
> +the packet's destination MAC set to VF1's address; check that VF1 receives
> +the packets. Check the inter VF-VF MAC switch when the PF link is down as
> +well as up.
> +
> +1. Start VM1 with VF 05:02.0 and VM2 with VF 05:02.1, see the prerequisite
> +   part.
> +
> +2. In the host, run testpmd with the floating VEB parameter::
> +
> +    ./x86_64-native-linuxapp-gcc/app/testpmd -c 0xf -n 4 \
> +    --socket-mem 1024,1024 -w 05:00.0,enable_floating_veb=1 \
> +    --file-prefix=test1 -- -i
> +    testpmd> port start all
> +    testpmd> show port info all
> +
> +3. In VM1, run testpmd::
> +
> +    ./x86_64-native-linuxapp-gcc/app/testpmd -c 0xf0 -n 4 \
> +    --socket-mem 1024,1024 -w 05:02.0 --file-prefix=test2 -- -i --crc-strip
> +    testpmd>mac_addr add 0 vf1_mac_address
> +    testpmd>set fwd rxonly
> +    testpmd>start
> +    testpmd>show port stats all
> +
> +   In VM2, run testpmd::
> +
> +    ./testpmd -c 0xf00 -n 4 --socket-mem 1024,1024 -w 05:02.1 \
> +    --file-prefix=test3 -- -i --crc-strip --eth-peer=0,vf1_mac_address
> +    testpmd>set fwd txonly
> +    testpmd>start
> +    testpmd>show port stats all
> +
> +4. Check that VF1 gets all the packets and that the packet content is not
> +   corrupted. RX-packets equals TX-packets, though a small number of
> +   RX-errors may be seen. The PF receives no packets.
> +
> +5. Set "testpmd> port stop all" and then "testpmd> start" in step 2, then
> +   run steps 3-4 again. The result should be the same.
> +
> +
> +Test Case6: Floating VEB PF can't get traffic from VF
> +================================================================
> +
> +DPDK PF, then create 1 VF. The PF in the host runs DPDK testpmd. Send
> +traffic from the PF to VF0; VF0 can't receive any packets. Send traffic
> +from VF0 to the PF; the PF can't receive any packets either.
> +
> +
> +1. In host, launch testpmd::
> +
> +   ./testpmd -c 0x3 -n 4 -w 82:00.0,enable_floating_veb=1 -- -i
> +   testpmd> set fwd rxonly
> +   testpmd> port start all
> +   testpmd> start
> +   testpmd> show port stats all
> +
> +2. In VM1, run testpmd::
> +
> +   ./testpmd -c 0x3 -n 4 -- -i --eth-peer=0,pf_mac_addr
> +   testpmd>set fwd txonly
> +   testpmd>start
> +   testpmd>show port stats all
> +
> +3. Check that the PF can not get any packets, so VF1->PF is not working.
> +
> +4. Set "testpmd> port stop all" in step 1, then run the test case again.
> +   The result should be the same.
> +
> +
> +
> +Test Case6-2: Floating VEB VF can't receive traffic from outside world
> +========================================================================
> +
> +DPDK PF, then create 1 VF. Send traffic from the tester to VF1; in
> +floating mode, check that VF1 can't receive traffic from the tester.
> +
> +1. Start VM1 with VF1, see the prerequisite part.
> +
> +2. In host, launch testpmd::
> +
> +   ./testpmd -c 0x3 -n 4 -w 82:00.0,enable_floating_veb=1 -- -i
> +   testpmd> set fwd mac
> +   testpmd> port start all
> +   testpmd> start
> +   testpmd> show port stats all
> +
> +
> +   In VM1, run testpmd::
> +
> +   ./testpmd -c 0x3 -n 4 -- -i
> +   testpmd>show port info all    //get VF_mac_address
> +   testpmd>set fwd rxonly
> +   testpmd>start
> +   testpmd>show port stats all
> +
> +   In tester, run scapy::
> +
> +   packet=Ether(dst="VF_mac_address")/IP()/UDP()/Raw('x'*20)
> +   sendp(packet,iface="enp132s0f0")
> +
> +3. Check that VF1 can not get any packets, so tester->VF1 is not working.
> +4. Set "testpmd> port stop all" in step 2 in the host, then run the test
> +   case again. The result should be the same: VF1 can't receive any
> +   packets.
> +
> +
> +Test Case7: Floating VEB VF can not communicate with legacy VEB VF
> +===================================================================
> +
> +Summary: DPDK PF, then create 4 VFs and 4 VMs. VF1, VF3 and VF4 are in the
> +floating VEB; VF2 is in the legacy VEB. Make the PF link down (the cable
> +can be plugged out); the VFs in the VMs run DPDK testpmd.
> +
> +1. VF1 sends traffic with the packet's destination MAC set to VF2; check
> +   that VF2 can not receive the packets.
> +2. VF1 sends traffic with the packet's destination MAC set to VF3; check
> +   that VF3 can receive the packets.
> +3. VF4 sends traffic with the packet's destination MAC set to VF3; check
> +   that VF3 can receive the packets.
> +4. VF2 sends traffic with the packet's destination MAC set to VF1; check
> +   that VF1 can not receive the packets.
> +
> +Check the inter-VM VF-VF MAC switch when the PF link is down as well as up.
> +
> +Launch PF testpmd::
> +
> +    ./testpmd -c 0x3 -n 4 \
> +    -w "82:00.0,enable_floating_veb=1,floating_veb_list=0;2-3" -- -i
> +
> +1. Start VM1 with VF1, VM2 with VF2, VM3 with VF3 and VM4 with VF4, see
> +   the prerequisite part.
> +
> +2. In the host, run testpmd with floating parameters and make the link
> +   down (VF1, VF3 and VF4 are in the floating VEB, VF2 is in the legacy
> +   VEB)::
> +
> +    ./testpmd -c 0x3 -n 4 \
> +    -w "82:00.0,enable_floating_veb=1,floating_veb_list=0;2-3" -- -i
> +    testpmd> port stop all    //run this after the VFs start testpmd
> +    testpmd> show port info all
> +
> +3. VF1 sends traffic with the packet's destination MAC set to VF2; check
> +   that VF2 can not receive the packets.
> +
> +    In VM2, run testpmd::
> +
> +    ./testpmd -c 0x3 -n 4 -- -i
> +    testpmd>set fwd rxonly
> +    testpmd>mac_addr add 0 vf2_mac_address     //set the vf2_mac_address
> +    testpmd>start
> +    testpmd>show port stats all
> +
> +    In VM1, run testpmd::
> +
> +    ./testpmd -c 0x3 -n 4 -- -i --eth-peer=0,vf2_mac_address
> +    testpmd>set fwd txonly
> +    testpmd>start
> +    testpmd>show port stats all
> +
> +    Check VF2 can not get any packets, so VF1->VF2 is not working.
> +
> +4. VF1 sends traffic with the packet's destination MAC set to VF3; check
> +   that VF3 can receive the packets.
> +
> +    In VM3, run testpmd::
> +
> +    ./testpmd -c 0x3 -n 4 -- -i
> +    testpmd>set fwd rxonly
> +    testpmd>show port info all     //get the vf3_mac_address
> +    testpmd>start
> +    testpmd>show port stats all
> +
> +    In VM1, run testpmd::
> +
> +    ./testpmd -c 0x3 -n 4 -- -i --eth-peer=0,vf3_mac_address
> +    testpmd>set fwd txonly
> +    testpmd>start
> +    testpmd>show port stats all
> +
> +    Check that VF3 gets all the packets and the packet content is not
> +    corrupted, so VF1->VF3 is working.
> +
> +5. VF2 sends traffic with the packet's destination MAC set to VF1; check
> +   that VF1 can not receive the packets.
> +
> +    In VM1, run testpmd::
> +
> +    ./testpmd -c 0x3 -n 4 -- -i
> +    testpmd>set fwd rxonly
> +    testpmd>show port info all     //get the vf1_mac_address
> +    testpmd>start
> +    testpmd>show port stats all
> +
> +    In VM2, run testpmd::
> +
> +    ./testpmd -c 0x3 -n 4 -- -i --eth-peer=0,vf1_mac_address
> +    testpmd>set fwd txonly
> +    testpmd>start
> +    testpmd>show port stats all
> +
> +    Check VF1 can not get any packets, so VF2->VF1 is not working.
> +
> +6. Set "testpmd> port start all" and "testpmd> start" in step 2, then run
> +   steps 3-5 again. The result should be the same.
> +
> +Note: if the PF is launched with::
> +
> +    ./testpmd -c 0x3 -n 4 \
> +    -w "82:00.0,enable_floating_veb=1,floating_veb_list=0;3" -- -i
> +
> +then VF0 and VF3 are in the floating VEB, while VF1 and VF2 are in the
> +legacy VEB. When the PF port is stopped ("port stop all"), traffic between
> +VF0 and VF3 is normal, traffic between VF1 and VF2 is normal, but traffic
> +between VF0 and VF1 is down.
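> +
> +A small illustrative sketch of how such a floating_veb_list value expands
> +to VF indices (reader aid only; it is not the i40e driver's own parser,
> +and the ";" / "-" syntax is assumed from the examples above)::
> +
> +    def expand_floating_veb_list(value):
> +        """Expand e.g. "0;2-3" into the set of floating-VEB VF indices."""
> +        vfs = set()
> +        for item in value.split(';'):
> +            if '-' in item:
> +                lo, hi = item.split('-')
> +                vfs.update(range(int(lo), int(hi) + 1))
> +            else:
> +                vfs.add(int(item))
> +        return vfs
> +
> +    assert expand_floating_veb_list("0;2-3") == {0, 2, 3}   # VF0, VF2, VF3
> +    assert expand_floating_veb_list("0;3") == {0, 3}        # VF0, VF3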
> +
> +
> +Test Case8: PF gets traffic from legacy VEB VF but not from floating VEB VF
> +============================================================================
> +
> +DPDK PF, then create 2 VFs: VF1 is in the floating VEB, VF2 is in the
> +legacy VEB.
> +
> +1. Send traffic from VF1 to the PF, then check the PF will not see any
> +   traffic.
> +2. Send traffic from VF2 to the PF, then check the PF will receive all the
> +   packets.
> +3. Send traffic from the tester to VF1, check VF1 can't receive traffic
> +   from the tester.
> +4. Send traffic from the tester to VF2, check VF2 can receive all the
> +   traffic from the tester.
> +
> +1. In host, launch testpmd (VF1 in the floating VEB, VF2 in the legacy
> +   VEB)::
> +
> +   ./testpmd -c 0x3 -n 4 \
> +   -w 82:00.0,enable_floating_veb=1,floating_veb_list=0 -- -i
> +   testpmd> set fwd rxonly
> +   testpmd> port start all
> +   testpmd> start
> +   testpmd> show port stats all
> +
> +2. In VF1, run testpmd::
> +
> +   ./testpmd -c 0x3 -n 4 -- -i --eth-peer=0,pf_mac_addr
> +   testpmd>set fwd txonly
> +   testpmd>start
> +   testpmd>show port stats all
> +
> +   Check PF can not get any packets, so VF1->PF is not working.
> +
> +3. In VF2, run testpmd::
> +
> +   ./testpmd -c 0x3 -n 4 -- -i --eth-peer=0,pf_mac_addr
> +   testpmd>set fwd txonly
> +   testpmd>start
> +   testpmd>show port stats all
> +
> +   Check PF can get all the packets, so VF2->PF is working.
> +
> +4. Set "testpmd> port stop all" in step 1 in the host, then run steps 2-3
> +   again. The result should be the same.
> +
> +5. In host, launch testpmd (VF1 in the floating VEB, VF2 in the legacy
> +   VEB)::
> +
> +   ./testpmd -c 0x3 -n 4 \
> +   -w 82:00.0,enable_floating_veb=1,floating_veb_list=0 -- -i
> +   testpmd> set fwd mac
> +   testpmd> port start all
> +   testpmd> start
> +   testpmd> show port stats all
> +
> +
> +6. In VF1, run testpmd::
> +
> +   ./testpmd -c 0x3 -n 4 -- -i
> +   testpmd>show port info all    //get VF1_mac_address
> +   testpmd>set fwd rxonly
> +   testpmd>start
> +   testpmd>show port stats all
> +
> +   In tester, run scapy::
> +
> +   packet=Ether(dst="VF1_mac_address")/IP()/UDP()/Raw('x'*20)
> +   sendp(packet,iface="enp132s0f0")
> +
> +   Check VF1 can not get any packets, so tester->VF1 is not working.
> +
> +7. In VF2, run testpmd::
> +
> +   ./testpmd -c 0x3 -n 4 -- -i
> +   testpmd>show port info all    //get VF2_mac_address
> +   testpmd>set fwd rxonly
> +   testpmd>start
> +   testpmd>show port stats all
> +
> +   In tester, run scapy::
> +
> +   packet=Ether(dst="VF2_mac_address")/IP()/UDP()/Raw('x'*20)
> +   sendp(packet,iface="enp132s0f0")
> +
> +   Check VF2 can get all the packets, so tester->VF2 is working.
> +
> +8. Set "testpmd> port stop all" in step 5 in the host, then run steps 6-7
> +   again. VF1 and VF2 cannot receive any packets (because the PF link is
> +   down and the PF can't receive any packets, even VF2 can't receive any
> +   packets).
> +
> --
> 2.5.0


