[dts] [PATCH V1 1/6] pmd_bonded_8023ad: upload test plan

Liu, Yong yong.liu at intel.com
Fri Jul 6 03:30:17 CEST 2018


Thanks, Yufen. Some comments are inline.

> -----Original Message-----
> From: dts [mailto:dts-bounces at dpdk.org] On Behalf Of yufengx.mo at intel.com
> Sent: Wednesday, June 06, 2018 1:38 PM
> To: dts at dpdk.org
> Cc: Mo, YufengX <yufengx.mo at intel.com>
> Subject: [dts] [PATCH V1 1/6] pmd_bonded_8023ad: upload test plan
> 
> From: yufengmx <yufengx.mo at intel.com>
> 
> 
> This test plan is for the pmd bonded 8023ad feature.
> 
> IEEE 802.3ad Dynamic link aggregation. Creates aggregation groups that share
> the same speed and duplex settings. Utilizes all slaves in the active
> aggregator according to the 802.3ad specification. Slave selection for
> outgoing traffic is done according to the transmit hash policy.
> 
> Signed-off-by: yufengmx <yufengx.mo at intel.com>
> ---
>  test_plans/pmd_bonded_8023ad_test_plan.rst | 521 +++++++++++++++++++++++++++++
>  1 file changed, 521 insertions(+)
>  create mode 100644 test_plans/pmd_bonded_8023ad_test_plan.rst
> 
> diff --git a/test_plans/pmd_bonded_8023ad_test_plan.rst b/test_plans/pmd_bonded_8023ad_test_plan.rst
> new file mode 100644
> index 0000000..d871323
> --- /dev/null
> +++ b/test_plans/pmd_bonded_8023ad_test_plan.rst
> @@ -0,0 +1,521 @@
> +.. Copyright (c) <2010-2018>, Intel Corporation
> +   All rights reserved.
> +
> +   Redistribution and use in source and binary forms, with or without
> +   modification, are permitted provided that the following conditions
> +   are met:
> +
> +   - Redistributions of source code must retain the above copyright
> +     notice, this list of conditions and the following disclaimer.
> +
> +   - Redistributions in binary form must reproduce the above copyright
> +     notice, this list of conditions and the following disclaimer in
> +     the documentation and/or other materials provided with the
> +     distribution.
> +
> +   - Neither the name of Intel Corporation nor the names of its
> +     contributors may be used to endorse or promote products derived
> +     from this software without specific prior written permission.
> +
> +   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
> +   "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
> +   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS
> +   FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE
> +   COPYRIGHT OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT,
> +   INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES
> +   (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR
> +   SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION)
> +   HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT,
> +   STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
> +   ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED
> +   OF THE POSSIBILITY OF SUCH DAMAGE.
> +
> +Link Bonding for mode 4 (802.3ad)
> +=================================
> +
> +This test plan mainly tests the link bonding mode 4 (802.3ad) function via
> +testpmd.
> +
> +Link bonding mode 4 is IEEE 802.3ad Dynamic link aggregation. It creates
> +aggregation groups that share the same speed and duplex settings and
> +utilizes all slaves in the active aggregator according to the 802.3ad
> +specification. DPDK implements it based on the 802.1AX specification; it
> +includes the LACP protocol and the Marker protocol. This mode requires a
> +switch that supports IEEE 802.3ad Dynamic link aggregation.
> +
> +note: Slave selection for outgoing traffic is done according to the transmit
> +hash policy, which may be changed from the default simple XOR layer2 policy.
> +
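For reference, testpmd exposes the transmit hash policy through the bonding
command below; using bonded port 2 here is an assumption taken from the test
cases later in this plan:

    testpmd> set bonding balance_xmit_policy 2 l23
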
> +**Requirements**
> +
> +* Bonded ports SHALL maintain statistics similar to those of normal ports.
> +
> +* The slave links SHALL be monitored for link status changes. See also the
> +  concept of up/down time delay to handle situations such as a switch reboot,
> +  where it is possible that its ports report "link up" status before they
> +  become usable.
> +
> +* Upon unbonding, the bonding PMD driver MUST restore the MAC addresses that
> +  the slaves had before they were enslaved.
> +
> +* According to the bond type, when the bond interface is placed in
> +  promiscuous mode it will propagate the setting to the slave devices.
> +
> +* Generally requires that the switch be compatible with IEEE 802.3ad,
> +  e.g. a Cisco 5500 series switch with EtherChannel support (sometimes
> +  called a trunk group).
> +
> +* LACP control packet filtering offload. This is a performance improvement
> +  that uses hardware offloads to improve packet classification.
> +
> +  For technical details, refer to
> +  http://dpdk.org/ml/archives/dev/2017-May/066143.html
> +
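This filtering offload is what testpmd exposes as the dedicated queues
setting exercised later in this plan; a minimal sketch, assuming bonded
port 2:

    testpmd> set bonding lacp dedicated_queues 2 enable
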
> +* Support three 802.3ad aggregation selection logic modes (stable/bandwidth/
> +  count). The Selection Logic selects a compatible Aggregator for a port,
> +  using the port's LAG ID. The Selection Logic may determine that the link
> +  should be operated as a standby link if there are constraints on the
> +  simultaneous attachment of ports that have selected the same Aggregator.
> +
> +  For DPDK technical details, refer to
> +    ``doc/guides/prog_guide/link_bonding_poll_mode_drv_lib.rst``
> +        ``Link Aggregation 802.3AD (Mode 4)``
> +
> +  For Linux technical details, refer to the 802.3ad content of
> +  ``linux_bonding.txt``
> +
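The selection logic mode maps to the testpmd command used later in this plan,
e.g. for bonded port 2:

    testpmd> set bonding agg_mode 2 stable
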
> +Prerequisites for Bonding
> +=========================
> +*. additional hardware requirements:
> +   a switch that supports IEEE 802.3ad Dynamic link aggregation
> +
> +*. hardware configuration:
> +   all linked ports of the switch/DUT should run at the same data rate and
> +   support full-duplex.
> +
> +Functional testing hardware configuration
> +-----------------------------------------
> +  NIC and DUT port requirements:
> +  - Tester: 2 NIC ports
> +  - DUT:    2 NIC ports
> +
> + Connections between tester and DUT ports
> +           Tester                           DUT
> +          .-------.                      .-------.
> +          | port0 | <------------------> | port0 |
> +          | port1 | <------------------> | port1 |
> +          '-------'                      '-------'
> +
> +Performance testing hardware configuration
> +------------------------------------------
> +  NIC/DUT/IXIA/SWITCH port requirements:
> +  - Tester: 5 NIC ports
> +    niantic (2x10G) x 3
> +  - IXIA:   1 IXIA port
> +        10G port
> +  - SWITCH: 4 switch ports
> +    quanta hp_t3048 (10G) / software: ONS CLI 1.0.1.1316-2
> +
> +    Connections between IXIA and DUT ports
> +    -----------------------------------------------------------------
> +                      quanta switch                    DUT
> +                       .---------.             |
> +                       |   S xe1 | <---------> | port0 (niantic) <---> |
> +                       |   W xe2 | <---------> | port1 (niantic) <---> |
> +    port-channel <---> |   I     |             |                       |---> bond_port(port 5)
> +                       |   T xe3 | <---------> | port2 (niantic) <---> |          ^
> +                       |   C xe4 | <---------> | port3 (niantic) <---> |          |
> +                       |   H     |             |                       | fwd
> +                       '---------'             |                       |
> +                                               |                       |
> +                      ixia 10G                 |                       |
> +               |  slot 6 port 5  | ----------> | port4 (niantic) ------+
> +

The bond port is better placed on the DUT side of the diagram.


> +
> +Test Case : basic behavior start/stop
> +=====================================
> +*. check bonded device stop/start actions under frequent operation
> +
> +steps::
> +*. bind two ports
> +
> +    ./usertools/dpdk-devbind.py --bind=igb_uio <pci address 1> <pci address 2>
> +
> +*. boot up testpmd
> +
> +    ./testpmd -c 0xXXXXX -n 4  -- -i --tx-offloads=0xXXXX
> +
> +*. run testpmd command of bonding
> +
> +    testpmd> port stop all
> +    testpmd> create bonded device 4 0
> +    testpmd> add bonding slave 0 2
> +    testpmd> add bonding slave 1 2
> +
> +*. loop over this step 10 times and check that the bonded device still works
> +
> +    testpmd> port stop all
> +    testpmd> port start all
> +    testpmd> start
> +    testpmd> show bonding config 2
> +    testpmd> stop
> +
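After each iteration, ``show bonding config 2`` should report output similar
to the sample below (format taken from later in this plan; actual slave IDs
may differ):

    Bonding mode: 4
    IEEE802.3AD Aggregator Mode: stable
    Slaves (2): [0 1]
    Active Slaves (2): [0 1]
    Primary: [0]
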
> +*. quit testpmd
> +    testpmd> stop
> +    testpmd> quit
> +
> +Test Case : basic behavior mac
> +==============================
> +*. the bonded device's default mac is one of the slaves' macs after a slave
> +   has been added.
> +*. when no slave is attached, the mac should be 00:00:00:00:00:00
> +*. slaves' macs are restored to the MAC addresses they had before being
> +   enslaved.
> +
> +steps::
> +*. bind two ports
> +
> +    ./usertools/dpdk-devbind.py --bind=igb_uio <pci address 1> <pci address 2>
> +
> +*. boot up testpmd
> +
> +    ./testpmd -c 0xXXXXX -n 4  -- -i --tx-offloads=0xXXXX
> +
> +*. run testpmd command of bonding
> +
> +    testpmd> port stop all
> +    testpmd> create bonded device 4 0
> +
> +*. check that the bond device mac is 00:00:00:00:00:00
> +
> +    testpmd> show bonding config 2
> +
> +*. add two slaves to bond port
> +
> +    testpmd> add bonding slave 0 2
> +    testpmd> add bonding slave 1 2
> +    testpmd> port start all
> +
> +*. check that the bond device mac is one of the slaves' macs
> +
> +    testpmd> show bonding config 0
> +    testpmd> show bonding config 1
> +    testpmd> show bonding config 2
> +
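The bonded device's MAC address can also be read directly with testpmd's
port info command, e.g.:

    testpmd> show port info 2
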
> +*. quit testpmd
> +    testpmd> stop
> +    testpmd> quit
> +
> +Test Case : basic behavior link up/down
> +=======================================
> +*. the bonded device should be in down status when it has no slaves
> +*. the bonded device and its slaves should have the same link status
> +*. Active Slaves status should change as the slave status changes
> +
> +steps::
> +*. bind two ports
> +
> +    ./usertools/dpdk-devbind.py --bind=igb_uio <pci address 1> <pci address 2>
> +
> +*. boot up testpmd
> +
> +    ./testpmd -c 0xXXXXX -n 4  -- -i --tx-offloads=0xXXXX
> +
> +*. run testpmd command of bonding
> +
> +    testpmd> port stop all
> +    testpmd> create bonded device 4 0
> +    testpmd> add bonding slave 0 2
> +    testpmd> add bonding slave 1 2
> +
> +*. stop bonded device and check bonded device/slaves link status
> +
> +    testpmd> port stop 2
> +    testpmd> show bonding config 2
> +    testpmd> show bonding config 1
> +    testpmd> show bonding config 0
> +
> +*. start bonded device and check bonded device/slaves link status
> +
> +    testpmd> port start 2
> +    testpmd> show bonding config 2
> +    testpmd> show bonding config 1
> +    testpmd> show bonding config 0
> +
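To confirm that the 802.3ad negotiation is really running while the port is
up, LACPDUs can be captured on the tester side (<tester_iface> is a
placeholder for the tester's interface name; 0x8809 is the slow-protocols
ethertype used by LACP):

    tcpdump -i <tester_iface> -e ether proto 0x8809
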
> +*. quit testpmd
> +    testpmd> stop
> +    testpmd> quit
> +
> +Test Case : basic behavior promisc mode
> +=======================================
> +*. bonded device promiscuous mode should be ``enabled`` by default
> +*. the bonded device and slave devices should have the same promiscuous mode
> +   status
> +
> +steps::
> +*. bind two ports
> +
> +    ./usertools/dpdk-devbind.py --bind=igb_uio <pci address 1> <pci address 2>
> +
> +*. boot up testpmd
> +
> +    ./testpmd -c 0xXXXXX -n 4  -- -i --tx-offloads=0xXXXX
> +
> +*. run testpmd command of bonding
> +
> +    testpmd> port stop all
> +    testpmd> create bonded device 4 0
> +
> +*. check if bonded device promiscuous mode is ``enabled``
> +
> +    testpmd> show bonding config 2
> +
> +*. add two slaves and check if promiscuous mode is ``enabled``
> +
> +    testpmd> add bonding slave 0 2
> +    testpmd> add bonding slave 1 2
> +    testpmd> show bonding config 0
> +    testpmd> show bonding config 1
> +
> +*. disable bonded device promiscuous mode and check promiscuous mode
> +
> +    testpmd> set promisc 2 off
> +    testpmd> show bonding config 2
> +
> +*. enable bonded device promiscuous mode and check promiscuous mode
> +
> +    testpmd> set promisc 2 on
> +    testpmd> show bonding config 2
> +
> +*. check slaves' promiscuous mode
> +
> +    testpmd> show bonding config 0
> +    testpmd> show bonding config 1
> +
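The per-port promiscuous state can be cross-checked with testpmd's port info
command, which prints a "Promiscuous mode:" line, e.g.:

    testpmd> show port info 0
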
> +*. quit testpmd
> +    testpmd> stop
> +    testpmd> quit
> +
> +Test Case : basic behavior agg mode
> +===================================
> +*. stable is the default agg mode
> +*. check 802.3ad aggregation mode configuration;
> +   supported <agg_option> values:
> +   ``count``
> +   ``stable``
> +   ``bandwidth``
> +
Hi Yufen,
Could you move the description of mode configuration here?

> +steps::
> +*. bind two ports
> +
> +    ./usertools/dpdk-devbind.py --bind=igb_uio <pci address 1> <pci address 2>
> +
> +*. boot up testpmd
> +
> +    ./testpmd -c 0xXXXXX -n 4  -- -i --tx-offloads=0xXXXX
> +
> +*. run testpmd command of bonding
> +
> +    testpmd> port stop all
> +    testpmd> create bonded device 4 0
> +    testpmd> add bonding slave 0 2
> +    testpmd> add bonding slave 1 2
> +    testpmd> port start all
> +    testpmd> show bonding config 2
> +    testpmd> set bonding agg_mode 2 <agg_option>
> +
> +*. check that the agg_mode was set successfully
> +
> +    testpmd> show bonding config 2
> +        Bonding mode: 4
> +        IEEE802.3AD Aggregator Mode: <agg_option>
> +        Slaves (2): [0 1]
> +        Active Slaves (2): [0 1]
> +        Primary: [0]
> +
> +*. quit testpmd
> +    testpmd> stop
> +    testpmd> quit
> +
> +Test Case : basic behavior dedicated queues
> +===========================================
> +*. check that 802.3ad dedicated queues are ``disabled`` by default
> +*. check setting the 802.3ad dedicated queues;
> +   supported options:
> +   ``disable``
> +   ``enable``
> +

Yufen,
Could you please add some background introduction for dedicated queue setting?

> +steps::
> +*. bind two ports
> +
> +    ./usertools/dpdk-devbind.py --bind=igb_uio <pci address 1> <pci address 2>
> +
> +*. boot up testpmd
> +
> +    ./testpmd -c 0xXXXXX -n 4  -- -i --tx-offloads=0xXXXX
> +
> +*. run testpmd command of bonding
> +
> +    testpmd> port stop all
> +    testpmd> create bonded device 4 0
> +    testpmd> add bonding slave 0 2
> +    testpmd> add bonding slave 1 2
> +    testpmd> show bonding config 2
> +
> +*. check that dedicated_queues can be disabled successfully
> +
> +    testpmd> set bonding lacp dedicated_queues 2 disable
> +
> +*. check if bonded port can start
> +
> +    testpmd> port start all
> +    testpmd> start
> +
> +*. check that dedicated_queues can be enabled successfully
> +
> +    testpmd> stop
> +    testpmd> port stop all
> +    testpmd> set bonding lacp dedicated_queues 2 enable
> +
> +*. check if bonded port can start
> +
> +    testpmd> port start all
> +    testpmd> start
> +
> +*. quit testpmd
> +    testpmd> stop
> +    testpmd> quit
> +
> +Test Case : command line option
> +===============================
> +*. check command line options:
> +   slave=<0000:xx:00.0>
> +   agg_mode=<bandwidth | stable | count>
> +*. compare the bonding configuration with the expected configuration.
> +
> +steps::
> +*. bind two ports
> +
> +    ./usertools/dpdk-devbind.py --bind=igb_uio <pci address 1> <pci address 2>
> +
> +
> +*. boot up testpmd
> +
> +    ./testpmd -c 0x0f -n 4 \
> +    --vdev 'net_bonding0,slave=0000:xx:00.0,slave=0000:xx:00.1,mode=4,agg_mode=<agg_option>' \
> +    -- -i --port-topology=chained
> +
> +*. run testpmd command of bonding
> +
> +    testpmd> port stop all
> +
> +*. check that the bonded device has been created and the slaves have been
> +   bonded successfully
> +
> +    testpmd> show bonding config 2
> +        Bonding mode: 4
> +        IEEE802.3AD Aggregator Mode: <agg_option>
> +        Slaves (2): [0 1]
> +        Active Slaves (2): [0 1]
> +        Primary: [0]
> +
> +*. check if bonded port can start
> +
> +    testpmd> port start all
> +    testpmd> start
> +
> +*. check that dedicated_queues can be enabled successfully

It is not clear why the dedicated queue step is related to the aggregation mode; please explain it.

> +
> +    testpmd> stop
> +    testpmd> port stop all
> +
> +*. quit testpmd
> +    testpmd> quit
> +
> +
> +Test Case : tx agg mode stable
> +==============================
> +stable: use slaves[default_slave] as index
> +        The active aggregator is chosen by largest aggregate
> +        bandwidth.
> +
> +        Reselection of the active aggregator occurs only when all
> +        slaves of the active aggregator are down or the active
> +        aggregator has no slaves.
> +
> +steps::
> +*. quanta switch setting
> +    vlan 100
> +    port-channel 4000
> +    slave interface: xe1/xe2/xe3/xe4
> +
> +*. dut ports 1-4 link to quanta switch xe1/xe2/xe3/xe4
> +
> +*. dut port 5 link to ixia
> +
> +*. bind five ports
> +
> +    ./usertools/dpdk-devbind.py --bind=igb_uio <pci address 1> <pci address 2> \
> +    <pci address 3> <pci address 4> <pci address 5>
> +
> +*. boot up testpmd with 4 slave ports
> +
> +    ./testpmd -c 0x0f -n 4 \
> +    --vdev 'net_bonding0,slave=0000:xx:00.0,slave=0000:xx:00.1,slave=0000:xx:00.2,slave=0000:xx:00.3,mode=4,agg_mode=stable' \
> +    -- -i --port-topology=chained
> +
> +*. packet setting, udp packet (in folder "stream_abcd")
> +
> +    Ether(dst=nutmac, src=srcmac)/IP(dst=destip, src=srcip, len=46)/UDP(sport=srcport, dport=destport)/Raw(load='P'*26)
> +
> +    create flow_a/flow_b/flow_c/flow_d stream flows
> +
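A minimal Scapy sketch of the four stream flows; the concrete addresses below
are hypothetical examples, chosen so that the varying source IP spreads the
flows across the slaves:

    from scapy.all import Ether, IP, UDP, Raw

    nutmac = "90:e2:ba:4a:54:81"   # hypothetical DUT port mac
    srcmac = "00:00:00:00:01:00"   # hypothetical source mac
    # one flow per source IP (flow_a/flow_b/flow_c/flow_d)
    flows = [Ether(dst=nutmac, src=srcmac) /
             IP(dst="10.10.10.10", src="10.10.10.%d" % i, len=46) /
             UDP(sport=1024, dport=1024) /
             Raw(load='P' * 26)
             for i in (1, 2, 3, 4)]
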
> +*. start ixia traffic
> +
> +    ixia_stream=(flow_a+flow_b+flow_c+flow_d)
> +    10G line rate run at percentPacketRate(100%)
> +    traffic lasting 250 seconds or so
> +
> +*. run testpmd command of bonding; ixia traffic is forwarded from port 4 to
> +   bond port 5
> +
> +    testpmd> port stop all
> +    testpmd> set bonding lacp dedicated_queues 5 enable
> +    testpmd> set portlist 4,5
> +    testpmd> port start all
> +    testpmd> start
> +
> +*. keep ixia traffic running for 5 minutes
> +
> +*. stop ixia and get ixia statistcis
> +
> +*. stop testpmd and get ixia statistcis
> +
> +    testpmd> stop
> +    testpmd> port stop all
> +    testpmd> show port stats all
> +
> +*. get quanta switch xe1/xe2/xe3/xe4 statistcis
> +
> +*. compare switch statistcis with testpmd statistcis
> +

Some typos here: "statistcis" should be "statistics".

> +Test Case : tx agg mode count
> +=============================
> +count:  use agg_count amount as index
> +        The active aggregator is chosen by the largest number of
> +        ports (slaves).  Reselection occurs as described under the
> +        "bandwidth/count mode aggregator reselection".
> +
> +steps are the same as ``tx agg mode stable``; change the testpmd command to
> +
> +    ./testpmd -c 0x0f -n 4 \
> +    --vdev 'net_bonding0,slave=0000:xx:00.0,slave=0000:xx:00.1,slave=0000:xx:00.2,slave=0000:xx:00.3,mode=4,agg_mode=count' \
> +    -- -i --port-topology=chained
> +
> +Test Case : tx agg mode bandwidth
> +=================================
> +bandwidth: use link_speed amount as index
> +        The active aggregator is chosen by largest aggregate
> +        bandwidth. Reselection occurs as described under the
> +        "bandwidth/count mode aggregator reselection".
> +
> +steps are the same as ``tx agg mode stable``; change the testpmd command to
> +
> +    ./testpmd -c 0x0f -n 4 \
> +    --vdev 'net_bonding0,slave=0000:xx:00.0,slave=0000:xx:00.1,slave=0000:xx:00.2,slave=0000:xx:00.3,mode=4,agg_mode=bandwidth' \
> +    -- -i --port-topology=chained
> \ No newline at end of file
> --
> 1.9.3


