[dpdk-dev] 18.11.2 (LTS) patches review and test

Ian Stokes ian.stokes at intel.com
Thu May 30 10:15:35 CEST 2019


On 5/21/2019 3:01 PM, Kevin Traynor wrote:
> Hi all,
> 
> Here is a list of patches targeted for LTS release 18.11.2.
> 
> The planned date for the final release is 11th June.
> 
> Please help with testing and validation of your use cases and report
> any issues/results. For the final release I will update the release
> notes with fixes and reported validations.
> 
> A release candidate tarball can be found at:
> 
>      https://dpdk.org/browse/dpdk-stable/tag/?id=v18.11.2-rc1
> 
> These patches are located at branch 18.11 of dpdk-stable repo:
>      https://dpdk.org/browse/dpdk-stable/
> 
> Thanks.
> 
> Kevin Traynor
> 

Hi Kevin,

I've validated with the current head of OVS master and with OVS 2.11.1, 
using VSPERF. Tested with i40e (X710), i40eVF, ixgbe (82599ES), ixgbeVF, 
igb (I350) and igbVF devices.

The following tests were conducted and passed:

* vswitch_p2p_tput: vSwitch - configure switch and execute RFC2544 
throughput test.
* vswitch_p2p_cont: vSwitch - configure switch and execute RFC2544 
continuous stream test.
* vswitch_pvp_tput: vSwitch - configure switch, VNF and execute RFC2544 
throughput test.
* vswitch_pvp_cont: vSwitch - configure switch, VNF and execute RFC2544 
continuous stream test.
* ovsdpdk_hotplug_attach: Ensure a port can be added successfully after 
binding a device to igb_uio once ovs-vswitchd has been launched (see the 
hotplug sketch after this list).
* ovsdpdk_mq_p2p_rxqs: Set up rxqs on a NIC port (see the multi-queue 
sketch after this list).
* ovsdpdk_mq_pvp_rxqs: Set up rxqs on a vhost-user port.
* ovsdpdk_mq_pvp_rxqs_linux_bridge: Confirm traffic is received over 
vhost rxqs with a Linux virtio device in the guest.
* ovsdpdk_mq_pvp_rxqs_testpmd: Confirm traffic is received over vhost 
rxqs with a DPDK device in the guest.
* ovsdpdk_vhostuser_client: Test vhost-user client mode (see the 
vhost-user client sketch after this list).
* ovsdpdk_vhostuser_client_reconnect: Test the vhost-user client mode 
reconnect feature.
* ovsdpdk_vhostuser_server: Test vhost-user server mode.
* ovsdpdk_vhostuser_sock_dir: Verify functionality of the vhost-sock-dir 
option.
* ovsdpdk_vdev_add_null_pmd: Test addition of port using the null DPDK 
PMD driver.
* ovsdpdk_vdev_del_null_pmd: Test deletion of port using the null DPDK 
PMD driver.
* ovsdpdk_vdev_add_af_packet_pmd: Test addition of port using the 
af_packet DPDK PMD driver.
* ovsdpdk_vdev_del_af_packet_pmd: Test deletion of port using the 
af_packet DPDK PMD driver.
* ovsdpdk_numa: Test vhost-user NUMA support. Vhost-user PMD threads 
should migrate to the NUMA node on which QEMU is executed.
* ovsdpdk_jumbo_p2p: Ensure that jumbo frames are received, processed 
and forwarded correctly by DPDK physical ports.
* ovsdpdk_jumbo_pvp: Ensure that jumbo frames are received, processed 
and forwarded correctly by DPDK vhost-user ports.
* ovsdpdk_jumbo_p2p_upper_bound: Ensure that jumbo frames above the 
configured Rx port's MTU are not accepted.
* ovsdpdk_jumbo_mtu_upper_bound_vport: Verify that the upper bound limit 
is enforced for OvS DPDK vhost-user ports.
* ovsdpdk_rate_p2p: Ensure that when a user creates a rate-limiting 
physical interface, traffic is limited to the specified policer rate in 
a p2p setup.
* ovsdpdk_rate_pvp: Ensure that when a user creates a rate-limiting 
vhost-user interface, traffic is limited to the specified policer rate 
in a pvp setup.
* ovsdpdk_qos_p2p: In a p2p setup, ensure that when a QoS egress policer 
is created, traffic is limited to the specified rate (see the policer 
sketch after this list).
* ovsdpdk_qos_pvp: In a pvp setup, ensure that when a QoS egress policer 
is created, traffic is limited to the specified rate.
* phy2phy_scalability: LTD.Scalability.Flows.RFC2544.0PacketLoss
* phy2phy_scalability_cont: Phy2Phy Scalability Continuous Stream
* pvp_cont: PVP Continuous Stream
* pvvp_cont: PVVP Continuous Stream
* pvpv_cont: Two VMs in parallel with Continuous Stream
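
For reference, the hotplug case boils down to the sequence below (a 
minimal sketch; the bridge name, port name and PCI address are 
placeholders, adjust for your setup):

     # bind the NIC to igb_uio while ovs-vswitchd is already running
     dpdk-devbind.py --bind=igb_uio 0000:05:00.0
     # then attach it to the bridge as a DPDK port
     ovs-vsctl add-port br0 dpdk0 -- set Interface dpdk0 type=dpdk \
         options:dpdk-devargs=0000:05:00.0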
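
The multi-queue rxq tests use configuration along these lines (sketch 
only; the queue count and PMD mask are illustrative values, not the 
exact values from the runs):

     # request 4 rx queues on the physical DPDK port
     ovs-vsctl set Interface dpdk0 options:n_rxq=4
     # give the PMD threads enough cores to service the queues
     ovs-vsctl set Open_vSwitch . other_config:pmd-cpu-mask=0x3C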
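
The vhost-user client mode tests create the port roughly as follows 
(sketch; the port name and socket path are placeholders), with QEMU 
acting as the socket server:

     ovs-vsctl add-port br0 vhost-client-0 -- set Interface \
         vhost-client-0 type=dpdkvhostuserclient \
         options:vhost-server-path=/tmp/vhost-sock0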
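
The rate limiting and QoS tests exercise ingress policing and the 
egress-policer QoS type, roughly as below (sketch; the rates and burst 
sizes are illustrative, not the values used in the runs):

     # ingress policing on a vhost-user port (rate in kbps, burst in kb)
     ovs-vsctl set Interface vhost-user0 ingress_policing_rate=10000 \
         ingress_policing_burst=1000
     # egress policer on a physical port (cir in bytes/sec, cbs in bytes)
     ovs-vsctl set port dpdk0 qos=@newqos -- --id=@newqos create qos \
         type=egress-policer other-config:cir=46000000 \
         other-config:cbs=2048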

Regards
Ian

