[dpdk-dev] [PATCH v3 0/5] consistent PMD batching behaviour

Yao, Lei A lei.a.yao at intel.com
Fri Mar 31 09:00:01 CEST 2017



> -----Original Message-----
> From: dev [mailto:dev-bounces at dpdk.org] On Behalf Of Ferruh Yigit
> Sent: Thursday, March 30, 2017 8:55 PM
> To: Yang, Zhiyong <zhiyong.yang at intel.com>; dev at dpdk.org
> Cc: Ananyev, Konstantin <konstantin.ananyev at intel.com>; Richardson,
> Bruce <bruce.richardson at intel.com>
> Subject: Re: [dpdk-dev] [PATCH v3 0/5] consistent PMD batching behaviour
> 
> On 3/29/2017 8:16 AM, Zhiyong Yang wrote:
> > The rte_eth_tx_burst() function in the file rte_ethdev.h is invoked by
> > DPDK applications to transmit output packets on an output queue, as
> > follows.
> >
> > static inline uint16_t
> > rte_eth_tx_burst(uint8_t port_id, uint16_t queue_id,
> >                  struct rte_mbuf **tx_pkts, uint16_t nb_pkts);
> >
> > Note: the fourth parameter, nb_pkts, is the number of packets to transmit.
> >
> > The rte_eth_tx_burst() function returns the number of packets it actually
> > sent. Most PMDs support the policy "send as many packets as possible" at
> > the PMD level, but a few PMDs impose an artificial limit on the number of
> > packets sent successfully. For example, the vhost TX burst size is limited
> > to 32 packets. Some rx_burst functions have a similar problem. The main
> > benefit of fixing this is consistent batching behaviour, which simplifies
> > application logic and avoids misuse at the application level; since there
> > is already a unified rte_eth_tx/rx_burst interface, there is no reason for
> > inconsistent behaviours.
> > This patchset fixes it via adding wrapper function at the PMD level.
> >
> > Changes in V3:
> >
> > 1. Updated release_17_05 in patch 5/5.
> > 2. Rebased on top of the next-net tree; i40e_rxtx_vec_altivec.c is updated
> > in patch 2/5.
> > 3. Fixed one checkpatch issue in patch 2/5.
> >
> > Changes in V2:
> >
> > 1. Renamed the ixgbe, i40e and fm10k vector functions XXX_xmit_pkts_vec
> > to XXX_xmit_fixed_burst_vec; the new wrapper functions use the original
> > name XXX_xmit_pkts_vec, per Bruce's suggestion.
> > 2. Simplified the code to avoid the if or if/else.
> >
> > Zhiyong Yang (5):
> >   net/fm10k: remove limit of fm10k_xmit_pkts_vec burst size
> >   net/i40e: remove limit of i40e_xmit_pkts_vec burst size
> >   net/ixgbe: remove limit of ixgbe_xmit_pkts_vec burst size
> >   net/vhost: remove limit of vhost TX burst size
> >   net/vhost: remove limit of vhost RX burst size
> 
> Series applied to dpdk-next-net/master, thanks.
> 
> (doc patch exported into separate patch)
> 
> This is a PMD update on the fast path. For the affected PMDs, can you
> please confirm the performance after testing?
Hi,

I have compared the vhost PVP performance with and without Zhiyong's
patch. There is almost no performance drop:
Mergeable path: -0.2%
Normal path: -0.73%
Vector path: -0.55%

Test bench:
Ubuntu 16.04
Kernel: 4.4.0
gcc: 5.4.0

BRs
Lei
