[dpdk-dev] [PATCH RFC 1/2] net/i40e: allow bulk alloc for the max size desc ring

Ananyev, Konstantin konstantin.ananyev at intel.com
Tue Nov 29 13:50:22 CET 2016


Hi Ilya,

> Ping.
> 
> Best regards, Ilya Maximets.
> 
> On 19.10.2016 17:07, Ilya Maximets wrote:
> > The only reason bulk alloc is disabled for rings with more
> > than (I40E_MAX_RING_DESC - RTE_PMD_I40E_RX_MAX_BURST)
> > descriptors is the possible out-of-bound access to the DMA
> > memory. But this is an artificial limit and can easily be
> > avoided by allocating RTE_PMD_I40E_RX_MAX_BURST more
> > descriptors in memory. This will not interfere with the HW
> > and, as long as all ring memory is zeroized, the Rx
> > functions will work correctly.
> >
> > This change makes it possible to use the vectorized Rx
> > functions with 4096 descriptors in the Rx ring, which is
> > important for achieving a zero packet drop rate in
> > high-load installations.
> >
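To illustrate why the zeroed padding is enough, here is a minimal self-contained
sketch. The macros and the struct are made-up stand-ins for I40E_MAX_RING_DESC,
RTE_PMD_I40E_RX_MAX_BURST and union i40e_rx_desc, and the loop only mimics the
DD-bit scan done by the bulk/vector Rx path:

#include <stdint.h>
#include <stdio.h>
#include <string.h>

/* Stand-ins for the real driver definitions, values assumed for illustration. */
#define MAX_RING_DESC	4096	/* plays the role of I40E_MAX_RING_DESC */
#define RX_MAX_BURST	32	/* plays the role of RTE_PMD_I40E_RX_MAX_BURST */

struct desc { uint64_t qword[2]; };	/* simplified union i40e_rx_desc stand-in */

int main(void)
{
	/*
	 * Size the descriptor ring with RX_MAX_BURST spare entries past the
	 * ring end.  The scan below may index up to a full burst past the
	 * last valid descriptor without wrapping; the spare entries are
	 * zeroed, so their "DD" bit is clear, the scan stops there, and the
	 * HW never uses them because tail never points past the real ring.
	 */
	static struct desc ring[MAX_RING_DESC + RX_MAX_BURST];
	unsigned int tail = MAX_RING_DESC - 1;	/* worst-case scan start */
	unsigned int i, nb_ready = 0;

	memset(ring, 0, sizeof(ring));

	for (i = 0; i < RX_MAX_BURST; i++) {
		/* tail + i can exceed MAX_RING_DESC - 1 but stays in bounds. */
		if (!(ring[tail + i].qword[1] & 1))	/* "DD" bit clear: stop */
			break;
		nb_ready++;
	}

	printf("ring entries: %zu, descriptors ready: %u\n",
	       sizeof(ring) / sizeof(ring[0]), nb_ready);
	return 0;
}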
> > Signed-off-by: Ilya Maximets <i.maximets at samsung.com>
> > ---
> >  drivers/net/i40e/i40e_rxtx.c | 24 +++++++++++++-----------
> >  1 file changed, 13 insertions(+), 11 deletions(-)
> >
> > diff --git a/drivers/net/i40e/i40e_rxtx.c b/drivers/net/i40e/i40e_rxtx.c
> > index 7ae7d9f..1f76691 100644
> > --- a/drivers/net/i40e/i40e_rxtx.c
> > +++ b/drivers/net/i40e/i40e_rxtx.c
> > @@ -409,15 +409,6 @@ check_rx_burst_bulk_alloc_preconditions(__rte_unused struct i40e_rx_queue *rxq)
> >  			     "rxq->rx_free_thresh=%d",
> >  			     rxq->nb_rx_desc, rxq->rx_free_thresh);
> >  		ret = -EINVAL;
> > -	} else if (!(rxq->nb_rx_desc < (I40E_MAX_RING_DESC -
> > -				RTE_PMD_I40E_RX_MAX_BURST))) {
> > -		PMD_INIT_LOG(DEBUG, "Rx Burst Bulk Alloc Preconditions: "
> > -			     "rxq->nb_rx_desc=%d, "
> > -			     "I40E_MAX_RING_DESC=%d, "
> > -			     "RTE_PMD_I40E_RX_MAX_BURST=%d",
> > -			     rxq->nb_rx_desc, I40E_MAX_RING_DESC,
> > -			     RTE_PMD_I40E_RX_MAX_BURST);
> > -		ret = -EINVAL;
> >  	}
> >  #else
> >  	ret = -EINVAL;
> > @@ -1698,8 +1689,19 @@ i40e_dev_rx_queue_setup(struct rte_eth_dev *dev,
> >  	rxq->rx_deferred_start = rx_conf->rx_deferred_start;
> >
> >  	/* Allocate the maximun number of RX ring hardware descriptor. */
> > -	ring_size = sizeof(union i40e_rx_desc) * I40E_MAX_RING_DESC;
> > -	ring_size = RTE_ALIGN(ring_size, I40E_DMA_MEM_ALIGN);
> > +	len = I40E_MAX_RING_DESC;
> > +
> > +#ifdef RTE_LIBRTE_I40E_RX_ALLOW_BULK_ALLOC
> > +	/**
> > +	 * Allocating a little more memory because vectorized/bulk_alloc Rx
> > +	 * functions don't check boundaries each time.
> > +	 */
> > +	len += RTE_PMD_I40E_RX_MAX_BURST;
> > +#endif
> > +

Looks good to me.
One question, though: do we really need the '#ifdef RTE_LIBRTE_I40E_RX_ALLOW_BULK_ALLOC' here?
Why not just remove this ifdef and always allocate the extra descriptors?
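An untested sketch of what I mean, reusing only identifiers already present in the patch:

	/* Always reserve the extra descriptors: the spare entries stay zeroed
	 * and are never handed to the HW, so the non-bulk paths should not be
	 * affected either.
	 */
	len = I40E_MAX_RING_DESC + RTE_PMD_I40E_RX_MAX_BURST;
	ring_size = RTE_ALIGN(len * sizeof(union i40e_rx_desc),
			      I40E_DMA_MEM_ALIGN);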
Konstantin

> > +	ring_size = RTE_ALIGN(len * sizeof(union i40e_rx_desc),
> > +			      I40E_DMA_MEM_ALIGN);
> > +
> >  	rz = rte_eth_dma_zone_reserve(dev, "rx_ring", queue_idx,
> >  			      ring_size, I40E_RING_BASE_ALIGN, socket_id);
> >  	if (!rz) {
> >

