[dpdk-dev] [PATCH RFC 2/2] net/ixgbe: allow bulk alloc for the max size desc ring

Ilya Maximets i.maximets at samsung.com
Tue Nov 29 11:59:24 CET 2016


Ping.

Best regards, Ilya Maximets.

On 19.10.2016 17:07, Ilya Maximets wrote:
> The only reason why bulk alloc is disabled for rings with
> more than (IXGBE_MAX_RING_DESC - RTE_PMD_IXGBE_RX_MAX_BURST)
> descriptors is the possible out-of-bounds access to the DMA
> memory. But this is an artificial limit that can easily be
> avoided by allocating RTE_PMD_IXGBE_RX_MAX_BURST extra
> descriptors in memory. This does not interfere with the HW
> and, as long as all of the rings' memory is zeroed, the Rx
> functions will work correctly.
> 
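To make the trick concrete, here is a minimal sketch of the idea (hypothetical code, not the driver's exact implementation; rte_eth_dma_zone_reserve() and IXGBE_ALIGN are the allocation helper and alignment used by the in-tree Rx queue setup path): size the descriptor DMA zone for the worst case, so that a bulk-alloc scan looking up to RTE_PMD_IXGBE_RX_MAX_BURST entries past the ring end stays inside allocated, zeroed memory in which the DD bit is never set:

    /*
     * Over-allocate and zero the descriptor ring so that reads past
     * the last valid descriptor hit zeroed memory (DD bit clear)
     * instead of going out of bounds.
     */
    size_t size = (nb_rx_desc + RTE_PMD_IXGBE_RX_MAX_BURST) *
                  sizeof(union ixgbe_adv_rx_desc);
    const struct rte_memzone *mz;

    mz = rte_eth_dma_zone_reserve(dev, "rx_ring", queue_idx, size,
                                  IXGBE_ALIGN, socket_id);
    if (mz == NULL)
        return -ENOMEM;
    memset(mz->addr, 0, size); /* HW is only told about nb_rx_desc */
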
> This change allows the vectorized Rx functions to be used
> with 4096 descriptors in the Rx ring, which is important for
> achieving a zero packet drop rate in high-load installations.
> 
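As a usage illustration (hypothetical application code, not part of the patch; port_id is assumed to be an already-configured ixgbe port), a maximum-size ring that previously fell back to the scalar Rx path can be requested as usual:

    struct rte_mempool *mb_pool;
    int ret;

    /* Mbuf pool sized generously for a 4096-descriptor ring. */
    mb_pool = rte_pktmbuf_pool_create("rx_pool", 8192, 256, 0,
                                      RTE_MBUF_DEFAULT_BUF_SIZE,
                                      rte_socket_id());

    /* 4096 == IXGBE_MAX_RING_DESC; with this patch the bulk alloc
     * and vector Rx preconditions can still be satisfied. */
    ret = rte_eth_rx_queue_setup(port_id, 0, 4096, rte_socket_id(),
                                 NULL, mb_pool);
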
> Signed-off-by: Ilya Maximets <i.maximets at samsung.com>
> ---
>  drivers/net/ixgbe/ixgbe_rxtx.c | 17 +----------------
>  drivers/net/ixgbe/ixgbe_rxtx.h |  2 +-
>  2 files changed, 2 insertions(+), 17 deletions(-)
> 
> diff --git a/drivers/net/ixgbe/ixgbe_rxtx.c b/drivers/net/ixgbe/ixgbe_rxtx.c
> index 2ce8234..07c04c3 100644
> --- a/drivers/net/ixgbe/ixgbe_rxtx.c
> +++ b/drivers/net/ixgbe/ixgbe_rxtx.c
> @@ -2585,7 +2585,6 @@ check_rx_burst_bulk_alloc_preconditions(struct ixgbe_rx_queue *rxq)
>  	 *   rxq->rx_free_thresh >= RTE_PMD_IXGBE_RX_MAX_BURST
>  	 *   rxq->rx_free_thresh < rxq->nb_rx_desc
>  	 *   (rxq->nb_rx_desc % rxq->rx_free_thresh) == 0
> -	 *   rxq->nb_rx_desc<(IXGBE_MAX_RING_DESC-RTE_PMD_IXGBE_RX_MAX_BURST)
>  	 * Scattered packets are not supported.  This should be checked
>  	 * outside of this function.
>  	 */
> @@ -2607,15 +2606,6 @@ check_rx_burst_bulk_alloc_preconditions(struct ixgbe_rx_queue *rxq)
>  			     "rxq->rx_free_thresh=%d",
>  			     rxq->nb_rx_desc, rxq->rx_free_thresh);
>  		ret = -EINVAL;
> -	} else if (!(rxq->nb_rx_desc <
> -	       (IXGBE_MAX_RING_DESC - RTE_PMD_IXGBE_RX_MAX_BURST))) {
> -		PMD_INIT_LOG(DEBUG, "Rx Burst Bulk Alloc Preconditions: "
> -			     "rxq->nb_rx_desc=%d, "
> -			     "IXGBE_MAX_RING_DESC=%d, "
> -			     "RTE_PMD_IXGBE_RX_MAX_BURST=%d",
> -			     rxq->nb_rx_desc, IXGBE_MAX_RING_DESC,
> -			     RTE_PMD_IXGBE_RX_MAX_BURST);
> -		ret = -EINVAL;
>  	}
>  
>  	return ret;
> @@ -2632,12 +2622,7 @@ ixgbe_reset_rx_queue(struct ixgbe_adapter *adapter, struct ixgbe_rx_queue *rxq)
>  	/*
>  	 * By default, the Rx queue setup function allocates enough memory for
>  	 * IXGBE_MAX_RING_DESC.  The Rx Burst bulk allocation function requires
> -	 * extra memory at the end of the descriptor ring to be zero'd out. A
> -	 * pre-condition for using the Rx burst bulk alloc function is that the
> -	 * number of descriptors is less than or equal to
> -	 * (IXGBE_MAX_RING_DESC - RTE_PMD_IXGBE_RX_MAX_BURST). Check all the
> -	 * constraints here to see if we need to zero out memory after the end
> -	 * of the H/W descriptor ring.
> +	 * extra memory at the end of the descriptor ring to be zero'd out.
>  	 */
>  	if (adapter->rx_bulk_alloc_allowed)
>  		/* zero out extra memory */
> diff --git a/drivers/net/ixgbe/ixgbe_rxtx.h b/drivers/net/ixgbe/ixgbe_rxtx.h
> index 2608b36..1abc6f2 100644
> --- a/drivers/net/ixgbe/ixgbe_rxtx.h
> +++ b/drivers/net/ixgbe/ixgbe_rxtx.h
> @@ -67,7 +67,7 @@
>  #define RTE_IXGBE_MAX_RX_BURST          RTE_IXGBE_RXQ_REARM_THRESH
>  #endif
>  
> -#define RX_RING_SZ ((IXGBE_MAX_RING_DESC + RTE_IXGBE_DESCS_PER_LOOP - 1) * \
> +#define RX_RING_SZ ((IXGBE_MAX_RING_DESC + RTE_PMD_IXGBE_RX_MAX_BURST) * \
>  		    sizeof(union ixgbe_adv_rx_desc))
>  
>  #ifdef RTE_PMD_PACKET_PREFETCH
> 
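For reference, assuming the constant values in the tree at the time (IXGBE_MAX_RING_DESC = 4096, RTE_PMD_IXGBE_RX_MAX_BURST = 32, RTE_IXGBE_DESCS_PER_LOOP = 4, 16-byte union ixgbe_adv_rx_desc), the redefined RX_RING_SZ works out to

    (4096 + 32) * 16 = 66048 bytes per Rx ring

versus (4096 + 4 - 1) * 16 = 65584 bytes with the old RTE_IXGBE_DESCS_PER_LOOP-based padding, i.e. a 464-byte increase per ring.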

