[dpdk-dev] [PATCH 5/6] net/vmxnet3: receive queue lockup and memleak

Charles (Chas) Williams ciwillia at brocade.com
Thu Jun 1 14:24:16 CEST 2017


While looking at another issue, I noticed that one of the issues fixed by
this patch has already been fixed in the last DPDK release by:

	commit 8fce14b789aecdb4345a62f6980e7b6e7f4ba245
	Author: Stefan Puiu <stefan.puiu at gmail.com>
	Date:   Mon Dec 19 11:40:53 2016 +0200

	    net/vmxnet3: fix Rx deadlock

	    Our use case is that we have an app that needs to keep mbufs around
	    for a while. We've seen cases where calling vmxnet3_post_rx_bufs() from
	    vmxnet3_recv_pkts() fails to add any mbufs to any RX
	    descriptors (where it returns -err). Since there are no mbufs that the
	    virtual hardware can use, no packets will be received after this; the
	    driver won't refill the mbuf after this so it gets stuck in this
	    state. I call this a deadlock for lack of a better term - the virtual
	    HW waits for free mbufs, while the app waits for the hardware to
	    notify it of data (by flipping the generation bit on the used Rx
	    descriptors). Note that after this, the app can't recover.
	...
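
That fix takes the same approach as the second hunk below: retry the
refill at the end of every receive call so that the ring can recover
once mbufs become available again. Roughly, as a sketch of the pattern
(using the driver's identifiers, not the exact upstream code):

	/* At the end of vmxnet3_recv_pkts(): always try to repost rx
	 * buffers, even if no packets were delivered on this call. If
	 * the mempool was empty on an earlier call, this retry lets
	 * the rings refill instead of staying empty forever. */
	for (ring_idx = 0; ring_idx < VMXNET3_RX_CMDRING_SIZE; ring_idx++) {
		vmxnet3_post_rx_bufs(rxq, ring_idx);
		if (unlikely(rxq->shared->ctrl.updateRxProd))
			VMXNET3_WRITE_BAR0_REG(hw,
				rxprod_reg[ring_idx] +
					(rxq->queue_id * VMXNET3_REG_ALIGN),
				rxq->cmd_ring[ring_idx].next2fill);
	}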

The mbuf leak on the receive error path still exists, though. That fix
can be split out into a new commit.
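
For reference, the remaining leak is on the error-completion path of
vmxnet3_recv_pkts(): the current segment gets freed, but a partially
assembled multi-segment packet headed by rxq->start_seg does not. The
missing cleanup is small (a sketch of the first hunk below; note that
rte_pktmbuf_free() releases the whole segment chain while
rte_pktmbuf_free_seg() releases only the single segment):

	/* Error completion: drop the current segment ... */
	rte_pktmbuf_free_seg(rxm);

	/* ... and also drop any partially assembled packet, otherwise
	 * the chain headed by rxq->start_seg leaks whenever an error
	 * lands in the middle of a multi-segment packet. */
	if (rxq->start_seg != NULL) {
		rte_pktmbuf_free(rxq->start_seg);
		rxq->start_seg = NULL;
	}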

On 05/19/2017 01:55 PM, Charles (Chas) Williams wrote:
> From: Mandeep Rohilla <mrohilla at brocade.com>
>
> The receive queue can lockup if all the rx descriptors have lost
> their mbufs and temporarily there are no mbufs available. This
> can happen if there is an mbuf leak or if the application holds
> on to the mbuf for a while.
>
> This also addresses an mbuf leak in an error condition during
> packet receive.
>
> Signed-off-by: Mandeep Rohilla <mrohilla at brocade.com>
> ---
>  drivers/net/vmxnet3/vmxnet3_rxtx.c | 19 +++++++++++++++++++
>  1 file changed, 19 insertions(+)
>
> diff --git a/drivers/net/vmxnet3/vmxnet3_rxtx.c b/drivers/net/vmxnet3/vmxnet3_rxtx.c
> index d8713a1..d21679d 100644
> --- a/drivers/net/vmxnet3/vmxnet3_rxtx.c
> +++ b/drivers/net/vmxnet3/vmxnet3_rxtx.c
> @@ -731,6 +731,7 @@ vmxnet3_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts, uint16_t nb_pkts)
>  	uint16_t nb_rx;
>  	uint32_t nb_rxd, idx;
>  	uint8_t ring_idx;
> +	uint8_t i;
>  	vmxnet3_rx_queue_t *rxq;
>  	Vmxnet3_RxCompDesc *rcd;
>  	vmxnet3_buf_info_t *rbi;
> @@ -800,6 +801,12 @@ vmxnet3_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts, uint16_t nb_pkts)
>  				   (int)(rcd - (struct Vmxnet3_RxCompDesc *)
>  					 rxq->comp_ring.base), rcd->rxdIdx);
>  			rte_pktmbuf_free_seg(rxm);
> +			if (rxq->start_seg) {
> +				struct rte_mbuf *start = rxq->start_seg;
> +
> +				rxq->start_seg = NULL;
> +				rte_pktmbuf_free(start);
> +			}
>  			goto rcd_done;
>  		}
>
> @@ -893,6 +900,18 @@ vmxnet3_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts, uint16_t nb_pkts)
>  		}
>  	}
>
> +	/*
> +	 * Try to replenish the rx descriptors with the new mbufs
> +	 */
> +	for (i = 0; i < VMXNET3_RX_CMDRING_SIZE; i++) {
> +		vmxnet3_post_rx_bufs(rxq, i);
> +		if (unlikely(rxq->shared->ctrl.updateRxProd)) {
> +			VMXNET3_WRITE_BAR0_REG(hw,
> +				rxprod_reg[i] +
> +					(rxq->queue_id * VMXNET3_REG_ALIGN),
> +				rxq->cmd_ring[i].next2fill);
> +		}
> +	}
>  	return nb_rx;
>  }
>
>

