[PATCH 2/2] net/af_xdp: Refactor af_xdp_tx_zc()
Ariel Otilibili
ariel.otilibili at 6wind.com
Thu Jan 16 23:20:06 CET 2025
Hi Stephen,
On Thu, Jan 16, 2025 at 10:47 PM Stephen Hemminger <stephen at networkplumber.org> wrote:
> On Thu, 16 Jan 2025 20:56:39 +0100
> Ariel Otilibili <ariel.otilibili at 6wind.com> wrote:
>
> > Both branches of the loop share the same logic. Now each one is a
> > goto dispatcher; either to out (end of function), or to
> > stats (continuation of the loop).
> >
> > Bugzilla ID: 1440
> > Depends-on: patch-1 ("net/af_xdp: fix use after free in af_xdp_tx_zc()")
> > Signed-off-by: Ariel Otilibili <ariel.otilibili at 6wind.com>
> > ---
> > drivers/net/af_xdp/rte_eth_af_xdp.c | 57 ++++++++++++++---------------
> > 1 file changed, 27 insertions(+), 30 deletions(-)
> >
> > diff --git a/drivers/net/af_xdp/rte_eth_af_xdp.c b/drivers/net/af_xdp/rte_eth_af_xdp.c
> > index 4326a29f7042..8b42704b6d9f 100644
> > --- a/drivers/net/af_xdp/rte_eth_af_xdp.c
> > +++ b/drivers/net/af_xdp/rte_eth_af_xdp.c
> > @@ -551,6 +551,7 @@ af_xdp_tx_zc(void *queue, struct rte_mbuf **bufs, uint16_t nb_pkts)
> > uint64_t addr, offset;
> > struct xsk_ring_cons *cq = &txq->pair->cq;
> > uint32_t free_thresh = cq->size >> 1;
> > + struct rte_mbuf *local_mbuf = NULL;
> >
> > if (xsk_cons_nb_avail(cq, free_thresh) >= free_thresh)
> > pull_umem_cq(umem, XSK_RING_CONS__DEFAULT_NUM_DESCS, cq);
> > @@ -565,21 +566,10 @@ af_xdp_tx_zc(void *queue, struct rte_mbuf **bufs, uint16_t nb_pkts)
> > &idx_tx))
> > goto out;
> > }
> > - desc = xsk_ring_prod__tx_desc(&txq->tx, idx_tx);
> > - desc->len = mbuf->pkt_len;
> > - addr = (uint64_t)mbuf - (uint64_t)umem->buffer -
> > - umem->mb_pool->header_size;
> > - offset = rte_pktmbuf_mtod(mbuf, uint64_t) -
> > - (uint64_t)mbuf +
> > - umem->mb_pool->header_size;
> > - offset = offset << XSK_UNALIGNED_BUF_OFFSET_SHIFT;
> > - desc->addr = addr | offset;
> > - tx_bytes += mbuf->pkt_len;
> > - count++;
> > +
> > + goto stats;
> > } else {
> > - struct rte_mbuf *local_mbuf =
> > - rte_pktmbuf_alloc(umem->mb_pool);
> > - void *pkt;
> > + local_mbuf = rte_pktmbuf_alloc(umem->mb_pool);
> >
> > if (local_mbuf == NULL)
> > goto out;
> > @@ -589,23 +579,30 @@ af_xdp_tx_zc(void *queue, struct rte_mbuf **bufs, uint16_t nb_pkts)
> > goto out;
> > }
> >
> > - desc = xsk_ring_prod__tx_desc(&txq->tx, idx_tx);
> > - desc->len = mbuf->pkt_len;
> > -
> > -			addr = (uint64_t)local_mbuf - (uint64_t)umem->buffer -
> > - umem->mb_pool->header_size;
> > - offset = rte_pktmbuf_mtod(local_mbuf, uint64_t) -
> > - (uint64_t)local_mbuf +
> > - umem->mb_pool->header_size;
> > -			pkt = xsk_umem__get_data(umem->buffer, addr + offset);
> > - offset = offset << XSK_UNALIGNED_BUF_OFFSET_SHIFT;
> > - desc->addr = addr | offset;
> > - rte_memcpy(pkt, rte_pktmbuf_mtod(mbuf, void *),
> > - desc->len);
> > - tx_bytes += mbuf->pkt_len;
> > - rte_pktmbuf_free(mbuf);
> > - count++;
> > + goto stats;
> > }
> > +stats:
> > + struct rte_mbuf *tmp;
> > + void *pkt;
> > + tmp = mbuf->pool == umem->mb_pool ? mbuf : local_mbuf;
> > +
> > + desc = xsk_ring_prod__tx_desc(&txq->tx, idx_tx);
> > + desc->len = mbuf->pkt_len;
> > +
> > +		addr = (uint64_t)tmp - (uint64_t)umem->buffer - umem->mb_pool->header_size;
> > +		offset = rte_pktmbuf_mtod(tmp, uint64_t) - (uint64_t)tmp + umem->mb_pool->header_size;
> > + offset = offset << XSK_UNALIGNED_BUF_OFFSET_SHIFT;
> > + desc->addr = addr | offset;
> > +
> > + if (mbuf->pool == umem->mb_pool) {
> > + tx_bytes += mbuf->pkt_len;
> > + } else {
> > + pkt = xsk_umem__get_data(umem->buffer, addr + offset);
> > + rte_memcpy(pkt, rte_pktmbuf_mtod(mbuf, void *), desc->len);
> > + tx_bytes += mbuf->pkt_len;
> > + rte_pktmbuf_free(mbuf);
> > + }
> > + count++;
> > }
> >
> > out:
>
> Indentation here is wrong, and looks suspect.
> Either stats tag should be outside of loop
> Or stats is inside loop, and both of those goto's are unnecessary
>
Thanks for the feedback; I am pushing a new series with an extra tab,
so it is obvious that the stats label belongs to the loop.