[PATCH v1] mbuf: remove the redundant code for mbuf prefree
Konstantin Ananyev
konstantin.ananyev at huawei.com
Mon Dec 4 12:07:08 CET 2023
> > In the 'rte_pktmbuf_prefree_seg' function, 'rte_mbuf_refcnt_read(m) == 1'
> > and '__rte_mbuf_refcnt_update(m, -1) == 0' cover the same case, namely
> > that the mbuf's refcnt value is 1. Thus we can simplify the code and
> > remove the redundant part.
> >
> > Furthermore, according to [1], when the mbuf is stored inside the
> > mempool, the m->refcnt value should be 1, and an indirect mbuf is
> > detached from its parent. Thus update the description of the
> > 'rte_pktmbuf_prefree_seg' function accordingly.
> >
> > [1] https://patches.dpdk.org/project/dpdk/patch/20170404162807.20157-4-olivier.matz@6wind.com/
> >
> > Suggested-by: Ruifeng Wang <ruifeng.wang at arm.com>
> > Signed-off-by: Feifei Wang <feifei.wang2 at arm.com>
> > ---
> > lib/mbuf/rte_mbuf.h | 22 +++-------------------
> > 1 file changed, 3 insertions(+), 19 deletions(-)
> >
> > diff --git a/lib/mbuf/rte_mbuf.h b/lib/mbuf/rte_mbuf.h
> > index 286b32b788..42e9b50d51 100644
> > --- a/lib/mbuf/rte_mbuf.h
> > +++ b/lib/mbuf/rte_mbuf.h
> > @@ -1328,7 +1328,7 @@ static inline int
> > __rte_pktmbuf_pinned_extbuf_decref(struct rte_mbuf *m)
> > *
> > * This function does the same than a free, except that it does not
> > * return the segment to its pool.
> > - * It decreases the reference counter, and if it reaches 0, it is
> > + * It decreases the reference counter, and if it reaches 1, it is
>
> No, the original description is correct.
> However, the reference counter is set to 1 when the mbuf is put back in the pool, as a shortcut,
> so it does not need to be set to 1 again when the mbuf is allocated from the pool.
>
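
To illustrate the shortcut, here is a rough sketch (not the exact sources; the helper
name refcnt_shortcut_example is made up, the calls are the public mbuf API) of the
free/alloc round trip: the refcnt is parked at 1 while the mbuf sits in the mempool,
so the allocation path never has to write it.

#include <rte_mbuf.h>

/* Illustration only: the free side parks refcnt at 1, the alloc side
 * relies on it already being 1. Error handling omitted. */
static void
refcnt_shortcut_example(struct rte_mbuf *m, struct rte_mempool *mp)
{
	/* Free side: rte_pktmbuf_prefree_seg() returns the segment with
	 * refcnt already (re)set to 1, and only then is it handed back. */
	struct rte_mbuf *seg = rte_pktmbuf_prefree_seg(m);
	if (seg != NULL)
		rte_mbuf_raw_free(seg);	/* enters the pool with refcnt == 1 */

	/* Alloc side: the mbuf leaves the pool with refcnt == 1 already,
	 * so rte_pktmbuf_alloc() does not need to touch the counter. */
	struct rte_mbuf *n = rte_pktmbuf_alloc(mp);
	if (n != NULL)
		rte_pktmbuf_free(n);
}
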
> > * detached from its parent for an indirect mbuf.
> > *
> > * @param m
> > @@ -1358,25 +1358,9 @@ rte_pktmbuf_prefree_seg(struct rte_mbuf *m)
>
> The preceding "if (likely(rte_mbuf_refcnt_read(m) == 1)) {" is only a shortcut for the likely case.
>
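
For reference, a condensed sketch of the current control flow (sanity checks and the
pinned ext-buf handling elided; the name prefree_seg_sketch is just a placeholder for
the condensed version), reconstructed from the hunk quoted here:

#include <rte_branch_prediction.h>
#include <rte_mbuf.h>

static inline struct rte_mbuf *
prefree_seg_sketch(struct rte_mbuf *m)	/* condensed rte_pktmbuf_prefree_seg() */
{
	if (likely(rte_mbuf_refcnt_read(m) == 1)) {
		/* Shortcut: we hold the only reference, so no atomic op is
		 * needed and refcnt is already 1 for the pool. */
		if (!RTE_MBUF_DIRECT(m))
			rte_pktmbuf_detach(m);
		m->next = NULL;
		m->nb_segs = 1;
		return m;
	} else if (__rte_mbuf_refcnt_update(m, -1) == 0) {
		/* Contended path: drop our reference atomically; only the
		 * thread that brings the counter to 0 cleans up and parks
		 * the counter back at 1 before the mbuf goes to the pool. */
		if (!RTE_MBUF_DIRECT(m))
			rte_pktmbuf_detach(m);
		m->next = NULL;
		m->nb_segs = 1;
		rte_mbuf_refcnt_set(m, 1);
		return m;
	}
	/* Another thread still holds a reference: nothing to free here. */
	return NULL;
}
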
> > m->nb_segs = 1;
> >
> > return m;
> > -
> > - } else if (__rte_mbuf_refcnt_update(m, -1) == 0) {
> > -
> > - if (!RTE_MBUF_DIRECT(m)) {
> > - rte_pktmbuf_detach(m);
> > - if (RTE_MBUF_HAS_EXTBUF(m) &&
> > - RTE_MBUF_HAS_PINNED_EXTBUF(m) &&
> > - __rte_pktmbuf_pinned_extbuf_decref(m))
> > - return NULL;
> > - }
> > -
> > - if (m->next != NULL)
> > - m->next = NULL;
> > - if (m->nb_segs != 1)
> > - m->nb_segs = 1;
> > - rte_mbuf_refcnt_set(m, 1);
> > -
> > - return m;
> > }
> > +
> > + __rte_mbuf_refcnt_update(m, -1);
> > return NULL;
> > }
> >
> > --
> > 2.25.1
>
> NAK.
>
> This patch is not race safe.
+1, it is a bad idea.
> With the patch:
>
> This thread:
> if (likely(rte_mbuf_refcnt_read(m) == 1)) { // Assume it's 2.
>
> The other thread:
> if (likely(rte_mbuf_refcnt_read(m) == 1)) { // It's 2.
> __rte_mbuf_refcnt_update(m, -1); // Now it's 1.
> return NULL;
>
> This thread:
> __rte_mbuf_refcnt_update(m, -1); // Now it's 0.
> return NULL;
>
> Neither thread has done the "prefree" work.
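
To make this concrete, here is a small standalone C11 sketch (plain stdatomic/pthreads,
not DPDK code) of the election that the original '__rte_mbuf_refcnt_update(m, -1) == 0'
check performs: when the return value of the atomic decrement is inspected, exactly one
of the concurrent callers observes the counter reaching zero and therefore does the
cleanup; with the patch nobody inspects it, so nobody is elected. Note that
atomic_fetch_sub() returns the previous value, while the DPDK helper returns the updated
one, hence the comparison against 1 below.

#include <pthread.h>
#include <stdatomic.h>
#include <stdio.h>

#define NTHREADS 4

static atomic_int refcnt = NTHREADS;	/* each thread holds one reference */
static atomic_int cleanups;		/* how many threads did the cleanup */

static void *
release(void *arg)
{
	(void)arg;
	/* atomic_fetch_sub() returns the previous value, so the thread that
	 * sees 1 here is the one that brought the counter down to 0. */
	if (atomic_fetch_sub(&refcnt, 1) == 1)
		atomic_fetch_add(&cleanups, 1);	/* the "prefree" work goes here */
	return NULL;
}

int
main(void)
{
	pthread_t t[NTHREADS];
	int i;

	for (i = 0; i < NTHREADS; i++)
		pthread_create(&t[i], NULL, release, NULL);
	for (i = 0; i < NTHREADS; i++)
		pthread_join(t[i], NULL);

	/* Always prints 1: exactly one caller is elected to clean up. */
	printf("cleanups = %d\n", atomic_load(&cleanups));
	return 0;
}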