[PATCH v7] mbuf: optimize segment prefree

Morten Brørup mb at smartsharesystems.com
Fri Oct 24 10:58:47 CEST 2025


> From: Konstantin Ananyev [mailto:konstantin.ananyev at huawei.com]
> Sent: Friday, 24 October 2025 10.20
> 
> >
> > Refactored rte_pktmbuf_prefree_seg() for both performance and readability.
> >
> > With the optimized RTE_MBUF_DIRECT() macro, the common likely code path
> > now fits within one instruction cache line on x86-64 when built with GCC.
> >
> > Signed-off-by: Morten Brørup <mb at smartsharesystems.com>
> > Acked-by: Konstantin Ananyev <konstantin.ananyev at huawei.com>
> > Acked-by: Chengwen Feng <fengchengwen at huawei.com>
> > Reviewed-by: Bruce Richardson <bruce.richardson at intel.com>
> > ---
> > v7:
> > * Go back to long names instead of numerical value in RTE_MBUF_DIRECT()
> >   macro.
> >   (Konstantin Ananyev)
> > * Updated static_assert() accordingly.

[...]

> >   *
> >   * If a mbuf embeds its own data after the rte_mbuf structure, this mbuf
> >   * can be defined as a direct mbuf.
> > - */
> > + *
> > + * Note: Macro optimized for code size.
> > + *
> > + * The plain macro would be:
> > + * \code{.c}
> > + *      #define RTE_MBUF_DIRECT(mb) \
> > + *          (!((mb)->ol_flags & (RTE_MBUF_F_INDIRECT | RTE_MBUF_F_EXTERNAL)))
> > + * \endcode
> > + *
> > + * The flags RTE_MBUF_F_INDIRECT and RTE_MBUF_F_EXTERNAL are both in the MSB (most significant
> > + * byte) of the 64-bit ol_flags field, so we only compare this one byte instead of all 64 bits.
> > + *
> > + * E.g., GCC version 16.0.0 20251019 (experimental) generates the following code for x86-64.
> > + *
> > + * With the plain macro, 17 bytes of instructions:
> > + * \code
> > + *      movabs rax,0x6000000000000000       // 10 bytes
> > + *      and    rax,QWORD PTR [rdi+0x18]     // 4 bytes
> > + *      sete   al                           // 3 bytes
> > + * \endcode
> > + * With this optimized macro, only 7 bytes of instructions:
> > + * \code
> > + *      test   BYTE PTR [rdi+0x1f],0x60     // 4 bytes
> > + *      sete   al                           // 3 bytes
> > + * \endcode
> > + */
> > +#ifdef __DOXYGEN__
> > +#define RTE_MBUF_DIRECT(mb) \
> > +	!(((const char *)(&(mb)->ol_flags))[MSB_OFFSET /* 7 or 0, depending on endianness */] & \
> > +	(char)((RTE_MBUF_F_INDIRECT | RTE_MBUF_F_EXTERNAL) >> (7 * CHAR_BIT)))
> > +#else /* !__DOXYGEN__ */
> > +#if RTE_BYTE_ORDER == RTE_LITTLE_ENDIAN
> > +/* On little endian architecture, the MSB of a 64-bit integer is at byte offset 7. */
> > +#define RTE_MBUF_DIRECT(mb) \
> > +	!(((const char *)(&(mb)->ol_flags))[7] & \
> > +	(char)((RTE_MBUF_F_INDIRECT | RTE_MBUF_F_EXTERNAL) >> (7 * CHAR_BIT)))
> > +#elif RTE_BYTE_ORDER == RTE_BIG_ENDIAN
> > +/* On big endian architecture, the MSB of a 64-bit integer is at byte offset 0. */
> >  #define RTE_MBUF_DIRECT(mb) \
> > -	(!((mb)->ol_flags & (RTE_MBUF_F_INDIRECT | RTE_MBUF_F_EXTERNAL)))
> > +	!(((const char *)(&(mb)->ol_flags))[0] & \
> > +	(char)((RTE_MBUF_F_INDIRECT | RTE_MBUF_F_EXTERNAL) >> (7 * CHAR_BIT)))
> > +#endif /* RTE_BYTE_ORDER */
> > +#endif /* !__DOXYGEN__ */
> > +/* Verify the optimization above. */
> > +static_assert(((RTE_MBUF_F_INDIRECT | RTE_MBUF_F_EXTERNAL) & (UINT64_C(0xFF) << (7 * CHAR_BIT))) ==
> > +	(RTE_MBUF_F_INDIRECT | RTE_MBUF_F_EXTERNAL),
> > +	"(RTE_MBUF_F_INDIRECT | RTE_MBUF_F_EXTERNAL) is not at MSB");
> >
> >  /** Uninitialized or unspecified port. */
> >  #define RTE_MBUF_PORT_INVALID UINT16_MAX
> > --
> 
> LGTM, thanks for refactoring.

Thank you for reviewing, Konstantin.

I had no preference for v7 or v6, but Bruce and Thomas preferred v6, so v6 was applied.
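
For anyone who wants to reproduce the equivalence check outside of DPDK, here is a minimal standalone sketch (mine, not part of the patch). It compares the plain 64-bit mask test against the one-byte MSB test on a little endian machine. The F_INDIRECT/F_EXTERNAL stand-in values are assumptions inferred from the 0x6000000000000000 mask visible in the generated code quoted above (bits 61 and 62); the real flags live in rte_mbuf_core.h.

/*
 * Standalone sketch, not part of the patch.
 * Flag values are stand-ins inferred from the 0x6000000000000000 mask
 * in the generated code (bits 61 and 62).
 */
#include <assert.h>
#include <limits.h>
#include <stddef.h>
#include <stdint.h>
#include <stdio.h>

#define F_EXTERNAL UINT64_C(0x2000000000000000) /* stand-in for RTE_MBUF_F_EXTERNAL */
#define F_INDIRECT UINT64_C(0x4000000000000000) /* stand-in for RTE_MBUF_F_INDIRECT */

struct mb { uint64_t ol_flags; };

/* Plain variant: mask the full 64-bit ol_flags field. */
#define DIRECT_PLAIN(m) (!((m)->ol_flags & (F_INDIRECT | F_EXTERNAL)))

/* Optimized variant, little endian: test only the MSB at byte offset 7. */
#define DIRECT_BYTE(m) \
	!(((const char *)(&(m)->ol_flags))[7] & \
	(char)((F_INDIRECT | F_EXTERNAL) >> (7 * CHAR_BIT)))

int main(void)
{
	const uint64_t samples[] = {
		0, F_EXTERNAL, F_INDIRECT, F_EXTERNAL | F_INDIRECT,
		UINT64_C(1), UINT64_C(0x00FFFFFFFFFFFFFF), UINT64_MAX
	};
	size_t i;

	for (i = 0; i < sizeof(samples) / sizeof(samples[0]); i++) {
		struct mb m = { .ol_flags = samples[i] };
		/* Both variants must agree for every sample. */
		assert(DIRECT_PLAIN(&m) == DIRECT_BYTE(&m));
	}
	puts("byte test matches 64-bit mask test for all samples");
	return 0;
}

On a big endian target the byte index would be 0 instead of 7, as in the patch's RTE_BIG_ENDIAN branch; reading the uint64_t through a char pointer is well defined, since char is exempt from strict aliasing.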


