[PATCH v5] mbuf: optimize segment prefree

Bruce Richardson bruce.richardson at intel.com
Thu Oct 23 10:08:52 CEST 2025


On Thu, Oct 23, 2025 at 08:01:36AM +0000, Morten Brørup wrote:
> Refactored rte_pktmbuf_prefree_seg() for both performance and readability.
> 
> With the optimized RTE_MBUF_DIRECT() macro, the common likely code path
> now fits within one instruction cache line on x86-64 when built with GCC.
> 
> Signed-off-by: Morten Brørup <mb at smartsharesystems.com>
> Acked-by: Konstantin Ananyev <konstantin.ananyev at huawei.com>
> Acked-by: Chengwen Feng <fengchengwen at huawei.com>
> Reviewed-by: Bruce Richardson <bruce.richardson at intel.com>
> ---
> v5:
> * Removed the plain RTE_MBUF_DIRECT() macro; only the optimized
>   variant is kept. (Bruce Richardson)
>   Further testing on Godbolt confirmed that other compilers benefit from
>   the optimized macro too.
> * Shortened the description of the RTE_MBUF_DIRECT() macro, and now
>   provide only one example of compiler-emitted code. (Bruce Richardson)
> * Consolidated the static_assert() into one, covering both little- and
>   big-endian byte order.
>   This also reduces the amount of endian-conditional source code and
>   improves readability.
>   (Bruce Richardson)
> * Added a comment noting that MSB means "most significant byte".

LGTM now, thanks!

