[PATCH] eal/x86: remove redundant round to improve performance
Morten Brørup
mb at smartsharesystems.com
Wed Mar 29 11:30:22 CEST 2023
> From: Leyi Rong [mailto:leyi.rong at intel.com]
> Sent: Wednesday, 29 March 2023 11.17
>
> In rte_memcpy_aligned(), the 64-byte block copy loop runs one redundant
> iteration when the size is a multiple of 64, because the catch-up copy
> after the loop covers the last 64 bytes anyway. Tighten the loop
> condition so that the catch-up copies the last 64 bytes in this case.
>
> Suggested-by: Morten Brørup <mb at smartsharesystems.com>
> Signed-off-by: Leyi Rong <leyi.rong at intel.com>
> ---
> lib/eal/x86/include/rte_memcpy.h | 2 +-
> 1 file changed, 1 insertion(+), 1 deletion(-)
>
> diff --git a/lib/eal/x86/include/rte_memcpy.h b/lib/eal/x86/include/rte_memcpy.h
> index d4d7a5cfc8..fd151be708 100644
> --- a/lib/eal/x86/include/rte_memcpy.h
> +++ b/lib/eal/x86/include/rte_memcpy.h
> @@ -846,7 +846,7 @@ rte_memcpy_aligned(void *dst, const void *src, size_t n)
> }
>
> /* Copy 64 bytes blocks */
> - for (; n >= 64; n -= 64) {
> + for (; n > 64; n -= 64) {
> rte_mov64((uint8_t *)dst, (const uint8_t *)src);
> dst = (uint8_t *)dst + 64;
> src = (const uint8_t *)src + 64;
> --
> 2.34.1
>
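For readers following along, here is a minimal, self-contained sketch of
the pattern the patch touches. The helper names (copy64, copy_aligned_tail)
are hypothetical stand-ins, not the actual DPDK code; the point is only to
show why the loop should stop at "> 64": the catch-up copy after the loop
always handles the final 64 bytes, so with ">= 64" those bytes were copied
twice whenever n is a multiple of 64.

	#include <stddef.h>
	#include <stdint.h>
	#include <string.h>

	/* Hypothetical stand-in for rte_mov64(): copy exactly 64 bytes. */
	static inline void
	copy64(uint8_t *dst, const uint8_t *src)
	{
		memcpy(dst, src, 64);
	}

	/* Sketch of the tail of an aligned copy; assumes n >= 64 on entry. */
	static inline void
	copy_aligned_tail(uint8_t *dst, const uint8_t *src, size_t n)
	{
		/* Copy 64-byte blocks, leaving the last 1..64 bytes for the
		 * catch-up copy below (this is the "> 64" from the patch). */
		for (; n > 64; n -= 64) {
			copy64(dst, src);
			dst += 64;
			src += 64;
		}

		/* Catch-up: copy the last 64 bytes, overlapping the loop's
		 * output when n is not a multiple of 64. With ">= 64" above,
		 * this copy was fully redundant for multiples of 64. */
		copy64(dst - 64 + n, src - 64 + n);
	}

For example, a 256-byte copy previously did four loop iterations plus the
catch-up (five 64-byte moves); with the patch it does three iterations plus
the catch-up (four 64-byte moves), with identical results.
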
Reviewed-by: Morten Brørup <mb at smartsharesystems.com>