[PATCH v2] eal: fix unaligned loads/stores in rte_memcpy_generic
Stephen Hemminger
stephen at networkplumber.org
Sat Jan 15 23:13:42 CET 2022
On Sat, 15 Jan 2022 16:39:50 -0500
Luc Pelletier <lucp.at.work at gmail.com> wrote:
> diff --git a/lib/eal/x86/include/rte_memcpy.h b/lib/eal/x86/include/rte_memcpy.h
> index 1b6c6e585f..e422397e49 100644
> --- a/lib/eal/x86/include/rte_memcpy.h
> +++ b/lib/eal/x86/include/rte_memcpy.h
> @@ -45,6 +45,23 @@ extern "C" {
> static __rte_always_inline void *
> rte_memcpy(void *dst, const void *src, size_t n);
>
> +/**
> + * Copy bytes from one location to another,
> + * locations should not overlap.
> + * Use with unaligned src/dst, and n <= 15.
> + */
> +static __rte_always_inline void *
> +rte_mov15_or_less_unaligned(void *dst, const void *src, size_t n)
> +{
> + void *ret = dst;
> + for (; n; n--) {
> + *((char *)dst) = *((const char *) src);
> + dst = ((char *)dst) + 1;
> + src = ((const char *)src) + 1;
> + }
> + return ret;
> +}
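For context, a sketch of the kind of pattern such a byte-wise helper replaces (my illustration, not taken from the patch): casting an arbitrary pointer to a wider type and dereferencing it is undefined behaviour in ISO C when the pointer is misaligned, even though the access itself works on x86 hardware.

#include <stdint.h>

/* Undefined behaviour in ISO C if dst or src is not 8-byte
 * aligned, even though x86 hardware tolerates the access. */
static inline void copy8_by_cast(void *dst, const void *src)
{
	*(uint64_t *)dst = *(const uint64_t *)src;
}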
x86 always allows unaligned access, regardless of what the tools say.
Why impose additional overhead in performance-critical code?
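For what it's worth, the usual way to satisfy both the standard and the hardware without a byte loop is to funnel the access through memcpy: GCC and Clang fold a fixed-size memcpy into a single unaligned mov on x86, so there is no undefined behaviour in the C source and no extra overhead in the generated code. A minimal sketch (my illustration, not part of the patch; the helper names are hypothetical):

#include <stdint.h>
#include <string.h>

/* Unaligned 64-bit load expressed through memcpy. With GCC/Clang
 * at -O2 on x86 this compiles to a single mov, not a call. */
static inline uint64_t load_u64_unaligned(const void *p)
{
	uint64_t v;
	memcpy(&v, p, sizeof(v));
	return v;
}

/* Matching unaligned 64-bit store; likewise a single mov. */
static inline void store_u64_unaligned(void *p, uint64_t v)
{
	memcpy(p, &v, sizeof(v));
}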