[PATCH v2 1/2] eal: provide leading and trailing zero bit count abstraction
Tyler Retzlaff
roretzla at linux.microsoft.com
Wed Apr 5 17:22:16 CEST 2023
On Wed, Apr 05, 2023 at 09:44:54AM +0100, Bruce Richardson wrote:
> On Tue, Apr 04, 2023 at 02:23:22PM -0700, Tyler Retzlaff wrote:
> > On Thu, Jan 05, 2023 at 08:09:19AM +0100, Morten Brørup wrote:
> > > > From: Tyler Retzlaff [mailto:roretzla at linux.microsoft.com]
> > > > Sent: Thursday, 24 November 2022 00.43
> > > >
> > > > Provide an abstraction for leading and trailing zero bit counting
> > > > functions to hide compiler specific intrinsics and builtins.
> > > >
> > > > Signed-off-by: Tyler Retzlaff <roretzla at linux.microsoft.com>
> > > > ---
> > > > lib/eal/include/meson.build | 1 +
> > > > lib/eal/include/rte_bitcount.h | 265 +++++++++++++++++++++++++++++++++++++++++
> > > > 2 files changed, 266 insertions(+)
> > > > create mode 100644 lib/eal/include/rte_bitcount.h
> > > >
> > > > diff --git a/lib/eal/include/meson.build b/lib/eal/include/meson.build
> > > > index cfcd40a..8ff1d65 100644
> > > > --- a/lib/eal/include/meson.build
> > > > +++ b/lib/eal/include/meson.build
> > > > @@ -5,6 +5,7 @@ includes += include_directories('.')
> > > >
> > > > headers += files(
> > > > 'rte_alarm.h',
> > > > + 'rte_bitcount.h',
> > > > 'rte_bitmap.h',
> > > > 'rte_bitops.h',
> > > > 'rte_branch_prediction.h',
> > > > diff --git a/lib/eal/include/rte_bitcount.h b/lib/eal/include/rte_bitcount.h
> > > > new file mode 100644
> > > > index 0000000..587de52
> > > > --- /dev/null
> > > > +++ b/lib/eal/include/rte_bitcount.h
> > > > @@ -0,0 +1,265 @@
> > > > +/* SPDX-License-Identifier: BSD-3-Clause
> > > > + * Copyright (C) 2022 Microsoft Corporation
> > > > + */
> > > > +
> > > > +#ifndef _RTE_BITCOUNT_H_
> > > > +#define _RTE_BITCOUNT_H_
> > > > +
> > > > +#include <rte_compat.h>
> > > > +
> > > > +#ifdef __cplusplus
> > > > +extern "C" {
> > > > +#endif
> > > > +
> > > > +#ifdef RTE_TOOLCHAIN_MSVC
> > > > +
> > > > +/**
> > > > + * @warning
> > > > + * @b EXPERIMENTAL: this API may change, or be removed, without prior notice
> > > > + *
> > > > + * Get the count of leading 0-bits in v.
> > > > + *
> > > > + * @param v
> > > > + * The value.
> > > > + * @return
> > > > + * The count of leading zero bits.
> > > > + */
> > > > +__rte_experimental
> > > > +static inline unsigned int
> > > > +rte_clz(unsigned int v)
> > > > +{
> > > > + unsigned long rv;
> > > > +
> > > > + (void)_BitScanReverse(&rv, v);
> > > > +
> > > > + /* _BitScanReverse yields the index of the MSB, not the zero count */
> > > > + return 31 - (unsigned int)rv;
> > > > +}
> > > > +
> > > > +/**
> > > > + * @warning
> > > > + * @b EXPERIMENTAL: this API may change, or be removed, without prior notice
> > > > + *
> > > > + * Get the count of leading 0-bits in v.
> > > > + *
> > > > + * @param v
> > > > + * The value.
> > > > + * @return
> > > > + * The count of leading zero bits.
> > > > + */
> > > > +__rte_experimental
> > > > +static inline unsigned int
> > > > +rte_clzl(unsigned long v)
> > >
> > > Don't use l (long) and ll (long long) for names (and types), use explicit bit lengths, 32 and 64.
> > >
> > > E.g.: rte_clz32(uint32_t v)
> >
> > so i just noticed this, but sometimes these functions receive size_t,
> > so naming them specifically 32/64-bit becomes problematic: we end up
> > with promotion on platforms where sizeof(size_t) == sizeof(long) == 4.
> >
> > i.e.
> > size_t s = ...;
> > x = rte_clz64(s); // assume 64-bit today
> >
> > this code is now broken because on a 32-bit platform s will get
> > promoted and the extra 32 zero-bits will be returned in the result,
> > breaking calculations.
> >
> > any thoughts? should we go back to l, ll?
> >
>
> Yes, promotion will happen, but I still think that the 32 and 64 versions
> are far clearer here in all cases. Anyone looking at the code will
> recognise that the result will be the leading zero count of a 64-bit number
> irrespective of the type actually passed in. It's less confusing now IMHO.
here's an example in the code that would result in a bad calculation, or
at least i believe so at a glance. switching to rte_clz32() would break
on 64-bit since it would truncate.
lib/eal/common/malloc_elem.c
-log2 = sizeof(size) * 8 - __builtin_clzl(size);
+log2 = sizeof(size) * 8 - rte_clz64(size);
if i'm right you'd have to conditionally compile at the call site:
#ifdef RTE_ARCH_64
	rte_clz64()
#else
	rte_clz32()
#endif
and that seems very undesirable. another solution is to defer this
change until after the 23.07 release (where C11 can be used) and then
provide a single type-generic macro.
with C11 i can provide a single macro that doesn't need an 8/16/32/64
suffix.
size_t v;
n = rte_clz(v); // sizeof(v) doesn't matter.
>
> /Bruce