[PATCH] eal/x86: cache queried CPU flags

Konstantin Ananyev konstantin.ananyev at huawei.com
Fri Oct 11 15:00:40 CEST 2024



> -----Original Message-----
> From: Bruce Richardson <bruce.richardson at intel.com>
> Sent: Friday, October 11, 2024 1:48 PM
> To: Konstantin Ananyev <konstantin.ananyev at huawei.com>
> Cc: dev at dpdk.org; david.marchand at redhat.com
> Subject: Re: [PATCH] eal/x86: cache queried CPU flags
> 
> On Fri, Oct 11, 2024 at 12:42:01PM +0000, Konstantin Ananyev wrote:
> >
> >
> > > Rather than re-querying the HW each time a CPU flag is requested, we can
> > > just save the return value in the flags array. This should speed up
> > > repeated querying of CPU flags, and provides a workaround for a reported
> > > issue where errors are seen with constant querying of the AVX-512 CPU
> > > flag from a non-AVX VM.
> > >
> > > Bugzilla Id: 1501
> > >
> > > Signed-off-by: Bruce Richardson <bruce.richardson at intel.com>
> > > ---
> > >  lib/eal/x86/rte_cpuflags.c | 20 +++++++++++++++-----
> > >  1 file changed, 15 insertions(+), 5 deletions(-)
> > >
> > > diff --git a/lib/eal/x86/rte_cpuflags.c b/lib/eal/x86/rte_cpuflags.c
> > > index 26163ab746..62e782fb4b 100644
> > > --- a/lib/eal/x86/rte_cpuflags.c
> > > +++ b/lib/eal/x86/rte_cpuflags.c
> > > @@ -8,6 +8,7 @@
> > >  #include <errno.h>
> > >  #include <stdint.h>
> > >  #include <string.h>
> > > +#include <stdbool.h>
> > >
> > >  #include "rte_cpuid.h"
> > >
> > > @@ -21,12 +22,14 @@ struct feature_entry {
> > >  	uint32_t bit;				/**< cpuid register bit */
> > >  #define CPU_FLAG_NAME_MAX_LEN 64
> > >  	char name[CPU_FLAG_NAME_MAX_LEN];       /**< String for printing */
> > > +	bool has_value;
> > > +	bool value;
> > >  };
> > >
> > >  #define FEAT_DEF(name, leaf, subleaf, reg, bit) \
> > >  	[RTE_CPUFLAG_##name] = {leaf, subleaf, reg, bit, #name },
> > >
> > > -const struct feature_entry rte_cpu_feature_table[] = {
> > > +struct feature_entry rte_cpu_feature_table[] = {
> > >  	FEAT_DEF(SSE3, 0x00000001, 0, RTE_REG_ECX,  0)
> > >  	FEAT_DEF(PCLMULQDQ, 0x00000001, 0, RTE_REG_ECX,  1)
> > >  	FEAT_DEF(DTES64, 0x00000001, 0, RTE_REG_ECX,  2)
> > > @@ -147,7 +150,7 @@ const struct feature_entry rte_cpu_feature_table[] = {
> > >  int
> > >  rte_cpu_get_flag_enabled(enum rte_cpu_flag_t feature)
> > >  {
> > > -	const struct feature_entry *feat;
> > > +	struct feature_entry *feat;
> > >  	cpuid_registers_t regs;
> > >  	unsigned int maxleaf;
> > >
> > > @@ -156,6 +159,8 @@ rte_cpu_get_flag_enabled(enum rte_cpu_flag_t feature)
> > >  		return -ENOENT;
> > >
> > >  	feat = &rte_cpu_feature_table[feature];
> > > +	if (feat->has_value)
> > > +		return feat->value;
> > >
> > >  	if (!feat->leaf)
> > >  		/* This entry in the table wasn't filled out! */
> > > @@ -163,8 +168,10 @@ rte_cpu_get_flag_enabled(enum rte_cpu_flag_t feature)
> > >
> > >  	maxleaf = __get_cpuid_max(feat->leaf & 0x80000000, NULL);
> > >
> > > -	if (maxleaf < feat->leaf)
> > > -		return 0;
> > > +	if (maxleaf < feat->leaf) {
> > > +		feat->value = 0;
> > > +		goto out;
> > > +	}
> > >
> > >  #ifdef RTE_TOOLCHAIN_MSVC
> > >  	__cpuidex(regs, feat->leaf, feat->subleaf);
> > > @@ -175,7 +182,10 @@ rte_cpu_get_flag_enabled(enum rte_cpu_flag_t feature)
> > >  #endif
> > >
> > >  	/* check if the feature is enabled */
> > > -	return (regs[feat->reg] >> feat->bit) & 1;
> > > +	feat->value = (regs[feat->reg] >> feat->bit) & 1;
> > > +out:
> > > +	feat->has_value = true;
> > > +	return feat->value;
> >
> > If that function can be called by 2 (or more) threads simultaneously,
> > then in theory 'feat->has_value = true;' can be reordered with
> > 'feat->value = (regs[feat->reg] >> feat->bit) & 1;' (by the CPU or the compiler),
> > and some thread(s) can read a wrong feat->value.
> > The probability of such a collision is really low, but it still seems not impossible.
> >
> 
> Well, since this code is x86-specific, the externally visible store ordering
> will match the instruction store ordering. Therefore, I think a compiler
> barrier is all that is necessary before the feat->has_value assignment,
> correct?

Yep, seems so.
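
For reference, a minimal sketch of how the tail of rte_cpu_get_flag_enabled()
could look with such a barrier (an illustration only, not the final patch; it
assumes rte_compiler_barrier() from rte_atomic.h is available here):

	/* check if the feature is enabled */
	feat->value = (regs[feat->reg] >> feat->bit) & 1;
out:
	/*
	 * Make sure the store to feat->value is emitted before the store to
	 * feat->has_value. On x86 stores become visible in program order, so
	 * a compiler barrier is enough to prevent a concurrent reader that
	 * sees has_value == true from reading a stale value.
	 */
	rte_compiler_barrier();
	feat->has_value = true;
	return feat->value;

With that ordering, a second caller either recomputes the flag itself or
observes has_value only after the cached value is already valid.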

