[PATCH] eal: add cache guard to per-lcore PRNG state
Morten Brørup
mb at smartsharesystems.com
Fri Sep 29 20:55:01 CEST 2023
PING for review.
Stephen, the discussion took quite a few turns, but didn't seem to reach a better solution. If you don't object to this simple patch, could you please also ack/review it, so it can be applied.
> From: Mattias Rönnblom [mailto:hofors at lysator.liu.se]
> Sent: Monday, 4 September 2023 13.57
>
> On 2023-09-04 11:26, Morten Brørup wrote:
> > The per-lcore random state is frequently updated by their individual
> > lcores, so add a cache guard to prevent CPU cache thrashing.
> >
>
> "to prevent false sharing in case the CPU employs a next-N-lines (or
> similar) hardware prefetcher"
>
> In my world, cache thrashing and cache line contention are two different
> things.
You are right, Mattias.
I didn't give the description much thought, and simply used "cache thrashing" in a broad, general sense. I think most readers will get the point anyway. Or they could take a look at the description provided for the RTE_CACHE_GUARD macro itself. :-)
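To illustrate the distinction with a rough, hypothetical sketch (not DPDK code; CACHE_LINE_SIZE, GUARD_LINES, prng_state and states[] are made-up names standing in for the real RTE_CACHE_GUARD and __rte_cache_aligned pieces):

/*
 * Hypothetical illustration only - not DPDK code.
 *
 * Each per-lcore state is cache-line aligned, so two lcores never
 * write to the same line (no contention on the state itself).
 * However, a next-N-lines hardware prefetcher serving lcore A's
 * access to states[0] may also pull in the line right after it,
 * which without a guard would belong to lcore B's states[1]. The
 * trailing guard keeps that prefetched line empty, so B's frequent
 * updates never invalidate a line that A has speculatively fetched.
 */
#include <stdint.h>

#define CACHE_LINE_SIZE 64	/* assumed line size for this sketch */
#define GUARD_LINES 1		/* empty lines reserved after the state */

struct prng_state {
	uint64_t z1;
	uint64_t z2;
	uint64_t z3;
	uint64_t z4;
	uint64_t z5;
	/* stands in for RTE_CACHE_GUARD in this sketch */
	char guard[CACHE_LINE_SIZE * GUARD_LINES];
} __attribute__((aligned(CACHE_LINE_SIZE)));	/* i.e. __rte_cache_aligned */

static struct prng_state states[128];	/* one instance per lcore id */

The guard trades a little memory per lcore for keeping the prefetcher's reach inside the owning lcore's own state.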
>
> Other than that,
> Acked-by: Mattias Rönnblom <mattias.ronnblom at ericsson.com>
>
> > Depends-on: series-29415 ("clarify purpose of empty cache lines")
> >
> > Signed-off-by: Morten Brørup <mb at smartsharesystems.com>
> > ---
> > lib/eal/common/rte_random.c | 1 +
> > 1 file changed, 1 insertion(+)
> >
> > diff --git a/lib/eal/common/rte_random.c b/lib/eal/common/rte_random.c
> > index 565f2401ce..3df0c7004a 100644
> > --- a/lib/eal/common/rte_random.c
> > +++ b/lib/eal/common/rte_random.c
> > @@ -18,6 +18,7 @@ struct rte_rand_state {
> >  	uint64_t z3;
> >  	uint64_t z4;
> >  	uint64_t z5;
> > +	RTE_CACHE_GUARD;
> >  } __rte_cache_aligned;
> >
> >  /* One instance each for every lcore id-equipped thread, and one