[PATCH 2/2] service: fix potential stats race-condition on MT services
Morten Brørup
mb at smartsharesystems.com
Fri Jul 8 18:29:42 CEST 2022
> From: Honnappa Nagarahalli [mailto:Honnappa.Nagarahalli at arm.com]
> Sent: Friday, 8 July 2022 17.16
>
> <snip>
> >
> > This commit fixes a potential racey-add that could occur if multiple
> > service-lcores were executing the same MT-safe service at the same
> > time, with service statistics collection enabled.
> >
> > Because multiple threads can run and execute the service, the stats
> > values can have multiple writer threads, resulting in the requirement
> > of using atomic addition for correctness.
> >
> > Note that when a MT unsafe service is executed, a spinlock is held,
> > so the stats increments are protected. This fact is used to avoid
> > executing atomic add instructions when not required.
> >
> > This patch causes a 1.25x increase in cycle-cost for polling a MT
> > safe service when statistics are enabled. No change was seen for MT
> > unsafe services, or when statistics are disabled.
> >
> > Reported-by: Mattias Rönnblom <mattias.ronnblom at ericsson.com>
> > Suggested-by: Honnappa Nagarahalli <Honnappa.Nagarahalli at arm.com>
> > Suggested-by: Morten Brørup <mb at smartsharesystems.com>
> > Signed-off-by: Harry van Haaren <harry.van.haaren at intel.com>
> >
> > ---
> > ---
> > lib/eal/common/rte_service.c | 10 ++++++++--
> > 1 file changed, 8 insertions(+), 2 deletions(-)
> >
> > diff --git a/lib/eal/common/rte_service.c b/lib/eal/common/rte_service.c
> > index ef31b1f63c..f045e74ef3 100644
> > --- a/lib/eal/common/rte_service.c
> > +++ b/lib/eal/common/rte_service.c
> > @@ -363,9 +363,15 @@ service_runner_do_callback(struct rte_service_spec_impl *s,
> >  		uint64_t start = rte_rdtsc();
> >  		s->spec.callback(userdata);
> >  		uint64_t end = rte_rdtsc();
> > -		s->cycles_spent += end - start;
> > +		uint64_t cycles = end - start;
> >  		cs->calls_per_service[service_idx]++;
> > -		s->calls++;
> > +		if (service_mt_safe(s)) {
> > +			__atomic_fetch_add(&s->cycles_spent, cycles, __ATOMIC_RELAXED);
> > +			__atomic_fetch_add(&s->calls, 1, __ATOMIC_RELAXED);
> > +		} else {
> > +			s->cycles_spent += cycles;
> > +			s->calls++;
> This is still a problem from a reader perspective. It is possible that
> the writes could be split while a reader is reading the stats. These
> need to be atomic adds.
I don't understand what you suggest can go wrong here, Honnappa. If you are talking about 64 bit counters on 32 bit architectures, then I understand the problem (and have many years of direct experience with it myself). Otherwise, I hope you can elaborate, or point me to educational material about the issue; I consider this a learning opportunity. :-)
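
If it is the 32 bit case, here is how I picture the reader side. This is only a minimal sketch; stats_read_calls() is a hypothetical helper, not something in rte_service.c, and it assumes the same GCC __atomic built-ins the patch already uses:

/* Hypothetical reader, for illustration only - not part of rte_service.c. */
static uint64_t
stats_read_calls(const struct rte_service_spec_impl *s)
{
	/*
	 * On a 32 bit target a plain "return s->calls;" may be compiled
	 * into two 32 bit loads, so a concurrent increment can be observed
	 * half-written (e.g. new low word combined with old high word).
	 * An atomic load cannot be torn, and relaxed ordering suffices,
	 * because only the value itself matters here, not its ordering
	 * relative to other memory accesses.
	 */
	return __atomic_load_n(&s->calls, __ATOMIC_RELAXED);
}

For that guarantee to hold, the writer side must also use __atomic_fetch_add() (or at least an atomic store) on both the MT safe and the MT unsafe paths, which I suppose is what you are suggesting.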
>
> > +		}
> >  	} else
> >  		s->spec.callback(userdata);
> >  }
> > --
> > 2.32.0