[dpdk-dev] [PATCH v2 1/3] eal/arm64: add 128-bit atomic compare exchange

Honnappa Nagarahalli Honnappa.Nagarahalli at arm.com
Mon Jun 24 18:12:47 CEST 2019


<snip>

> > >
> > > Add 128-bit atomic compare exchange on aarch64.
> > >
> > > Signed-off-by: Phil Yang <phil.yang at arm.com>
> > > Reviewed-by: Honnappa Nagarahalli <honnappa.nagarahalli at arm.com>
> > > Tested-by: Honnappa Nagarahalli <honnappa.nagarahalli at arm.com>
> > > ---
> > > This patch depends on 'eal/stack: fix 'pointer-sign' warning'
> > > http://patchwork.dpdk.org/patch/54840/
> > >
> > > +
> > > +#ifdef __ARM_FEATURE_ATOMICS
> > > +static inline rte_int128_t
> > > +__rte_casp(rte_int128_t *dst, rte_int128_t old, rte_int128_t updated,
> > > +	int mo)
> > > +{
> >
> > Better to change to "const int mo".
> >
> > > +
> > > +	/* The caspX instructions require the register pair at operand 1
> > > +	 * to start at an even-numbered register, so bind the local
> > > +	 * variables to specific registers here.
> > > +	 */
> > > +	register uint64_t x0 __asm("x0") = (uint64_t)old.val[0];
> > > +	register uint64_t x1 __asm("x1") = (uint64_t)old.val[1];
> > > +	register uint64_t x2 __asm("x2") = (uint64_t)updated.val[0];
> > > +	register uint64_t x3 __asm("x3") = (uint64_t)updated.val[1];
> > > +
> > > +	if (mo == __ATOMIC_RELAXED) {
> > > +		asm volatile(
> > > +				"casp %[old0], %[old1], %[upd0], %[upd1], [%[dst]]"
> > > +				: [old0] "+r" (x0),
> > > +				  [old1] "+r" (x1)
> > > +				: [upd0] "r" (x2),
> > > +				  [upd1] "r" (x3),
> > > +				  [dst] "r" (dst)
> > > +				: "memory");
> > > +	} else if (mo == __ATOMIC_ACQUIRE) {
> > > +		asm volatile(
> > > +				"caspa %[old0], %[old1], %[upd0], %[upd1], [%[dst]]"
> > > +				: [old0] "+r" (x0),
> > > +				  [old1] "+r" (x1)
> > > +				: [upd0] "r" (x2),
> > > +				  [upd1] "r" (x3),
> > > +				  [dst] "r" (dst)
> > > +				: "memory");
> > > +	} else if (mo == __ATOMIC_ACQ_REL) {
> > > +		asm volatile(
> > > +				"caspal %[old0], %[old1], %[upd0], %[upd1], [%[dst]]"
> > > +				: [old0] "+r" (x0),
> > > +				  [old1] "+r" (x1)
> > > +				: [upd0] "r" (x2),
> > > +				  [upd1] "r" (x3),
> > > +				  [dst] "r" (dst)
> > > +				: "memory");
> > > +	} else if (mo == __ATOMIC_RELEASE) {
> > > +		asm volatile(
> > > +				"caspl %[old0], %[old1], %[upd0], %[upd1], [%[dst]]"
> > > +				: [old0] "+r" (x0),
> > > +				  [old1] "+r" (x1)
> > > +				: [upd0] "r" (x2),
> > > +				  [upd1] "r" (x3),
> > > +				  [dst] "r" (dst)
> > > +				: "memory");
> >
> > I think this duplicated code can be avoided with a macro that takes
> > casp/caspa/caspal/caspl as an argument.
> >
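For illustration, such a macro could look roughly like this (a sketch
only, not part of the posted patch; __RTE_CASP_ASM is an invented name):

/* 'op' must be a string literal so it concatenates with the template */
#define __RTE_CASP_ASM(op, dst, x0, x1, x2, x3)                    \
	asm volatile(                                              \
		op " %[old0], %[old1], %[upd0], %[upd1], [%[dst]]" \
		: [old0] "+r" (x0),                                \
		  [old1] "+r" (x1)                                 \
		: [upd0] "r" (x2),                                 \
		  [upd1] "r" (x3),                                 \
		  [dst] "r" (dst)                                  \
		: "memory")

	if (mo == __ATOMIC_RELAXED)
		__RTE_CASP_ASM("casp", dst, x0, x1, x2, x3);
	else if (mo == __ATOMIC_ACQUIRE)
		__RTE_CASP_ASM("caspa", dst, x0, x1, x2, x3);
	else if (mo == __ATOMIC_ACQ_REL)
		__RTE_CASP_ASM("caspal", dst, x0, x1, x2, x3);
	else if (mo == __ATOMIC_RELEASE)
		__RTE_CASP_ASM("caspl", dst, x0, x1, x2, x3);
	else
		rte_panic("Invalid memory order\n");
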
> > > +	} else {
> > > +		rte_panic("Invalid memory order\n");
> >
> >
> > rte_panic should be removed from libraries. In this case, I think an
> > invalid mo can fall back to the strongest barrier.
It is added here to catch programming errors. The memory order can be supplied either as a compile-time constant or at run time; 'rte_panic' covers both cases.
Falling back to the strongest memory order would only mask the programming error.
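
As an illustration of the intended use (a sketch; 'ring_slot' and
'try_claim' are invented names, and __rte_casp/rte_int128_t are assumed
visible as in the posted patch), the memory order is normally a
compile-time constant, so the if/else chain folds down to a single
caspX instruction:

static rte_int128_t ring_slot;	/* hypothetical 128-bit shared slot */

static inline int
try_claim(void)
{
	rte_int128_t expected = {.val = {0, 0} };
	rte_int128_t desired = {.val = {1, 0} };
	rte_int128_t old;

	/* mo is a compile-time constant here, so the compiler folds
	 * the if/else chain in __rte_casp and emits a single caspa.
	 */
	old = __rte_casp(&ring_slot, expected, desired, __ATOMIC_ACQUIRE);

	/* caspX writes the observed value back into the first register
	 * pair; the exchange succeeded iff it equals 'expected'.
	 */
	return old.val[0] == expected.val[0] &&
		old.val[1] == expected.val[1];
}

A mistyped order such as __ATOMIC_CONSUME would then hit rte_panic() at
the first call instead of silently running with the wrong barrier.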

> >
> > > +	}
> > > +
> > > +	old.val[0] = x0;
> > > +	old.val[1] = x1;
> > > +
> > > +	return old;
> > > +}
> > > +#else

<snip>


