[dpdk-dev] [PATCH v4 4/4] eal/atomic: add wrapper for c11 atomics

Honnappa Nagarahalli Honnappa.Nagarahalli at arm.com
Wed May 13 17:32:07 CEST 2020


<snip>

> > Subject: RE: [PATCH v4 4/4] eal/atomic: add wrapper for c11 atomics
> >
> > > From: Phil Yang [mailto:phil.yang at arm.com]
> > > Sent: Tuesday, May 12, 2020 10:03 AM
> > >
> > > Wraps up compiler C11 atomic built-ins with an explicit memory
> > > ordering parameter.
> > >
> > > Signed-off-by: Phil Yang <phil.yang at arm.com>
> > > ---
> > >  lib/librte_eal/include/generic/rte_atomic_c11.h | 139 ++++++++++++++++++++++++
> > >  lib/librte_eal/include/meson.build              |   1 +
> > >  2 files changed, 140 insertions(+)
> > >  create mode 100644 lib/librte_eal/include/generic/rte_atomic_c11.h
> > >
> > > diff --git a/lib/librte_eal/include/generic/rte_atomic_c11.h
> > > b/lib/librte_eal/include/generic/rte_atomic_c11.h
> > > new file mode 100644
> > > index 0000000..20490f4
> > > --- /dev/null
> > > +++ b/lib/librte_eal/include/generic/rte_atomic_c11.h
> > > @@ -0,0 +1,139 @@
> > > +/* SPDX-License-Identifier: BSD-3-Clause
> > > + * Copyright(c) 2020 Arm Limited
> > > + */
> > > +
> > > +#ifndef _RTE_ATOMIC_C11_H_
> > > +#define _RTE_ATOMIC_C11_H_
> > > +
> > > +#include <rte_common.h>
> > > +
> > > +/**
> > > + * @file
> > > + * c11 atomic operations
> > > + *
> > > + * This file wraps up compiler (GCC) c11 atomic built-ins.
> > > + *
> > > + * https://gcc.gnu.org/onlinedocs/gcc/_005f_005fatomic-Builtins.html
> > > + */
> > > +
> > > +#define memory_order_relaxed __ATOMIC_RELAXED
> > > +#define memory_order_consume __ATOMIC_CONSUME
> > > +#define memory_order_acquire __ATOMIC_ACQUIRE
> > > +#define memory_order_release __ATOMIC_RELEASE
> > > +#define memory_order_acq_rel __ATOMIC_ACQ_REL
> > > +#define memory_order_seq_cst __ATOMIC_SEQ_CST
> >
> > Why redefine these instead of using the original names?
> >
> > If we need to redefine them, they should be upper case and RTE_ prefixed.
> 
> Agreed, we don't need to redefine them. I was trying to align with the
> stdatomic library.
> I will remove them in the next version.
Agree, this will keep it in line with the rte_atomic128_cmp_exchange API.
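
For reference, with the redefines dropped, a caller would pass the
compiler's __ATOMIC_* constants directly, just as rte_atomic128_cmp_exchange
does today. A minimal sketch against the wrappers in this patch (the
variable and function names are illustrative only):

    #include <stdint.h>

    static uint32_t flag;

    static void bump_flag(void)
    {
        /* acquire-load the current value, release-store the update */
        uint32_t v = rte_atomic_load(&flag, __ATOMIC_ACQUIRE);
        rte_atomic_store(&flag, v + 1, __ATOMIC_RELEASE);
    }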

> 
> >
> > > +
> > > +/* Generic atomic load.
> > > + * It returns the contents of *PTR.
> > > + *
> > > + * The valid memory order variants are:
> > > + * memory_order_relaxed
> > > + * memory_order_consume
> > > + * memory_order_acquire
> > > + * memory_order_seq_cst
> > > + */
> > > +#define rte_atomic_load(PTR, MO)\
> > > +(__extension__ ({\
> > > +	typeof(PTR) _ptr = (PTR);\
> > > +	typeof(*_ptr) _ret;\
> > > +	__atomic_load(_ptr, &_ret, (MO));\
> > > +	_ret;\
> > > +}))
> > > +
> > > +/* Generic atomic store.
> > > + * It stores the value of VAL into *PTR.
> > > + *
> > > + * The valid memory order variants are:
> > > + * memory_order_relaxed
> > > + * memory_order_release
> > > + * memory_order_seq_cst
> > > + */
> > > +#define rte_atomic_store(PTR, VAL, MO)\
> > > +(__extension__ ({\
> > > +	typeof(PTR) _ptr = (PTR);\
> > > +	typeof(*_ptr) _val = (VAL);\
> > > +	__atomic_store(_ptr, &_val, (MO));\
> > > +}))
> > > +
> > > +/* Generic atomic exchange.
> > > + * It stores the value of VAL into *PTR.
> > > + * It returns the original value of *PTR.
> > > + *
> > > + * The valid memory order variants are:
> > > + * memory_order_relaxed
> > > + * memory_order_acquire
> > > + * memory_order_release
> > > + * memory_order_acq_rel
> > > + * memory_order_seq_cst
> > > + */
> > > +#define rte_atomic_exchange(PTR, VAL, MO)\
> > > +(__extension__ ({\
> > > +	typeof(PTR) _ptr = (PTR);\
> > > +	typeof(*_ptr) _val = (VAL);\
> > > +	typeof(*_ptr) _ret;\
> > > +	__atomic_exchange(_ptr, &_val, &_ret, (MO));\
> > > +	_ret;\
> > > +}))
> > > +
> > > +/* Generic atomic compare and exchange.
> > > + * It compares the contents of *PTR with the contents of *EXP.
> > > + * If equal, the operation is a read-modify-write operation that
> > > + * writes DES into *PTR.
> > > + * If they are not equal, the operation is a read and the current
> > > + * contents of *PTR are written into *EXP.
> > > + *
> > > + * The weak compare_exchange may fail spuriously and the strong
> > > + * variation will never fails spuriously.
> >
> > "will never fails spuriously" -> "will never fail" / "never fails".
> 
> Thanks, I will fix it in the next version.
> 
> >
> > And I suggest that you elaborate what "fail" means here, i.e. what
> > exactly can happen when it fails.
> 
> Yes. That would be better. I will update it in the new version.
> Failing spuriously means the compare-exchange operation acts as if *PTR !=
> *EXP and returns false even when they are actually equal.
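> For illustration, the usual pattern in which a spurious failure is
> harmless is a retry loop (a sketch only, not part of the patch; 'v' is
> a hypothetical uint32_t shared variable):
> 
>     uint32_t old = rte_atomic_load(&v, __ATOMIC_RELAXED);
>     /* On failure (spurious or real), 'old' is refreshed with the
>      * current value of v, so the loop simply retries with fresh data.
>      */
>     while (!rte_atomic_compare_exchange_weak(&v, &old, old + 1,
>             __ATOMIC_ACQUIRE, __ATOMIC_RELAXED))
>         ;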
> 
> >
> > > + *
> > > + * If DES is written into *PTR then true is returned and memory is
> > > + * affected according to the memory order specified by SUC_MO.
> > > + * There are no restrictions on what memory order can be used here.
> > > + *
> > > + * Otherwise, false is returned and memory is affected according to
> > > + * FAIL_MO. This memory order cannot be memory_order_release nor
> > > + * memory_order_acq_rel. It also cannot be a stronger order than
> > > + * that specified by SUC_MO.
> > > + */
> > > +#define rte_atomic_compare_exchange_weak(PTR, EXP, DES, SUC_MO, FAIL_MO) \
> > > +(__extension__ ({ \
> > > +	typeof(PTR) _ptr = (PTR); \
> > > +	typeof(*_ptr) _des = (DES); \
> > > +	__atomic_compare_exchange(_ptr, (EXP), &_des, 1, \
> > > +		(SUC_MO), (FAIL_MO)); \
> > > +}))
> > > +
> > > +#define rte_atomic_compare_exchange_strong(PTR, EXP, DES, SUC_MO, FAIL_MO) \
> > > +(__extension__ ({ \
> > > +	typeof(PTR) _ptr = (PTR); \
> > > +	typeof(*_ptr) _des = (DES); \
> > > +	__atomic_compare_exchange(_ptr, (EXP), &_des, 0, \
> > > +		(SUC_MO), (FAIL_MO)); \
> > > +}))
> > > +
> > > +#define rte_atomic_fetch_add(PTR, VAL, MO)\
> > > +	__atomic_fetch_add((PTR), (VAL), (MO))
> > > +#define rte_atomic_fetch_sub(PTR, VAL, MO)\
> > > +	__atomic_fetch_sub((PTR), (VAL), (MO))
> > > +#define rte_atomic_fetch_or(PTR, VAL, MO)\
> > > +	__atomic_fetch_or((PTR), (VAL), (MO))
> > > +#define rte_atomic_fetch_xor(PTR, VAL, MO)\
> > > +	__atomic_fetch_xor((PTR), (VAL), (MO))
> > > +#define rte_atomic_fetch_and(PTR, VAL, MO)\
> > > +	__atomic_fetch_and((PTR), (VAL), (MO))
> > > +
> > > +#define rte_atomic_add_fetch(PTR, VAL, MO)\
> > > +	__atomic_add_fetch((PTR), (VAL), (MO))
> > > +#define rte_atomic_sub_fetch(PTR, VAL, MO)\
> > > +	__atomic_sub_fetch((PTR), (VAL), (MO))
> > > +#define rte_atomic_or_fetch(PTR, VAL, MO)\
> > > +	__atomic_or_fetch((PTR), (VAL), (MO))
> > > +#define rte_atomic_xor_fetch(PTR, VAL, MO)\
> > > +	__atomic_xor_fetch((PTR), (VAL), (MO))
> > > +#define rte_atomic_and_fetch(PTR, VAL, MO)\
> > > +	__atomic_and_fetch((PTR), (VAL), (MO))
> > > +
> > > +/* Synchronization fence between threads based on
> > > + * the specified memory order.
> > > + */
> > > +#define rte_atomic_thread_fence(MO) __atomic_thread_fence((MO))
> > > +
> > > +#endif /* _RTE_ATOMIC_C11_H_ */
> > > diff --git a/lib/librte_eal/include/meson.build
> > > b/lib/librte_eal/include/meson.build
> > > index bc73ec2..dac1aac 100644
> > > --- a/lib/librte_eal/include/meson.build
> > > +++ b/lib/librte_eal/include/meson.build
> > > @@ -51,6 +51,7 @@ headers += files(
> > >  # special case install the generic headers, since they go in a subdir
> > >  generic_headers = files(
> > >  	'generic/rte_atomic.h',
> > > +	'generic/rte_atomic_c11.h',
> > >  	'generic/rte_byteorder.h',
> > >  	'generic/rte_cpuflags.h',
> > >  	'generic/rte_cycles.h',
> > > --
> > > 2.7.4
> > >
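> >
> > As a usage illustration of the fetch-op and fence wrappers (a sketch
> > against this patch only; refcnt, ref_get and ref_put are invented
> > names, not anything in DPDK):
> >
> >     #include <stdint.h>
> >     #include <stdbool.h>
> >
> >     static uint32_t refcnt;
> >
> >     static void ref_get(void)
> >     {
> >         /* relaxed ordering is sufficient for taking a reference */
> >         rte_atomic_fetch_add(&refcnt, 1, __ATOMIC_RELAXED);
> >     }
> >
> >     static bool ref_put(void)
> >     {
> >         /* release on drop; acquire fence before the caller frees */
> >         if (rte_atomic_sub_fetch(&refcnt, 1, __ATOMIC_RELEASE) == 0) {
> >             rte_atomic_thread_fence(__ATOMIC_ACQUIRE);
> >             return true; /* last reference gone */
> >         }
> >         return false;
> >     }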
> >
> > Thumbs up for the good function documentation. :-)
> 
> Thank you for your comments.
> 
> Thanks,
> Phil
> 
> >
> >
> > Med venlig hilsen / kind regards
> > - Morten Brørup
> >
> >
> 


