[dpdk-dev] [RFC ][PATCH] Introduce RTE_ARCH_STRONGLY_ORDERED_MEM_OPS configuration parameter

Ananyev, Konstantin konstantin.ananyev at intel.com
Tue Nov 3 18:12:21 CET 2015



> -----Original Message-----
> From: Jerin Jacob [mailto:jerin.jacob at caviumnetworks.com]
> Sent: Tuesday, November 03, 2015 4:53 PM
> To: Ananyev, Konstantin
> Cc: dev at dpdk.org
> Subject: Re: [dpdk-dev] [RFC ][PATCH] Introduce RTE_ARCH_STRONGLY_ORDERED_MEM_OPS configuration parameter
> 
> On Tue, Nov 03, 2015 at 04:28:00PM +0000, Ananyev, Konstantin wrote:
> >
> >
> > > -----Original Message-----
> > > From: Jerin Jacob [mailto:jerin.jacob at caviumnetworks.com]
> > > Sent: Tuesday, November 03, 2015 4:19 PM
> > > To: Ananyev, Konstantin
> > > Cc: dev at dpdk.org
> > > Subject: Re: [dpdk-dev] [RFC ][PATCH] Introduce RTE_ARCH_STRONGLY_ORDERED_MEM_OPS configuration parameter
> > >
> > > On Tue, Nov 03, 2015 at 03:57:24PM +0000, Ananyev, Konstantin wrote:
> > > >
> > > >
> > > > > -----Original Message-----
> > > > > From: dev [mailto:dev-bounces at dpdk.org] On Behalf Of Jerin Jacob
> > > > > Sent: Tuesday, November 03, 2015 3:52 PM
> > > > > To: dev at dpdk.org
> > > > > Subject: [dpdk-dev] [RFC ][PATCH] Introduce RTE_ARCH_STRONGLY_ORDERED_MEM_OPS configuration parameter
> > > > >
> > > > > The rte_ring implementation needs explicit memory barriers
> > > > > on weakly ordered architectures like ARM, unlike
> > > > > strongly ordered architectures like x86.
> > > > >
> > > > > Introduce the RTE_ARCH_STRONGLY_ORDERED_MEM_OPS
> > > > > configuration option to abstract this dependency so that other
> > > > > weakly ordered architectures can reuse this infrastructure.
> > > >
> > > > Looks a bit clumsy.
> > > > Please try to follow this suggestion instead:
> > > > http://dpdk.org/ml/archives/dev/2015-October/025505.html
> > >
> > > Makes sense. Can we agree on a macro, defined based upon
> > > RTE_ARCH_STRONGLY_ORDERED_MEM_OPS, to avoid clumsy #ifdefs everywhere?
> >
> > Why do we need that macro at all?
> > Why just not have architecture specific macro as was discussed in that thread?
> >
> > So for intel somewhere inside
> > lib/librte_eal/common/include/arch/x86/rte_atomic.h
> >
> > it would be:
> >
> > #define rte_smp_wmb()	rte_compiler_barrier()
> >
> > For arm inside lib/librte_eal/common/include/arch/arm/rte_atomic.h
> >
> > #define rte_smp_wmb()	rte_wmb()
> 
> I am not sure about other architectures, but on armv8 device memory
> (typically mapped through the NIC PCIe BAR space) is strongly ordered.
> So there is one more dimension to the equation (normal memory vs.
> device memory).
> IMO the rte_smp_wmb() -> rte_wmb() mapping may not be correct for
> dealing with device memory on arm64?

I thought we were talking about the multi-processor case now, no?
For that there would be the rte_smp_... set of macros,
similar to what the Linux guys have.
Konstantin

> 
> Thoughts ?

> 
> >
> > And so on.
> >
> > I think it was already an attempt (not finished) to do similar stuff for ppc:
> > http://dpdk.org/dev/patchwork/patch/5884/
> >
> > Konstantin
> >
> > >
> > > Jerin
> > >
> > > >
> > > > Konstantin
> > > >
> > > > >
> > > > > Signed-off-by: Jerin Jacob <jerin.jacob at caviumnetworks.com>
> > > > > ---
> > > > >  config/common_bsdapp                         |  5 +++++
> > > > >  config/common_linuxapp                       |  5 +++++
> > > > >  config/defconfig_arm64-armv8a-linuxapp-gcc   |  1 +
> > > > >  config/defconfig_arm64-thunderx-linuxapp-gcc |  1 +
> > > > >  lib/librte_ring/rte_ring.h                   | 20 ++++++++++++++++++++
> > > > >  5 files changed, 32 insertions(+)
> > > > >
> > > > > diff --git a/config/common_bsdapp b/config/common_bsdapp
> > > > > index b37dcf4..c8d1f63 100644
> > > > > --- a/config/common_bsdapp
> > > > > +++ b/config/common_bsdapp
> > > > > @@ -79,6 +79,11 @@ CONFIG_RTE_FORCE_INTRINSICS=n
> > > > >  CONFIG_RTE_ARCH_STRICT_ALIGN=n
> > > > >
> > > > >  #
> > > > > +# Machine has strongly-ordered memory operations on normal memory like x86
> > > > > +#
> > > > > +CONFIG_RTE_ARCH_STRONGLY_ORDERED_MEM_OPS=y
> > > > > +
> > > > > +#
> > > > >  # Compile to share library
> > > > >  #
> > > > >  CONFIG_RTE_BUILD_SHARED_LIB=n
> > > > > diff --git a/config/common_linuxapp b/config/common_linuxapp
> > > > > index 0de43d5..d040a74 100644
> > > > > --- a/config/common_linuxapp
> > > > > +++ b/config/common_linuxapp
> > > > > @@ -79,6 +79,11 @@ CONFIG_RTE_FORCE_INTRINSICS=n
> > > > >  CONFIG_RTE_ARCH_STRICT_ALIGN=n
> > > > >
> > > > >  #
> > > > > +# Machine has strongly-ordered memory operations on normal memory like x86
> > > > > +#
> > > > > +CONFIG_RTE_ARCH_STRONGLY_ORDERED_MEM_OPS=y
> > > > > +
> > > > > +#
> > > > >  # Compile to share library
> > > > >  #
> > > > >  CONFIG_RTE_BUILD_SHARED_LIB=n
> > > > > diff --git a/config/defconfig_arm64-armv8a-linuxapp-gcc b/config/defconfig_arm64-armv8a-linuxapp-gcc
> > > > > index 6ea38a5..5289152 100644
> > > > > --- a/config/defconfig_arm64-armv8a-linuxapp-gcc
> > > > > +++ b/config/defconfig_arm64-armv8a-linuxapp-gcc
> > > > > @@ -37,6 +37,7 @@ CONFIG_RTE_ARCH="arm64"
> > > > >  CONFIG_RTE_ARCH_ARM64=y
> > > > >  CONFIG_RTE_ARCH_64=y
> > > > >  CONFIG_RTE_ARCH_ARM_NEON=y
> > > > > +CONFIG_RTE_ARCH_STRONGLY_ORDERED_MEM_OPS=n
> > > > >
> > > > >  CONFIG_RTE_FORCE_INTRINSICS=y
> > > > >
> > > > > diff --git a/config/defconfig_arm64-thunderx-linuxapp-gcc b/config/defconfig_arm64-thunderx-linuxapp-gcc
> > > > > index e8fccc7..79fa9e6 100644
> > > > > --- a/config/defconfig_arm64-thunderx-linuxapp-gcc
> > > > > +++ b/config/defconfig_arm64-thunderx-linuxapp-gcc
> > > > > @@ -37,6 +37,7 @@ CONFIG_RTE_ARCH="arm64"
> > > > >  CONFIG_RTE_ARCH_ARM64=y
> > > > >  CONFIG_RTE_ARCH_64=y
> > > > >  CONFIG_RTE_ARCH_ARM_NEON=y
> > > > > +CONFIG_RTE_ARCH_STRONGLY_ORDERED_MEM_OPS=n
> > > > >
> > > > >  CONFIG_RTE_FORCE_INTRINSICS=y
> > > > >
> > > > > diff --git a/lib/librte_ring/rte_ring.h b/lib/librte_ring/rte_ring.h
> > > > > index af68888..1ccd186 100644
> > > > > --- a/lib/librte_ring/rte_ring.h
> > > > > +++ b/lib/librte_ring/rte_ring.h
> > > > > @@ -457,7 +457,12 @@ __rte_ring_mp_do_enqueue(struct rte_ring *r, void * const *obj_table,
> > > > >
> > > > >  	/* write entries in ring */
> > > > >  	ENQUEUE_PTRS();
> > > > > +
> > > > > +#ifdef RTE_ARCH_STRONGLY_ORDERED_MEM_OPS
> > > > >  	rte_compiler_barrier();
> > > > > +#else
> > > > > +	rte_wmb();
> > > > > +#endif
> > > > >
> > > > >  	/* if we exceed the watermark */
> > > > >  	if (unlikely(((mask + 1) - free_entries + n) > r->prod.watermark)) {
> > > > > @@ -552,7 +557,12 @@ __rte_ring_sp_do_enqueue(struct rte_ring *r, void * const *obj_table,
> > > > >
> > > > >  	/* write entries in ring */
> > > > >  	ENQUEUE_PTRS();
> > > > > +
> > > > > +#ifdef RTE_ARCH_STRONGLY_ORDERED_MEM_OPS
> > > > >  	rte_compiler_barrier();
> > > > > +#else
> > > > > +	rte_wmb();
> > > > > +#endif
> > > > >
> > > > >  	/* if we exceed the watermark */
> > > > >  	if (unlikely(((mask + 1) - free_entries + n) > r->prod.watermark)) {
> > > > > @@ -643,7 +653,12 @@ __rte_ring_mc_do_dequeue(struct rte_ring *r, void **obj_table,
> > > > >
> > > > >  	/* copy in table */
> > > > >  	DEQUEUE_PTRS();
> > > > > +
> > > > > +#ifdef RTE_ARCH_STRONGLY_ORDERED_MEM_OPS
> > > > >  	rte_compiler_barrier();
> > > > > +#else
> > > > > +	rte_rmb();
> > > > > +#endif
> > > > >
> > > > >  	/*
> > > > >  	 * If there are other dequeues in progress that preceded us,
> > > > > @@ -727,7 +742,12 @@ __rte_ring_sc_do_dequeue(struct rte_ring *r, void **obj_table,
> > > > >
> > > > >  	/* copy in table */
> > > > >  	DEQUEUE_PTRS();
> > > > > +
> > > > > +#ifdef RTE_ARCH_STRONGLY_ORDERED_MEM_OPS
> > > > >  	rte_compiler_barrier();
> > > > > +#else
> > > > > +	rte_rmb();
> > > > > +#endif
> > > > >
> > > > >  	__RING_STAT_ADD(r, deq_success, n);
> > > > >  	r->cons.tail = cons_next;
> > > > > --
> > > > > 2.1.0
> > > >