[dpdk-dev] [RFC v5] /net: memory interface (memif)

Honnappa Nagarahalli Honnappa.Nagarahalli at arm.com
Tue May 7 13:29:17 CEST 2019


> >
> >> On 3/22/2019 11:57 AM, Jakub Grajciar wrote:
> >>> Memory interface (memif) provides high performance packet transfer
> >>> over shared memory.
> >>>
> >>> Signed-off-by: Jakub Grajciar <jgrajcia at cisco.com>
> >>
> >
> > <...>
> >
> >>    With that in mind, I believe that 23 Mpps is fine performance. No
> >> performance target is
> >>    defined; the goal is to be as fast as possible.
> > Use of C11 atomics has proven to provide better performance on weakly
> > ordered architectures (at least on Arm). IMO, C11 atomics should be used to
> > implement the fast path functions at least. This ensures optimal performance
> > on all supported architectures in DPDK.
> >
> >    Atomics are not required by the memif driver.
> 
> Correct, the only thing we need is a store barrier once per batch of packets,
> to make sure that descriptor changes are globally visible before we bump the
> head pointer.
Maybe I was not clear in my comments; I meant that the use of the GCC C++11 memory model aware atomic operations [1] shows better performance. So, instead of using full memory barriers, you can use store-release and load-acquire semantics. A similar change was done to the svm_fifo data structure in VPP [2] (though the original algorithm used there was different from the one used in this memif patch). A small illustrative sketch follows the links below.

[1] https://gcc.gnu.org/onlinedocs/gcc/_005f_005fatomic-Builtins.html
[2] https://gerrit.fd.io/r/#/c/18223/
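
For illustration only, here is a minimal sketch of what this could look like
using the __atomic builtins from [1]. The ring layout and names below are
hypothetical (not the actual memif descriptor format); the point is that the
per-batch store barrier before the head update can be replaced by a
store-release on the producer side, paired with a load-acquire on the
consumer side:

  #include <stdint.h>

  /* Hypothetical single-producer/single-consumer ring. */
  struct ring {
      uint16_t head;   /* written by producer, read by consumer */
      uint16_t tail;   /* written by consumer, read by producer */
      struct { uint32_t len; uint32_t offset; } desc[256];
  };

  /* Producer: publish a batch of descriptors. The release store on 'head'
   * guarantees the descriptor writes done before it are visible before the
   * new head value, without a full barrier such as rte_smp_wmb(). */
  static inline void
  ring_publish(struct ring *r, uint16_t new_head)
  {
      __atomic_store_n(&r->head, new_head, __ATOMIC_RELEASE);
  }

  /* Consumer: the acquire load on 'head' pairs with the release store, so
   * any descriptor read after it sees the producer's writes. */
  static inline uint16_t
  ring_head(struct ring *r)
  {
      return __atomic_load_n(&r->head, __ATOMIC_ACQUIRE);
  }

On strongly ordered architectures (e.g. x86) these compile to plain
loads/stores with only compiler ordering, so there should be no penalty
there, while weakly ordered architectures can use the lighter release/acquire
instructions instead of a full barrier.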

> 
> --
> Damjan

