[dpdk-dev] [RFC PATCH 20.02] mbuf: hint PMD not to inline packet

Shahaf Shuler shahafs at mellanox.com
Thu Oct 17 12:59:59 CEST 2019


Thursday, October 17, 2019 11:17 AM, Jerin Jacob:
> Subject: Re: [dpdk-dev] [RFC PATCH 20.02] mbuf: hint PMD not to inline
> packet
> 
> On Thu, Oct 17, 2019 at 12:57 PM Shahaf Shuler <shahafs at mellanox.com>
> wrote:
> >
> > Some PMDs inline the mbuf data buffer directly to the device. This is
> > done to save the overhead of the PCI headers involved when the device
> > DMA reads the buffer pointer. For some devices it is essential in order
> > to reach the peak BW.
> >
> > However, there are cases where such inlining is inefficient. For
> > example, when the data buffer resides on another device's memory (like
> > a GPU or storage device), an attempt to inline such a buffer will
> > result in high PCI overhead for reading and copying the data from the
> > remote device.
> 
> Some questions to understand the use case:
> # Is this a use case where the CPU, local DRAM, NW card and GPU memory are
> connected on a coherent bus?

Yes. For example, one can allocate GPU memory, map it to the GPU BAR, and make it accessible from the host CPU through LD/ST. 

> # Assuming the CPU needs to touch the buffer prior to Tx, will it be useful
> in that case?

If the CPU needs to modify the data then no, it will be more efficient to copy the data to the CPU and then send it.
However, there are use cases where the data is DMAed with zero copy to the GPU (for example), the GPU performs the processing on the data, and then the CPU sends the mbuf (w/o touching the data). 
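
A minimal sketch of that TX side, assuming the mbuf's data buffer already resides in GPU (or other device) memory and using the PKT_TX_DONT_INLINE_HINT flag proposed in this RFC (port_id/queue_id are placeholders):

#include <rte_ethdev.h>
#include <rte_mbuf.h>

/* send an mbuf whose payload lives in remote device memory */
static uint16_t
send_remote_buf(uint16_t port_id, uint16_t queue_id, struct rte_mbuf *m)
{
        /* hint the PMD not to inline the payload; best effort only */
        m->ol_flags |= PKT_TX_DONT_INLINE_HINT;

        /* the CPU builds only the descriptor, it never touches m's data */
        return rte_eth_tx_burst(port_id, queue_id, &m, 1);
}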

> # How does the application know that the data buffer is in GPU memory, in
> order to use this flag efficiently?

Because the application made it happen. For example, it attached the mbuf external buffer from the other device's memory. 
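
For illustration only, a sketch of how the application can "make it happen" with the existing external buffer API; gpu_va/gpu_iova are placeholders for a device buffer that is CPU-accessible (e.g. mapped through the GPU BAR):

#include <rte_common.h>
#include <rte_mbuf.h>

static void
ext_buf_free_cb(void *addr, void *opaque)
{
        /* the device buffer lifetime is owned by the application */
        RTE_SET_USED(addr);
        RTE_SET_USED(opaque);
}

static struct rte_mbuf *
wrap_device_buf(struct rte_mempool *mp, void *gpu_va, rte_iova_t gpu_iova,
                uint16_t buf_total_len)
{
        struct rte_mbuf_ext_shared_info *shinfo;
        struct rte_mbuf *m = rte_pktmbuf_alloc(mp);
        uint16_t buf_len = buf_total_len;

        if (m == NULL)
                return NULL;
        /* shinfo is placed at the tail of the external buffer */
        shinfo = rte_pktmbuf_ext_shinfo_init_helper(gpu_va, &buf_len,
                                                    ext_buf_free_cb, NULL);
        if (shinfo == NULL) {
                rte_pktmbuf_free(m);
                return NULL;
        }
        /* the application attached the buffer, so it knows it is remote */
        rte_pktmbuf_attach_extbuf(m, gpu_va, gpu_iova, buf_len, shinfo);
        rte_pktmbuf_append(m, buf_len);
        return m;
}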

> # Just a random thought: does it help if we create two different mempools,
> one from local DRAM and one from GPU memory, so that the application can
> work transparently?

But you will still need to teach the PMD which pools it can inline from and which it cannot. 
IMO it is more generic to have it per mbuf. Moreover, the application has this info. 
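
To make the per-mbuf granularity concrete, a purely hypothetical TX-path check (not taken from any existing PMD; txq_ctx/inline_max are made-up names for whatever per-queue inline threshold a PMD keeps):

#include <stdbool.h>
#include <rte_mbuf.h>

/* hypothetical per-queue context, reduced to the one field the sketch needs */
struct txq_ctx {
        uint16_t inline_max; /* largest payload the PMD would inline */
};

static inline bool
should_inline(const struct txq_ctx *txq, const struct rte_mbuf *m)
{
        /* per-mbuf hint: the application asked for DMA by pointer */
        if (m->ol_flags & PKT_TX_DONT_INLINE_HINT)
                return false;
        return m->data_len <= txq->inline_max;
}

With a per-mempool scheme the same decision would require mapping m->pool to a "do not inline" property, while the per-mbuf flag keeps the knowledge where it already exists, in the application.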

> 
> >
> > To support a mixed traffic pattern (some buffers from local DRAM, some
> > buffers from other devices) with high BW, a hint flag is introduced in
> > the mbuf.
> > The application will hint the PMD whether or not it should try to inline
> > the given mbuf data buffer. The PMD should do its best effort to act
> > upon this request.
> >
> > Signed-off-by: Shahaf Shuler <shahafs at mellanox.com>
> > ---
> >  lib/librte_mbuf/rte_mbuf.h | 9 +++++++++
> >  1 file changed, 9 insertions(+)
> >
> > diff --git a/lib/librte_mbuf/rte_mbuf.h b/lib/librte_mbuf/rte_mbuf.h
> > index 98225ec80b..5934532b7f 100644
> > --- a/lib/librte_mbuf/rte_mbuf.h
> > +++ b/lib/librte_mbuf/rte_mbuf.h
> > @@ -203,6 +203,15 @@ extern "C" {
> >  /* add new TX flags here */
> >
> >  /**
> > + * Hint to PMD to not inline the mbuf data buffer to device
> > + * rather let the device use its DMA engine to fetch the data with the
> > + * provided pointer.
> > + *
> > + * This flag is only a hint. PMD should enforce it as best effort.
> > + */
> > +#define PKT_TX_DONT_INLINE_HINT (1ULL << 39)
> > +
> > +/**
> >   * Indicate that the metadata field in the mbuf is in use.
> >   */
> >  #define PKT_TX_METADATA        (1ULL << 40)
> > --
> > 2.12.0
> >

