[dpdk-dev] [RFC PATCH 20.02] mbuf: hint PMD not to inline packet

Shahaf Shuler shahafs at mellanox.com
Tue Oct 22 08:29:44 CEST 2019


Thursday, October 17, 2019 6:15 PM, Stephen Hemminger:
> Subject: Re: [dpdk-dev] [RFC PATCH 20.02] mbuf: hint PMD not to inline
> packet
> 
> On Thu, 17 Oct 2019 07:27:34 +0000
> Shahaf Shuler <shahafs at mellanox.com> wrote:
> 
> > Some PMDs inline the mbuf data buffer directly to the device. This is
> > done to save the overhead of the PCI headers involved when the device
> > DMA-reads the buffer through its pointer. For some devices it is
> > essential in order to reach peak BW.
> >
> > However, there are cases where such inlining is inefficient. For
> > example, when the data buffer resides in the memory of another device
> > (like a GPU or a storage device), attempting to inline such a buffer
> > results in high PCI overhead for reading and copying the data from the
> > remote device.
> >
> > To support a mixed traffic pattern (some buffers from local DRAM, some
> > buffers from other devices) at high BW, a hint flag is introduced in
> > the mbuf.
> > The application hints the PMD whether or not it should try to inline
> > the given mbuf data buffer. The PMD should make a best effort to act
> > upon this request.
> >
> > Signed-off-by: Shahaf Shuler <shahafs at mellanox.com>
> 
> This kind of optimization is hard, and pushing the problem to the application
> to decide seems like the wrong step.

See my comments to Jerin on the other thread. This optimization is for custom applications that do unique acceleration using look-aside accelerators for compute while utilizing network-device zero copy.
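
To make the intended usage concrete, below is a minimal sketch of how such an application could express the hint through the mbuf dynamic-flag mechanism. The flag name and registration scheme here are only illustrative assumptions; the RFC does not define either of them.

#include <stdint.h>

#include <rte_errno.h>
#include <rte_mbuf.h>
#include <rte_mbuf_dyn.h>

/* Illustrative only: the RFC does not define how the hint is
 * registered or what it is called. */
static const struct rte_mbuf_dynflag no_inline_desc = {
	.name = "example_dynflag_no_inline",
};

static uint64_t no_inline_mask; /* bit mask derived from the registered bit */

static int
no_inline_hint_init(void)
{
	int bit = rte_mbuf_dynflag_register(&no_inline_desc);

	if (bit < 0)
		return -rte_errno;
	no_inline_mask = UINT64_C(1) << bit;
	return 0;
}

/* Called by the application for mbufs whose data buffer resides in
 * remote device memory (e.g. GPU): ask the PMD not to inline it. */
static void
no_inline_hint_set(struct rte_mbuf *m)
{
	m->ol_flags |= no_inline_mask;
}

The application would set the hint only on mbufs pointing at remote device memory before calling rte_eth_tx_burst(); mbufs backed by local DRAM are left untouched so the PMD can keep inlining them.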

> Can the driver just infer this already
> because some mbufs are external?

Having an mbuf with an external buffer does not necessarily mean that the buffer is located on another PCI device.
Making optimizations based on such a heuristic may lead to unexpected behavior.
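
As a counter-example, an application can attach an external buffer that lives in ordinary host DRAM; the mbuf then reports RTE_MBUF_HAS_EXTBUF() even though inlining its data would be perfectly fine. A short sketch (error handling trimmed, buffer size arbitrary):

#include <errno.h>

#include <rte_common.h>
#include <rte_malloc.h>
#include <rte_mbuf.h>

/* Free callback invoked when the last mbuf referencing the buffer is freed. */
static void
host_extbuf_free_cb(void *addr, void *opaque __rte_unused)
{
	rte_free(addr);
}

/* Attach an external buffer that lives in ordinary host DRAM. The mbuf
 * will report RTE_MBUF_HAS_EXTBUF(m), yet inlining its data is cheap,
 * so "external" alone is not a reliable signal for the PMD. */
static int
attach_host_dram_extbuf(struct rte_mbuf *m)
{
	uint16_t buf_len = 2048; /* arbitrary size for the example */
	struct rte_mbuf_ext_shared_info *shinfo;
	void *buf = rte_malloc(NULL, buf_len, RTE_CACHE_LINE_SIZE);

	if (buf == NULL)
		return -ENOMEM;
	/* The helper carves the shared info out of the buffer tail and
	 * shrinks buf_len accordingly. */
	shinfo = rte_pktmbuf_ext_shinfo_init_helper(buf, &buf_len,
						    host_extbuf_free_cb, NULL);
	if (shinfo == NULL) {
		rte_free(buf);
		return -ENOSPC;
	}
	rte_pktmbuf_attach_extbuf(m, buf, rte_malloc_virt2iova(buf),
				  buf_len, shinfo);
	return 0;
}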


