[dpdk-dev] dmadev discussion summary

Morten Brørup mb at smartsharesystems.com
Fri Jul 2 09:39:10 CEST 2021


> From: Bruce Richardson [mailto:bruce.richardson at intel.com]
> Sent: Thursday, 1 July 2021 18.34
> 
> On Thu, Jul 01, 2021 at 08:31:00PM +0530, Jerin Jacob wrote:
> > On Sat, Jun 26, 2021 at 9:29 AM fengchengwen
> > <fengchengwen at huawei.com> wrote:
> > >
> > > Hi, all
> > >   I analyzed the current DPDK DMA drivers and drew up this summary
> > >   in conjunction with the previous discussion; it will serve as a
> > >   basis for the V2 implementation.
> > >   Feedback is welcome, thanks
> >
> > Thanks for the write-up.
> >
> > >
> <snip>
> > >      [queue_setup]: create one virt-queue, with the following main
> > >      parameters:
> > >          HW-queue-index: the HW-queue index used
> > >          nb_desc: the number of HW descriptors
> > >          opaque: driver-specific info
> > >          Note1: this API returns the virt-queue index, which is used
> > >                 in later APIs. If the user wants to create multiple
> > >                 virt-queues on the same HW-queue, this can be
> > >                 achieved by calling queue_setup with the same
> > >                 HW-queue-index.
> > >          Note2: I think it's hard to define the queue_setup config
> > >                 parameter, and since this is a control API, I think
> > >                 it's OK to use an opaque pointer to implement it.
> > >       [dma_copy/memset/sg]: all have a vq_id input parameter.
> > >          Note: I notice dpaa can't support single and sg in one
> > >                virt-queue, and I think this is maybe a software
> > >                implementation policy rather than a HW restriction,
> > >                because virt-queues could share the same HW-queue.
> > >       Here we use vq_id to handle different scenarios, like
> > >       local-to-local/local-to-host, etc.
> >
> > IMO, the index representation has an additional overhead, as one
> > needs to translate it to a memory pointer. I prefer to avoid that by
> > having an object handle, and to use a _lookup() API to make it work
> > in multi-process cases, avoiding the additional indirection. Like a
> > mempool object.
> 
> While it doesn't matter much to me which is chosen, an index seems
> cleaner to me and more consistent with other device types in DPDK.
> Can't the objects pointed to by the memory pointers you refer to just
> be stored in an internal array in your driver and accessed directly
> by index, saving that layer of indirection?

The rte_eth_rx/tx_burst() functions use the parameter "uint16_t port_id" to identify the device.

rte_eth_rx/tx_burst() needs to look up the rte_eth_dev pointer in the rte_eth_devices array; this lookup could be avoided, and the functions would be faster, if they took a "struct rte_eth_dev *dev" instead of a "uint16_t port_id".
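
For reference, the indirection looks roughly like this (a simplified paraphrase of the ethdev fast path, not the exact DPDK source):

    static inline uint16_t
    rte_eth_rx_burst(uint16_t port_id, uint16_t queue_id,
                     struct rte_mbuf **rx_pkts, const uint16_t nb_pkts)
    {
        /* index -> device structure: the lookup that a pointer
         * parameter would make unnecessary */
        struct rte_eth_dev *dev = &rte_eth_devices[port_id];

        return (*dev->rx_pkt_burst)(dev->data->rx_queues[queue_id],
                                    rx_pkts, nb_pkts);
    }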

However, in the case of rte_eth_rx/tx_burst(), the port_id changes rapidly at runtime, and often comes from the mbuf or similar, where it is preferable to store a 16-bit value rather than a 64-bit pointer.

I think that the choice of DMA device (and virt-queue) for DMA fast path operations will be much more constant than the choice of port_id (and queue_id) in Ethdev fast path operations. If you agree with this, we should use "struct rte_dma_dev *dev" rather than "uint16_t dev_id" as parameter to the DMA fast path functions.

Going even further, why do we need to pass both dev_id and virt_queue_id to the DMA fast path functions, instead of just passing a pointer to the virt-queue? That pointer could be returned by the rte_dma_queue_setup() function. And it could be opaque or a well defined structure. Or even more exotic: It could be a structure where the first part is common and well defined, and the rest of the structure is driver specific.
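
As a sketch of that last idea (every structure and function name below is hypothetical, nothing of this exists yet):

    /* Hypothetical queue handle: a common, well defined first part,
     * followed by driver specific data. */
    struct rte_dma_queue {
        uint16_t dev_id;    /* common, well defined part */
        uint16_t hw_queue;
        /* driver specific fields would follow here */
    };

    /* Control path: setup returns the handle instead of an index. */
    struct rte_dma_queue *q = rte_dma_queue_setup(dev, &conf);
    if (q == NULL)
        return; /* virt-queue creation failed */

    /* Fast path: the handle is passed directly, no index lookup. */
    cookie = rte_dma_memcpy(q, src, dst, len, flags);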

> 
> > >   5) And the dmadev public data-plane API (just prototype):
> > >      dma_cookie_t rte_dmadev_memset(dev, vq_id, pattern, dst, len, flags)
> > >        -- flags: used as an extended parameter, it could be uint32_t
> > >      dma_cookie_t rte_dmadev_memcpy(dev, vq_id, src, dst, len, flags)
> > >      dma_cookie_t rte_dmadev_memcpy_sg(dev, vq_id, sg, sg_len, flags)
> > >        -- sg: struct dma_scatterlist array
> > >      uint16_t rte_dmadev_completed(dev, vq_id, dma_cookie_t *cookie,
> > >                                    uint16_t nb_cpls, bool *has_error)
> > >        -- nb_cpls: indicates the maximum number of operations to process
> > >        -- has_error: indicates whether an error occurred
> > >        -- return value: the number of successfully completed operations
> > >        -- examples:
> > >           1) If there are already 32 completed ops, the 4th has an error,
> > >              and nb_cpls is 32, then the return value will be 3 (because
> > >              the 1st/2nd/3rd are OK), and has_error will be true.
> > >           2) If there are already 32 completed ops and all completed
> > >              successfully, then the return value will be min(32, nb_cpls),
> > >              and has_error will be false.
> > >           3) If there are already 32 completed ops and all completed with
> > >              failure, then the return value will be 0, and has_error will
> > >              be true.
> >
> > +1. IMO, it is better to call it ring_idx instead of a cookie, to
> > enforce that it is the ring index.
> >
> +1, I like that name better too.

If it is "ring index" then it probably should be an uintXX_t type, and thus the functions cannot return <0 to indicate error.

By "ring index", do you actually mean "descriptor index" (for the DMA engine's descriptors)? Does the application need to process this value as anything but an opaque handle? Wouldn't an opaque pointer type provide performance? It would also allow using NULL to indicate error.

Do you expect applications to store the cookie in structures that are instantiated many times (e.g. like the port_id is stored in the mbuf structure), so there is an advantage to using a uint16_t instead of a pointer?
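
Just to make it concrete, a completion poll based on the prototype above could look like this (a sketch only; it assumes the cookie/ring_idx is a uint16_t that wraps, and the exact semantics are still open):

    uint16_t last_idx = 0;
    bool has_error = false;
    uint16_t nb_ok;

    /* Ask for up to 32 completions; assumption: last_idx is set to the
     * ring index of the last successfully completed operation. */
    nb_ok = rte_dmadev_completed(dev, vq_id, &last_idx, 32, &has_error);

    /* ... free the buffers belonging to the nb_ok completed copies ... */

    if (has_error) {
        /* Assumption: the operation following last_idx failed, so fetch
         * per-operation status through the slower status API. */
        uint32_t status[32];
        uint16_t nb_err = rte_dmadev_completed_status(dev, vq_id,
                                                      &last_idx, 32, status);
        /* handle the nb_err failed operations */
    }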

> 
> > >      uint16_t rte_dmadev_completed_status(dev_id, vq_id, dma_cookie_t *cookie,
> > >                                           uint16_t nb_status, uint32_t *status)
> > >        -- return value: the number of failed completed operations.
> >
> > See above. Here we are assuming it is an index; otherwise we need to
> > pass an array of cookies.
> >
> > >      And here I agree with Morten: we should design an API which
> > >      adapts to DPDK service scenarios. So we don't support
> > >      sound-card DMA, or 2D memory copy, which is mainly used in
> > >      video scenarios.
> > >   6) The dma_cookie_t is a signed int type; when <0 it means error.
> > >      It is monotonically increasing based on the HW-queue (rather
> > >      than the virt-queue). The driver needs to ensure this because
> > >      the dmadev framework doesn't manage the dma_cookie's creation.
> >
> > +1 and see above.
> >
> > >   7) Because the data-plane APIs are not thread-safe, and the user
> > >      determines the virt-queue to HW-queue mapping (at the
> > >      queue-setup stage), it is the user's duty to ensure thread
> > >      safety.
> >
> > +1. But I am not sure how easy it is for the fast-path application to
> > have this logic. Instead, I think it is better for the driver to
> > expose a capability for the queue, and in the channel configuration
> > the application can request its requirement (whether multiple
> > producers enqueue to the same HW queue or not). Based on the request,
> > the implementation can pick the correct function pointer for enqueue
> > (lock vs lockless version, if the HW does not support lockless).
> >
> 
> Non-thread safety is the norm in DPDK for all other queue resources;
> however, having multi-thread safety as a capability sounds reasonable.
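
As a sketch of what Jerin describes (all flag and function names below are made up for illustration):

    /* Application side: request multi-producer enqueue on this
     * virt-queue in the queue configuration (hypothetical flag). */
    conf.flags |= RTE_DMADEV_QUEUE_F_MT_ENQ;

    /* Driver side, in queue_setup: pick the enqueue function once, so
     * the fast path itself stays branch free (hypothetical names). */
    if ((conf.flags & RTE_DMADEV_QUEUE_F_MT_ENQ) &&
        !(hw_capa & RTE_DMADEV_CAPA_MT_VQ))
        dev->enqueue = xxx_dmadev_enqueue_locked;  /* spinlock wrapper */
    else
        dev->enqueue = xxx_dmadev_enqueue;         /* lockless */
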
> >
> > >   8) One example:
> > >        vq_id = rte_dmadev_queue_setup(dev, config.{HW-queue-index=x, opaque});
> > >        if (vq_id < 0) { /* create virt-queue failed */ return; }
> > >        /* submit memcpy task */
> > >        cookie = rte_dmadev_memcpy(dev, vq_id, src, dst, len, flags);
> > >        if (cookie < 0) { /* submit failed */ return; }
> > >        /* get completed task */
> > >        ret = rte_dmadev_completed(dev, vq_id, &cookie, 1, &has_error);
> > >        if (!has_error && ret == 1) { /* the memcpy completed successfully */ }
> >
> > +1
> >
> > >   9) As octeontx2_dma supports an sg-list which has many valid
> > >      buffers in dpi_dma_buf_ptr_s, it could call the
> > >      rte_dmadev_memcpy_sg API.
> >
> > +1
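
For illustration, the sg variant could be used like this (the element layout of struct dma_scatterlist is my assumption, it is not defined in the summary):

    /* Assumed layout, for illustration only. */
    struct dma_scatterlist {
        void *src;
        void *dst;
        uint32_t length;
    };

    struct dma_scatterlist sg[2] = {
        { .src = src0, .dst = dst0, .length = len0 },
        { .src = src1, .dst = dst1, .length = len1 },
    };
    cookie = rte_dmadev_memcpy_sg(dev, vq_id, sg, 2, flags);
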
> >
> > >   10) As for ioat, it could declare support for one HW-queue at the
> > >       dev_configure stage, and only support creating one virt-queue.
> > >   11) As for dpaa2_qdma, I think it could migrate to the new
> > >       framework, but we are still waiting for feedback from the
> > >       dpaa2_qdma guys.
> > >   12) About the prototype src/dst parameters of the rte_dmadev_memcpy
> > >       API, we have two candidates, which are iova and void *; how
> > >       about introducing a dma_addr_t type which could be va or iova?
> >
> > I think the conversion looks ugly; better to have void * and share
> > the constraints of void * as a limitation/capability using a flag,
> > so that the driver can update it.
> >
> I'm ok with either rte_iova_t or void * as parameter type. Let's not
> define a new type though,

+1

> and +1 to just using capabilities to define what kinds
> of addressing are supported by the device instances.

+1
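
For example, something along these lines (the capability flag is hypothetical; rte_mem_virt2iova() is the existing DPDK helper):

    if (dev_capa & RTE_DMADEV_CAPA_VA) {
        /* The device accepts virtual addresses directly. */
        cookie = rte_dmadev_memcpy(dev, vq_id, src, dst, len, flags);
    } else {
        /* The device only accepts IO addresses: translate first. */
        cookie = rte_dmadev_memcpy(dev, vq_id,
                                   (void *)(uintptr_t)rte_mem_virt2iova(src),
                                   (void *)(uintptr_t)rte_mem_virt2iova(dst),
                                   len, flags);
    }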

> 
> /Bruce

And regarding naming: Consider rte_dma_xx() instead of rte_dmadev_xx().

-Morten


