[dpdk-dev] [PATCH v19 1/7] dmadev: introduce DMA device library public APIs
Bruce Richardson
bruce.richardson at intel.com
Mon Sep 6 10:08:57 CEST 2021
On Mon, Sep 06, 2021 at 03:52:21PM +0800, fengchengwen wrote:
> I think we can add support for DIR_ANY.
> @Bruce @Jerin Would you please take a look at my proposal?
>
I don't have a strong opinion on this. However, isn't one of the reasons
we have virtual channels in the API, rather than HW channels, that this
info can be encoded in the virtual-channel setup? If a HW channel can
support multiple copy types simultaneously, I thought the original
design was to create a vchan on that HW channel for each copy type
needed?
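For illustration, here is a minimal standalone mock of that "one vchan per copy type" model. The names mirror the v19 patch (rte_dmadev_vchan_conf, rte_dmadev_vchan_setup), but this is a sketch for discussion, not the real dmadev library:

```c
#include <assert.h>
#include <stdint.h>

/* Mock of the v19 dmadev vchan model: each vchan has a fixed transfer
 * direction set at creation time; a HW channel that supports several
 * copy types is exposed as several vchans. Not the real library. */
enum rte_dma_direction {
	RTE_DMA_DIR_MEM_TO_MEM,
	RTE_DMA_DIR_MEM_TO_DEV,
	RTE_DMA_DIR_DEV_TO_MEM,
	RTE_DMA_DIR_DEV_TO_DEV,
};

struct rte_dmadev_vchan_conf {
	enum rte_dma_direction direction; /* fixed when the vchan is created */
	uint16_t nb_desc;                 /* descriptor ring depth */
};

#define MAX_VCHANS 4
static enum rte_dma_direction vchan_dir[MAX_VCHANS];
static uint16_t nb_vchans;

/* Mock vchan setup: records the direction and returns the new vchan
 * index, or -1 if no slot is left. */
static int
rte_dmadev_vchan_setup(uint16_t dev_id,
		       const struct rte_dmadev_vchan_conf *conf)
{
	(void)dev_id;
	if (nb_vchans >= MAX_VCHANS)
		return -1;
	vchan_dir[nb_vchans] = conf->direction;
	return nb_vchans++;
}
```

Each enqueue then targets the vchan whose direction matches the request, which is how a multi-direction HW channel is exposed without any per-operation direction flag.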
> On 2021/9/6 14:48, Gagandeep Singh wrote:
> >
> >
> >> -----Original Message-----
> >> From: fengchengwen <fengchengwen at huawei.com>
> >> Sent: Saturday, September 4, 2021 7:02 AM
> >> To: Gagandeep Singh <G.Singh at nxp.com>; thomas at monjalon.net;
> >> ferruh.yigit at intel.com; bruce.richardson at intel.com; jerinj at marvell.com;
> >> jerinjacobk at gmail.com; andrew.rybchenko at oktetlabs.ru
> >> Cc: dev at dpdk.org; mb at smartsharesystems.com; Nipun Gupta
> >> <nipun.gupta at nxp.com>; Hemant Agrawal <hemant.agrawal at nxp.com>;
> >> maxime.coquelin at redhat.com; honnappa.nagarahalli at arm.com;
> >> david.marchand at redhat.com; sburla at marvell.com; pkapoor at marvell.com;
> >> konstantin.ananyev at intel.com; conor.walsh at intel.com
> >> Subject: Re: [dpdk-dev] [PATCH v19 1/7] dmadev: introduce DMA device library
> >> public APIs
> >>
> >> On 2021/9/3 19:42, Gagandeep Singh wrote:
> >>> Hi,
> >>>
> >>> <snip>
> >>>> +
> >>>> +/**
> >>>> + * @warning
> >>>> + * @b EXPERIMENTAL: this API may change without prior notice.
> >>>> + *
> >>>> + * Close a DMA device.
> >>>> + *
> >>>> + * The device cannot be restarted after this call.
> >>>> + *
> >>>> + * @param dev_id
> >>>> + * The identifier of the device.
> >>>> + *
> >>>> + * @return
> >>>> + * 0 on success. Otherwise negative value is returned.
> >>>> + */
> >>>> +__rte_experimental
> >>>> +int
> >>>> +rte_dmadev_close(uint16_t dev_id);
> >>>> +
> >>>> +/**
> >>>> + * rte_dma_direction - DMA transfer direction defines.
> >>>> + */
> >>>> +enum rte_dma_direction {
> >>>> + RTE_DMA_DIR_MEM_TO_MEM,
> >>>> + /**< DMA transfer direction - from memory to memory.
> >>>> + *
> >>>> + * @see struct rte_dmadev_vchan_conf::direction
> >>>> + */
> >>>> + RTE_DMA_DIR_MEM_TO_DEV,
> >>>> + /**< DMA transfer direction - from memory to device.
> >>>> + * In a typical scenario, an SoC is installed in a host server as an
> >>>> + * iNIC through the PCIe interface. In this case, the SoC works in
> >>>> + * EP (endpoint) mode and can initiate a DMA move request from
> >>>> + * memory (i.e. SoC memory) to a device (i.e. host memory).
> >>>> + *
> >>>> + * @see struct rte_dmadev_vchan_conf::direction
> >>>> + */
> >>>> + RTE_DMA_DIR_DEV_TO_MEM,
> >>>> + /**< DMA transfer direction - from device to memory.
> >>>> + * In a typical scenario, an SoC is installed in a host server as an
> >>>> + * iNIC through the PCIe interface. In this case, the SoC works in
> >>>> + * EP (endpoint) mode and can initiate a DMA move request from a
> >>>> + * device (i.e. host memory) to memory (i.e. SoC memory).
> >>>> + *
> >>>> + * @see struct rte_dmadev_vchan_conf::direction
> >>>> + */
> >>>> + RTE_DMA_DIR_DEV_TO_DEV,
> >>>> + /**< DMA transfer direction - from device to device.
> >>>> + * In a typical scenario, an SoC is installed in a host server as an
> >>>> + * iNIC through the PCIe interface. In this case, the SoC works in
> >>>> + * EP (endpoint) mode and can initiate a DMA move request from one
> >>>> + * device (one host's memory) to another device (another host's
> >>>> + * memory).
> >>>> + *
> >>>> + * @see struct rte_dmadev_vchan_conf::direction
> >>>> + */
> >>>> +};
> >>>> +
> >>>> +/**
> >>>> ..
> >>> The enum rte_dma_direction must have a member RTE_DMA_DIR_ANY for a
> >> channel that supports all 4 directions.
> >>
> >> We've discussed this issue before. The earliest proposal was to set up
> >> channels that support multiple directions, but no hardware/driver
> >> actually used this (at least at that time); they (e.g. octeontx2_dma,
> >> dpaa) all set up one logical channel to serve a single transfer
> >> direction.
> >>
> >> So, do you have that kind of requirement for your driver?
> >>
> > Both DPAA1 and DPAA2 drivers can support ANY direction on a channel, so we would like to have this option as well.
> >
> >>
> >> If there is a strong need for this, we could consider the following:
> >>
> >> Once a channel is set up, there is no other parameter that indicates a
> >> copy request's transfer direction, so defining RTE_DMA_DIR_ANY alone is
> >> not enough.
> >>
> >> Maybe we could add RTE_DMA_OP_xxx macros
> >> (RTE_DMA_OP_FLAG_M2M/M2D/D2M/D2D) to be passed as the flags parameter
> >> to the enqueue API, so that it knows which transfer direction each
> >> request corresponds to.
> >>
> >> We can easily extend the existing framework as follows:
> >> a. define the capability RTE_DMADEV_CAPA_DIR_ANY, which devices that
> >>    support this feature can declare.
> >> b. define the direction macro RTE_DMA_DIR_ANY.
> >> c. define op flags RTE_DMA_OP_FLAG_DIR_M2M/M2D/D2M/D2D, which are
> >>    passed as the flags parameter to the enqueue API.
> >>
> >> A driver that doesn't support this feature simply doesn't declare it;
> >> the framework ensures that RTE_DMA_DIR_ANY is never passed down, and
> >> the driver can ignore the RTE_DMA_OP_FLAG_DIR_xxx flags in the
> >> enqueue API.
> >>
> >> A driver that does support it lets the application create a channel
> >> with either RTE_DMA_DIR_ANY or RTE_DMA_DIR_MEM_TO_MEM.
> >> If the channel is created with RTE_DMA_DIR_ANY, the driver must honor
> >> the RTE_DMA_OP_FLAG_DIR_xxx flag on each request.
> >> If it is created with RTE_DMA_DIR_MEM_TO_MEM, the
> >> RTE_DMA_OP_FLAG_DIR_xxx flags can be ignored.
> >>
> > Your design looks ok to me.
> >
> >>
> >>> <snip>
> >>>
> >>>
> >>> Regards,
> >>> Gagan
> >>>
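For concreteness, here is a sketch of how the RTE_DMA_DIR_ANY dispatch proposed in the quoted thread could work in a driver's enqueue path. All names and values are assumptions drawn from this discussion, not part of an accepted dmadev API:

```c
#include <assert.h>
#include <stdint.h>

/* Hypothetical sketch of the thread's proposal: a vchan created with
 * RTE_DMA_DIR_ANY takes each request's direction from an
 * RTE_DMA_OP_FLAG_DIR_xxx enqueue flag; a fixed-direction vchan
 * ignores the flags. Names/values are assumptions, not the real API. */
#define RTE_DMA_DIR_MEM_TO_MEM  0
#define RTE_DMA_DIR_MEM_TO_DEV  1
#define RTE_DMA_DIR_DEV_TO_MEM  2
#define RTE_DMA_DIR_DEV_TO_DEV  3
#define RTE_DMA_DIR_ANY         4  /* proposed extra vchan direction */

#define RTE_DMA_OP_FLAG_DIR_M2M (1ULL << 0)
#define RTE_DMA_OP_FLAG_DIR_M2D (1ULL << 1)
#define RTE_DMA_OP_FLAG_DIR_D2M (1ULL << 2)
#define RTE_DMA_OP_FLAG_DIR_D2D (1ULL << 3)

/* Resolve a request's direction at enqueue time. */
static int
resolve_direction(int vchan_dir, uint64_t flags)
{
	/* Fixed-direction vchan: the flags carry no direction info. */
	if (vchan_dir != RTE_DMA_DIR_ANY)
		return vchan_dir;
	if (flags & RTE_DMA_OP_FLAG_DIR_M2M)
		return RTE_DMA_DIR_MEM_TO_MEM;
	if (flags & RTE_DMA_OP_FLAG_DIR_M2D)
		return RTE_DMA_DIR_MEM_TO_DEV;
	if (flags & RTE_DMA_OP_FLAG_DIR_D2M)
		return RTE_DMA_DIR_DEV_TO_MEM;
	if (flags & RTE_DMA_OP_FLAG_DIR_D2D)
		return RTE_DMA_DIR_DEV_TO_DEV;
	return -1; /* DIR_ANY vchan requires a direction flag */
}
```

This keeps the fast path unchanged for drivers with fixed-direction vchans, which is the backwards-compatibility property the proposal relies on.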