[EXTERNAL] Re: [PATCH v0 1/1] doc: announce inter-device DMA capability support in dmadev
Vamsi Krishna Attunuru
vattunuru at marvell.com
Wed Jul 16 12:59:14 CEST 2025
>
>Thanks for the explanation.
>
>Let me tell you what I understand:
>1\ Two dmadev (must they belong to the same DMA controller?) are each passed
>through to a different domain (VM or container).
>2\ The kernel DMA controller driver could configure access groups (there is a
>secure mechanism, like Intel IDPTE), and the two dmadev could communicate if
>the kernel DMA controller driver has put them in the same access group.
>3\ The application sets up an access group and gets a handle (maybe the new
>'dev_idx' which you announce in this commit),
> then sets up one vchan which is configured with the handle,
> and later launches copy requests based on this vchan.
>4\ The driver will pass the request to dmadev-1 hardware, dmadev-1 hardware
>will do some verification,
> and maybe use the dmadev-2 stream ID for read/write operations?
>
>A few questions about this:
>1\ What is the prototype of 'dev_idx', is it uint16_t?
Yes, it can be uint16_t, and two different dev_idx values (src_dev_idx & dest_dev_idx)
would be used for read & write.
>2\ How to implement read/write between two dmadev? Use two different
>dev_idx, the first for read and the second for write?
Yes, two different dev_idx will be used.
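
For illustration only (the field names and their placement are not finalized),
the vchan configuration could carry both indices, roughly along these lines:

    /* Sketch only: src_dev_idx/dst_dev_idx are placeholder names. */
    struct rte_dma_vchan_conf {
            enum rte_dma_direction direction;  /* existing field */
            /* ... other existing fields (nb_desc, port params, ...) ... */
            uint16_t src_dev_idx;  /* dmadev the hardware reads from */
            uint16_t dst_dev_idx;  /* dmadev the hardware writes to */
    };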
>
>
>I also re-read the patchset "[PATCH v1 0/3] Add support for inter-domain
>DMA operations"; it introduces:
>1\ One 'int controller-id' in the rte_dma_info, which may be used in a
>vendor-specific secure mechanism.
>2\ Two new OP_flags and two new datapath APIs.
>The reason why that patchset didn't continue (I guess) is the question of
>whether to set up one new vchan. Yes, vchan was designed to represent
>different transfer contexts. But each vchan has its own enqueue/dequeue/ring,
>so it acts more like one logical dmadev; some hardware can fit this model
>well, some may not (like Intel in this case).
>
>So how about the following scheme:
>1\ Add inter-domain capability bits, for example:
>   RTE_DMA_CAPA_INTER_PROCESS_DOMAIN, RTE_DMA_CAPA_INTER_OS_DOMAIN
>2\ Add one domain_controller_id in the rte_dma_info which may be used in a
>   vendor-specific secure mechanism.
>3\ Add four OP_FLAGs:
>   RTE_DMA_OP_FLAG_SRC_INTER_PROCESS_DOMAIN_HANDLE,
>   RTE_DMA_OP_FLAG_DST_INTER_PROCESS_DOMAIN_HANDLE,
>   RTE_DMA_OP_FLAG_SRC_INTER_OS_DOMAIN_HANDLE,
>   RTE_DMA_OP_FLAG_DST_INTER_OS_DOMAIN_HANDLE
>4\ Reserve 32 bits from the flag parameter (which all enqueue APIs support)
>   as the src and dst handles, or reserve only 16 bits from the flag parameter
>   if we restrict it so that a 3rd transfer is not supported.
Yes, the above approach seems acceptable to me. I believe src & dst handles require
16-bit values. Reserving 32 bits from the flag parameter would leave 32 flags available,
which should be fine.
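
To make the handle packing concrete, a purely illustrative layout (the flag
bit positions and the helper below are placeholders, not a proposal) could be:

    #include <rte_bitops.h>
    #include <rte_dmadev.h>

    /* Sketch only: placeholder flag bits for the inter-OS-domain handles. */
    #define DMA_OP_FLAG_SRC_INTER_OS_DOMAIN_HANDLE RTE_BIT64(4)
    #define DMA_OP_FLAG_DST_INTER_OS_DOMAIN_HANDLE RTE_BIT64(5)

    /* Pack the two 16-bit handles into the upper 32 bits of 'flags',
     * leaving the lower 32 bits for ordinary OP flags. */
    static inline uint64_t
    dma_flags_with_handles(uint64_t flags, uint16_t src_handle,
                           uint16_t dst_handle)
    {
            return flags | ((uint64_t)src_handle << 32) |
                   ((uint64_t)dst_handle << 48);
    }

An enqueue on the existing rte_dma_copy() would then look like:

    int ret = rte_dma_copy(dev_id, vchan, src_iova, dst_iova, len,
                           dma_flags_with_handles(
                                   DMA_OP_FLAG_SRC_INTER_OS_DOMAIN_HANDLE |
                                   DMA_OP_FLAG_DST_INTER_OS_DOMAIN_HANDLE,
                                   src_handle, dst_handle));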
>
>Thanks
>
>On 2025/7/15 13:35, Vamsi Krishna Attunuru wrote:
>> Hi Feng,
>>
>> Thanks for depicting the feature use case.
>>
>> From the application’s perspective, inter-VM/process communication is
>required to exchange the src & dst buffer details; however, the specifics of this
>communication mechanism are outside the scope of this context. Regarding
>the address translations, these buffer addresses can be either IOVA as PA or
>IOVA as VA. The DMA hardware must use the appropriate IOMMU stream IDs
>when initiating the DMA transfers. For example, in the use case shown in the
>diagram, dmadev-1 and dmadev-2 would join an access group managed by
>the kernel DMA controller driver. This controller driver will configure the
>access group on the DMA hardware, enabling the hardware to select the
>correct stream IDs for read/write operations. New rte_dma APIs could be
>introduced to join or leave the access group or to query the access group
>details. Additionally, a secure token mechanism (similar to vfio-pci token) can
>be implemented to validate any dmadev attempting to join the access group.
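>>
>> To sketch the shape such APIs could take (the names and signatures below
>> are placeholders only, nothing is finalized):
>>
>>     /* Hypothetical sketch of the access-group APIs mentioned above;
>>      * rte_dma_access_group_info would describe the group members. */
>>     int rte_dma_access_group_join(int16_t dev_id, uint32_t group_id,
>>                                   const void *token, size_t token_len);
>>     int rte_dma_access_group_leave(int16_t dev_id, uint32_t group_id);
>>     int rte_dma_access_group_query(int16_t dev_id, uint32_t group_id,
>>                                    struct rte_dma_access_group_info *info);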
>>
>> Regards.
>>
>> From: fengchengwen <fengchengwen at huawei.com>
>> Sent: Tuesday, July 15, 2025 6:29 AM
>> To: Vamsi Krishna Attunuru <vattunuru at marvell.com>; dev at dpdk.org;
>> Pavan Nikhilesh Bhagavatula <pbhagavatula at marvell.com>;
>> kevin.laatz at intel.com; bruce.richardson at intel.com;
>> mb at smartsharesystems.com
>> Cc: Jerin Jacob <jerinj at marvell.com>; thomas at monjalon.net
>> Subject: [EXTERNAL] Re: [PATCH v0 1/1] doc: announce inter-device DMA
>> capability support in dmadev
>>
>> Hi Vamsi,
>>
>> From the commit log, I guess this commit mainly wants to address the
>> following case:
>>
>>   ---------------     ----------------
>>   |  Container  |     | VirtMachine  |
>>   |             |     |              |
>>   |  dmadev-1   |     |  dmadev-2    |
>>   ---------------     ----------------
>>          |                    |
>>          ----------------------
>>
>> An app running in the container could launch a DMA transfer from a local
>> buffer to the VirtMachine by configuring dmadev-1/2 (the dmadev-1/2 are
>> passed through to different OS domains).
>>
>> Could you explain how to use it from the application perspective (for
>> example, address translation) and the application & hardware restrictions?
>>
>> BTW: In this case there is communication between two OS domains, and I
>> remember there is also an inter-process DMA RFC, so maybe we could design
>> a more generic solution if you provide more info.
>>
>> Thanks
>>
>> On 2025/7/10 16:51, Vamsi Krishna wrote:
>>> From: Vamsi Attunuru <vattunuru at marvell.com>
>>>
>>> Modern DMA hardware supports data transfer between multiple
>>> DMA devices, enabling data communication across isolated domains or
>>> containers. To facilitate this, the ``dmadev`` library requires changes
>>> to allow devices to register with or unregister from DMA groups for
>>> inter-device communication. This feature is planned for inclusion
>>> in DPDK 25.11.
>>>
>>> Signed-off-by: Vamsi Attunuru <vattunuru at marvell.com>
>>> ---
>>>  doc/guides/rel_notes/deprecation.rst | 7 +++++++
>>>  1 file changed, 7 insertions(+)
>>>
>>> diff --git a/doc/guides/rel_notes/deprecation.rst b/doc/guides/rel_notes/deprecation.rst
>>> index e2d4125308..46836244dd 100644
>>> --- a/doc/guides/rel_notes/deprecation.rst
>>> +++ b/doc/guides/rel_notes/deprecation.rst
>>> @@ -152,3 +152,10 @@ Deprecation Notices
>>>  * bus/vmbus: Starting DPDK 25.11, all the vmbus API defined in
>>>    ``drivers/bus/vmbus/rte_bus_vmbus.h`` will become internal to DPDK.
>>>    Those API functions are used internally by DPDK core and netvsc PMD.
>>> +
>>> +* dmadev: a new capability flag ``RTE_DMA_CAPA_INTER_DEV`` will be added
>>> +  to advertise DMA device's inter-device DMA copy capability. To enable
>>> +  this functionality, a few dmadev APIs will be added to configure the DMA
>>> +  access groups, facilitating coordinated data communication between devices.
>>> +  A new ``dev_idx`` field will be added to the ``struct rte_dma_vchan_conf``
>>> +  structure to configure a vchan for data transfers between any two DMA devices.
>>
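
To give a feel for the intended application flow, a rough fragment follows.
Only the dmadev calls that exist today are real; RTE_DMA_CAPA_INTER_DEV and
the dev_idx field are the planned additions from the notice above, and the
access-group call, group_id, token, peer_dev_idx, src/dst IOVAs and len are
placeholders assumed to come from application setup:

    struct rte_dma_info info;
    struct rte_dma_vchan_conf vconf = { 0 };

    rte_dma_info_get(dev_id, &info);
    if (info.dev_capa & RTE_DMA_CAPA_INTER_DEV) {  /* planned capability */
            /* Join the access group shared with the peer dmadev
             * (placeholder API, see the access-group discussion above). */
            rte_dma_access_group_join(dev_id, group_id, token, token_len);

            /* After the usual rte_dma_configure(), set up a vchan that
             * targets the peer device via the planned dev_idx field. */
            vconf.direction = RTE_DMA_DIR_MEM_TO_MEM;
            vconf.nb_desc = 1024;
            vconf.dev_idx = peer_dev_idx;  /* planned field */
            rte_dma_vchan_setup(dev_id, vchan, &vconf);
            rte_dma_start(dev_id);

            rte_dma_copy(dev_id, vchan, src_iova, dst_iova, len,
                         RTE_DMA_OP_FLAG_SUBMIT);
    }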