OVS DPDK DMA-Dev library/Design Discussion
Morten Brørup
mb at smartsharesystems.com
Tue Mar 29 18:45:19 CEST 2022
> From: Maxime Coquelin [mailto:maxime.coquelin at redhat.com]
> Sent: Tuesday, 29 March 2022 18.24
>
> Hi Morten,
>
> On 3/29/22 16:44, Morten Brørup wrote:
> >> From: Van Haaren, Harry [mailto:harry.van.haaren at intel.com]
> >> Sent: Tuesday, 29 March 2022 15.02
> >>
> >>> From: Morten Brørup <mb at smartsharesystems.com>
> >>> Sent: Tuesday, March 29, 2022 1:51 PM
> >>>
> >>> Having thought more about it, I think that a completely different
> architectural approach is required:
> >>>
> >>> Many of the DPDK Ethernet PMDs implement a variety of RX and TX
> packet burst functions, each optimized for different CPU vector
> instruction sets. The availability of a DMA engine should be treated
> the same way. So I suggest that PMDs copying packet contents, e.g.
> memif, pcap, vmxnet3, should implement DMA optimized RX and TX packet
> burst functions.
> >>>
> >>> Similarly for the DPDK vhost library.
> >>>
> >>> In such an architecture, it would be the application's job to
> allocate DMA channels and assign them to the specific PMDs that should
> use them. But the actual use of the DMA channels would move down below
> the application and into the DPDK PMDs and libraries.
> >>>
> >>>
> >>> Med venlig hilsen / Kind regards,
> >>> -Morten Brørup
> >>
> >> Hi Morten,
> >>
> >> That's *exactly* how this architecture is designed & implemented.
> >> 1. The DMA configuration and initialization is up to the application
> (OVS).
> >> 2. The VHost library is passed the DMA-dev ID, and its new async
> rx/tx APIs, and uses the DMA device to accelerate the copy.
> >>
> >> Looking forward to talking on the call that just started. Regards, -
> Harry
> >>
> >
> > OK, thanks - as I said on the call, I haven't looked at the patches.
> >
> > Then, I suppose that the TX completions can be handled in the TX
> function, and the RX completions can be handled in the RX function,
> just like the Ethdev PMDs handle packet descriptors:
> >
> > TX_Burst(tx_packet_array):
> > 1. Clean up descriptors processed by the NIC chip. --> Process TX
> DMA channel completions. (Effectively, the 2nd pipeline stage.)
> > 2. Pass on the tx_packet_array to the NIC chip descriptors. --> Pass
> on the tx_packet_array to the TX DMA channel. (Effectively, the 1st
> pipeline stage.)
>
> The problem is Tx function might not be called again, so enqueued
> packets in 2. may never be completed from a Virtio point of view. IOW,
> the packets will be copied to the Virtio descriptors buffers, but the
> descriptors will not be made available to the Virtio driver.
In that case, the application needs to call TX_Burst() periodically with an empty array, purely to process completions.
Alternatively, some sort of TX_Keepalive() function could be added to the DPDK library to handle DMA completions. It might even handle multiple DMA channels, if convenient - and if it can be done without locking or other complexity.
Here is another idea, inspired by a presentation at one of the DPDK Userspace conferences. It may be wishful thinking, though:
Add an additional transaction to each DMA burst; a special transaction containing the memory write operation that makes the descriptors available to the Virtio driver.
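To make the idea concrete, here is a rough sketch in plain C. A memcpy-backed mock stands in for a real DMA channel, and all names (dma_chan, TX_Burst, etc.) are invented for illustration - they are not DPDK APIs. It shows the two-stage TX shape plus the "extra transaction" that publishes the descriptor index via the DMA channel itself:

```c
#include <stddef.h>
#include <stdint.h>
#include <string.h>

/* Mock "DMA channel": enqueued copies execute only when completions are
 * polled, mimicking an asynchronous engine. Illustration only. */
#define DMA_RING_SIZE 64

struct dma_op { void *src; void *dst; size_t len; };

struct dma_chan {
    struct dma_op ring[DMA_RING_SIZE];
    unsigned head;  /* next slot to enqueue */
    unsigned tail;  /* next slot to complete */
};

/* Enqueue a copy on the channel; 0 on success, -1 if the ring is full. */
static int dma_enqueue(struct dma_chan *ch, void *dst, void *src, size_t len)
{
    if (ch->head - ch->tail == DMA_RING_SIZE)
        return -1;
    ch->ring[ch->head % DMA_RING_SIZE] = (struct dma_op){ src, dst, len };
    ch->head++;
    return 0;
}

/* Poll completions; the mock simply performs the pending copies now. */
static unsigned dma_complete(struct dma_chan *ch)
{
    unsigned done = 0;
    while (ch->tail != ch->head) {
        struct dma_op *op = &ch->ring[ch->tail % DMA_RING_SIZE];
        memcpy(op->dst, op->src, op->len);
        ch->tail++;
        done++;
    }
    return done;
}

struct pkt { uint8_t data[64]; size_t len; };

/* TX_Burst in the two-stage shape discussed above:
 * 1. drain DMA completions from the previous call(s) (2nd pipeline stage),
 * 2. enqueue the new packets on the DMA channel (1st pipeline stage),
 * 3. append one extra "marker" copy that writes the updated descriptor
 *    index, so the index is published when the DMA burst completes.
 * Calling it with nb_pkts == 0 acts as the keepalive discussed above. */
static unsigned TX_Burst(struct dma_chan *ch,
                         struct pkt **pkts, unsigned nb_pkts,
                         uint8_t *desc_buf,            /* virtio buffers */
                         volatile uint16_t *used_idx,  /* published index */
                         uint16_t *shadow_idx)         /* private copy */
{
    unsigned i;

    /* Stage 2 of the previous burst: handle DMA completions. */
    dma_complete(ch);

    /* Stage 1 of this burst: start DMA for the new packets. */
    for (i = 0; i < nb_pkts; i++)
        if (dma_enqueue(ch, desc_buf + i * 64, pkts[i]->data, pkts[i]->len))
            break;

    /* Extra transaction: the index update is itself a DMA copy. */
    if (i > 0) {
        *shadow_idx += i;
        dma_enqueue(ch, (void *)(uintptr_t)used_idx, shadow_idx,
                    sizeof(*shadow_idx));
    }
    return i;
}
```

With this shape, the guest-visible index only moves when the copies ahead of it have completed, which is the ordering property the extra transaction is meant to provide - assuming the DMA engine completes transactions in submission order.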
> >
> > RX_burst(rx_packet_array):
> > 1. Pass on the finished NIC chip RX descriptors to the
> rx_packet_array. --> Process RX DMA channel completions. (Effectively,
> the 2nd pipeline stage.)
> > 2. Replenish NIC chip RX descriptors. --> Start RX DMA channel for
> any waiting packets. (Effectively, the 1st pipeline stage.)
> >
> > PMD_init():
> > - Prepare NIC chip RX descriptors. (In other words: Replenish NIC
> chip RX descriptors. = RX pipeline stage 1.)
> >
> > PS: Rearranged the email, so we can avoid top posting.
> >
>
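For completeness, the RX path quoted above can be sketched the same way. Again this is a memcpy-backed mock with invented names (rx_chan, RX_Burst), not a DPDK API:

```c
#include <stddef.h>
#include <stdint.h>
#include <string.h>

/* Mock RX DMA channel; copies execute when completions are polled. */
#define RX_RING 32

struct rx_chan {
    struct { const uint8_t *src; uint8_t *dst; size_t len; } ring[RX_RING];
    unsigned head, tail;
};

/* Stage 2: completed DMA copies become received packets. */
static unsigned rx_dma_complete(struct rx_chan *ch,
                                uint8_t **rx_pkts, unsigned max)
{
    unsigned n = 0;
    while (ch->tail != ch->head && n < max) {
        unsigned s = ch->tail % RX_RING;
        memcpy(ch->ring[s].dst, ch->ring[s].src, ch->ring[s].len);
        rx_pkts[n++] = ch->ring[s].dst;
        ch->tail++;
    }
    return n;
}

/* RX_Burst: 1. hand completed copies to the caller (2nd pipeline stage),
 *           2. start DMA for any waiting packets (1st pipeline stage),
 *              copying into replenished buffers from 'pool'. */
static unsigned RX_Burst(struct rx_chan *ch,
                         uint8_t **rx_pkts, unsigned max,
                         const uint8_t **waiting, size_t *wlen,
                         unsigned nb_waiting,
                         uint8_t *pool, size_t slot)
{
    unsigned n = rx_dma_complete(ch, rx_pkts, max);

    for (unsigned i = 0; i < nb_waiting; i++) {
        if (ch->head - ch->tail == RX_RING)
            break;                       /* no replenished buffer free */
        unsigned s = ch->head % RX_RING;
        ch->ring[s].src = waiting[i];
        ch->ring[s].dst = pool + s * slot;
        ch->ring[s].len = wlen[i];
        ch->head++;
    }
    return n;  /* packets actually delivered this call */
}
```

As with TX, a packet enqueued in one call is only delivered on a later call, so the same "function must be called again" concern applies symmetrically on the RX side.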