[dpdk-dev] [RFC v2 2/2] net/vhost_dma: add vHost DMA driver

Maxime Coquelin maxime.coquelin at redhat.com
Tue Dec 17 11:20:18 CET 2019



On 12/17/19 9:27 AM, Maxime Coquelin wrote:
> Hi Jiayu,
> 
> On 11/1/19 9:54 AM, Jiayu Hu wrote:
>> This patch introduces a new PMD for DMA-accelerated vhost-user, which
>> provides basic packet reception and transmission functionality. The
>> PMD leverages librte_vhost to handle vhost messages, but it implements
>> its own vring enqueue and dequeue operations.
>>
>> The PMD leverages DMA engines (e.g., I/OAT, a DMA engine in Intel
>> processors) to accelerate large data movements in enqueue and dequeue
>> operations. Large copies are offloaded to the DMA engine asynchronously:
>> the CPU just submits copy jobs to the DMA engine without waiting for
>> their completion, so there is no CPU intervention during data transfer.
>> Thus, we can save precious CPU cycles and improve the overall performance
>> of vhost-user based applications, like OVS. The PMD still uses the CPU to
>> perform small copies, due to the startup overhead associated with the
>> DMA engine (see the sketch after the example below).
>>
>> Note that the PMD can support various DMA engines to accelerate data
>> movements in enqueue and dequeue operations; currently, the only
>> supported DMA engine is I/OAT. The PMD only supports I/OAT acceleration
>> for the enqueue operation, and it still uses the CPU to perform all
>> copies in the dequeue operation. In addition, the PMD only supports the
>> split ring.
>>
>> The DMA device used by a queue is assigned by the user; for a queue
>> without an assigned DMA device, the PMD uses the CPU to perform all
>> copies for both enqueue and dequeue operations. Currently, the PMD
>> only supports I/OAT; a queue can only use one I/OAT device, and an
>> I/OAT device can only be used by one queue at a time.
>>
>> The PMD has 4 parameters:
>>  - iface: the socket path used to connect to the front-end device.
>>  - queues: the number of queues the front-end device has
>>  	(default is 1).
>>  - client: whether the vhost port works in client mode or server mode
>>  	(default is server mode).
>>  - dmas: the DMA device assigned to a queue.
>>
>> Here is an example.
>> $ ./testpmd -c f -n 4 \
>> --vdev 'dma_vhost0,iface=/tmp/sock0,queues=1,dmas=txq0@00:04.0,client=0'
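
Below is a minimal sketch of the enqueue-side decision described above:
copies below a size threshold stay on the CPU, larger ones are handed to
the DMA engine and completed asynchronously. DMA_COPY_THRESHOLD and the
dma_submit_copy() helper are made-up names for illustration only; they are
not part of the patch or of any existing DPDK API.

#include <stdbool.h>
#include <stddef.h>
#include <string.h>

#define DMA_COPY_THRESHOLD 256  /* bytes; smaller copies stay on the CPU */

/*
 * Hypothetical helper: queue an asynchronous copy on the DMA engine.
 * Returns false if the copy could not be submitted (e.g. ring full).
 * A real backend would program I/OAT here; this stub always declines.
 */
static bool
dma_submit_copy(void *dst, const void *src, size_t len)
{
        (void)dst; (void)src; (void)len;
        return false;
}

static void
enqueue_copy(void *desc_addr, const void *pkt_data, size_t len)
{
        if (len < DMA_COPY_THRESHOLD ||
            !dma_submit_copy(desc_addr, pkt_data, len))
                /* Small copy, or DMA submission failed: copy with the CPU. */
                memcpy(desc_addr, pkt_data, len);

        /*
         * Large copies complete asynchronously; completions must be
         * reaped before the used ring index is made visible to the guest.
         */
}
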
> 
> dma_vhost0 is not a good name; you have to mention it is net-specific.
> 
> Is there a tool to list available DMA engines?

Thinking about it again, wouldn't it be possible for the user not to
specify a specific DMA device ID, but instead to allocate one device
at init time by specifying all the capabilities the DMA device needs
to match?

If no DMA device with matching capabilities is available, then fall back
to SW mode.
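
Something along the lines of the sketch below is what I have in mind;
every identifier here is hypothetical, just to illustrate capability-based
allocation with a SW fallback:

#include <stdint.h>
#include <stdlib.h>

#define DMA_CAP_MEM_TO_MEM (1u << 0)    /* plain memory-to-memory copies */
#define DMA_CAP_SG         (1u << 1)    /* scatter-gather lists */

struct dma_device;      /* opaque handle owned by a (future) DMA library */

/* Hypothetical allocator: returns NULL if no free device matches. */
static struct dma_device *
dma_alloc_by_caps(uint32_t required_caps)
{
        (void)required_caps;
        return NULL;    /* stub: pretend no engine is available */
}

struct vhost_dma_queue {
        struct dma_device *dma; /* NULL means pure SW datapath */
};

static void
vhost_dma_queue_init(struct vhost_dma_queue *q)
{
        q->dma = dma_alloc_by_caps(DMA_CAP_MEM_TO_MEM);
        /* q->dma == NULL: no matching engine, fall back to CPU copies. */
}
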

Also, I think we don't want to call the IOAT API directly here, but
instead introduce a DMA library so that the Vhost DMA stuff isn't
vendor-specific.
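
For instance, a small ops table like the sketch below would keep the vhost
datapath backend-agnostic; the names are only illustrative, and I/OAT (or
any other engine) would simply register its own implementation behind it:

#include <stddef.h>

/* Vendor-neutral copy-engine interface the vhost datapath would use. */
struct dma_copy_ops {
        int (*enqueue_copy)(void *dev, void *dst, const void *src,
                            size_t len);
        int (*poll_completions)(void *dev); /* number of finished copies */
};

struct dma_channel {
        void *dev;                      /* backend-private device handle */
        const struct dma_copy_ops *ops; /* filled in by e.g. an ioat backend */
};

static inline int
dma_channel_copy(struct dma_channel *ch, void *dst, const void *src,
                 size_t len)
{
        /* Vhost code never calls a vendor API directly, only the ops. */
        return ch->ops->enqueue_copy(ch->dev, dst, src, len);
}
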

Adding a few ARM people in Cc to know whether they have plans/interest
in supporting DMA acceleration for Vhost.

Regards,
Maxime

>>
>> Signed-off-by: Jiayu Hu <jiayu.hu at intel.com>
>> ---
>>  config/common_base                                 |    2 +
>>  config/common_linux                                |    1 +
>>  drivers/Makefile                                   |    2 +-
>>  drivers/net/Makefile                               |    1 +
>>  drivers/net/vhost_dma/Makefile                     |   31 +
>>  drivers/net/vhost_dma/eth_vhost.c                  | 1495 ++++++++++++++++++++
>>  drivers/net/vhost_dma/eth_vhost.h                  |  264 ++++
>>  drivers/net/vhost_dma/internal.h                   |  225 +++
>>  .../net/vhost_dma/rte_pmd_vhost_dma_version.map    |    4 +
>>  drivers/net/vhost_dma/virtio_net.c                 | 1234 ++++++++++++++++
>>  mk/rte.app.mk                                      |    1 +
>>  11 files changed, 3259 insertions(+), 1 deletion(-)
>>  create mode 100644 drivers/net/vhost_dma/Makefile
>>  create mode 100644 drivers/net/vhost_dma/eth_vhost.c
>>  create mode 100644 drivers/net/vhost_dma/eth_vhost.h
>>  create mode 100644 drivers/net/vhost_dma/internal.h
>>  create mode 100644 drivers/net/vhost_dma/rte_pmd_vhost_dma_version.map
>>  create mode 100644 drivers/net/vhost_dma/virtio_net.c
> 
> You need to add Meson support.
> 
> 
> More generally, I have been through the series and I'm not sure having a
> dedicated PMD driver for this is a good idea due to all the code
> duplication it implies.
> 
> I understand it has been done this way to avoid impacting the pure SW
> datapath implementation. But I'm sure the series could be reduced to a
> few hundred lines if it were integrated into the vhost-user library.
> Moreover, your series does not support the packed ring, which means even
> more code would need to be duplicated in the end.
> 
> What do you think?
> 
> Thanks,
> Maxime
> 


