[PATCH v5] net/vhost: support asynchronous data path

Xia, Chenbo chenbo.xia at intel.com
Mon Oct 24 11:02:38 CEST 2022


Hi Yuan,

> -----Original Message-----
> From: Wang, YuanX <yuanx.wang at intel.com>
> Sent: Monday, October 24, 2022 11:15 PM
> To: Maxime Coquelin <maxime.coquelin at redhat.com>; Xia, Chenbo
> <chenbo.xia at intel.com>
> Cc: dev at dpdk.org; Hu, Jiayu <jiayu.hu at intel.com>; Jiang, Cheng1
> <cheng1.jiang at intel.com>; Ma, WenwuX <wenwux.ma at intel.com>; He, Xingguang
> <xingguang.he at intel.com>; Wang, YuanX <yuanx.wang at intel.com>
> Subject: [PATCH v5] net/vhost: support asynchronous data path
> 
> Vhost asynchronous data path offloads packet copies from the CPU
> to the DMA engine. As a result, large packet copies can be accelerated
> by the DMA engine, and vhost can free CPU cycles for higher-level
> functions.
> 
> This patch enables the asynchronous data path for the vhost PMD.
> The asynchronous data path is enabled per Tx/Rx queue, and users need
> to specify the DMA device used by each queue. A Tx/Rx queue can only
> use one DMA device, but one DMA device can be shared among multiple
> Tx/Rx queues of different vhost PMD ports.
> 
> Two PMD parameters are added:
> - dmas:	specify the DMA device used by a Tx/Rx queue.
> 	(Default: no queue enables the asynchronous data path)
> - dma-ring-size: DMA ring size.
> 	(Default: 4096).
> 
> Here is an example:
> --vdev
> 'eth_vhost0,iface=./s0,dmas=[txq0@0000:00:01.0;rxq0@0000:00:01.1],dma-ring-size=4096'
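> 
> For reference, bringing up such a port with testpmd could look roughly
> like the following (core list, memory channels and DMA device addresses
> are only illustrative, and the DMA devices must be bound to a
> DPDK-compatible driver):
> 
> dpdk-testpmd -l 0-2 -n 4 \
>     -a 0000:00:01.0 -a 0000:00:01.1 \
>     --vdev 'eth_vhost0,iface=./s0,dmas=[txq0@0000:00:01.0;rxq0@0000:00:01.1],dma-ring-size=4096' \
>     -- -i --rxq=1 --txq=1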
> 
> Signed-off-by: Jiayu Hu <jiayu.hu at intel.com>
> Signed-off-by: Yuan Wang <yuanx.wang at intel.com>
> Signed-off-by: Wenwu Ma <wenwux.ma at intel.com>
> 

Sorry, I just realized that we need to update the release notes, since this
is a new feature for the vhost PMD. Please mention the async support and the
new driver API you added.
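
For example, an entry in doc/guides/rel_notes/ could look roughly like this
(the exact wording is up to you):

  * **Updated vhost PMD.**

    Added support for the asynchronous data path, configured through the
    new ``dmas`` and ``dma-ring-size`` devargs.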

Thanks,
Chenbo

