[RFC 00/27] Add VDUSE support to Vhost library

Maxime Coquelin maxime.coquelin at redhat.com
Fri Apr 14 14:06:54 CEST 2023



On 4/14/23 12:48, Ferruh Yigit wrote:
> On 4/13/2023 8:59 AM, Maxime Coquelin wrote:
>> Hi,
>>
>> On 4/13/23 09:08, Xia, Chenbo wrote:
>>>> -----Original Message-----
>>>> From: Morten Brørup <mb at smartsharesystems.com>
>>>> Sent: Thursday, April 13, 2023 3:41 AM
>>>> To: Maxime Coquelin <maxime.coquelin at redhat.com>; Ferruh Yigit
>>>> <ferruh.yigit at amd.com>; dev at dpdk.org; david.marchand at redhat.com; Xia,
>>>> Chenbo <chenbo.xia at intel.com>; mkp at redhat.com; fbl at redhat.com;
>>>> jasowang at redhat.com; Liang, Cunming <cunming.liang at intel.com>;
>>>> Xie, Yongji <xieyongji at bytedance.com>; echaudro at redhat.com;
>>>> eperezma at redhat.com;
>>>> amorenoz at redhat.com
>>>> Subject: RE: [RFC 00/27] Add VDUSE support to Vhost library
>>>>
>>>>> From: Maxime Coquelin [mailto:maxime.coquelin at redhat.com]
>>>>> Sent: Wednesday, 12 April 2023 17.28
>>>>>
>>>>> Hi Ferruh,
>>>>>
>>>>> On 4/12/23 13:33, Ferruh Yigit wrote:
>>>>>> On 3/31/2023 4:42 PM, Maxime Coquelin wrote:
>>>>>>> This series introduces a new type of backend, VDUSE,
>>>>>>> to the Vhost library.
>>>>>>>
>>>>>>> VDUSE stands for vDPA Device in Userspace; it enables
>>>>>>> implementing a Virtio device in userspace and having it
>>>>>>> attached to the Kernel vDPA bus.
>>>>>>>
>>>>>>> Once attached to the vDPA bus, it can be used by Kernel
>>>>>>> Virtio drivers, like virtio-net in our case, via the
>>>>>>> virtio-vdpa driver. The device is then visible to the
>>>>>>> Kernel networking stack and is exposed to userspace as a
>>>>>>> regular netdev.
>>>>>>>
>>>>>>> It can also be exposed to userspace through the vhost-vdpa
>>>>>>> driver, as a vhost-vdpa chardev that can be passed to QEMU
>>>>>>> or the Virtio-user PMD.
>>>>>>>
>>>>>>> While VDUSE support is already available in the upstream
>>>>>>> Kernel, a couple of patches are required to support the
>>>>>>> network device type:
>>>>>>>
>>>>>>> https://gitlab.com/mcoquelin/linux/-/tree/vduse_networking_poc
>>>>>>>
>>>>>>> In order to attach the created VDUSE device to the vDPA
>>>>>>> bus, a recent iproute2 version containing the vdpa tool is
>>>>>>> required.
>>>>>>
>>>>>> Hi Maxime,
>>>>>>
>>>>>> Is this a replacement for the existing DPDK vDPA framework? What
>>>>>> is the plan for the long term?
>>>>>>
>>>>>
>>>>> No, this is not a replacement for the DPDK vDPA framework.
>>>>>
>>>>> We (Red Hat) don't have plans to support the DPDK vDPA framework
>>>>> in our products, but there are still contributions to DPDK vDPA
>>>>> from several vDPA hardware vendors (Intel, Nvidia, Xilinx), so I
>>>>> don't think it is going to be deprecated soon.
>>>>
>>>> Ferruh's question made me curious...
>>>>
>>>> I don't know anything about VDUSE or vDPA, and don't use any of it, so
>>>> consider me ignorant in this area.
>>>>
>>>> Is VDUSE an alternative to the existing DPDK vDPA framework? What
>>>> are the differences, e.g. in which cases would an application
>>>> developer (or user) choose one or the other?
>>>
>>> Maxime can give a better explanation... but let me just explain a bit.
>>>
>>> Vendors have vDPA HW that supports the vDPA framework (most likely
>>> in their DPU/IPU products). This work introduces a way to emulate a
>>> SW vDPA device in userspace (DPDK), and this SW vDPA device also
>>> supports the vDPA framework.
>>>
>>> So it's not an alternative to the existing DPDK vDPA framework :)
>>
>> Correct.
>>
>> When using DPDK vDPA, the datapath of a Vhost-user port is offloaded
>> to a compatible physical NIC (i.e. a NIC that implements Virtio rings
>> support), while the control path remains the same as for a regular
>> Vhost-user port, i.e. a Vhost-user Unix socket is provided to the
>> application (like QEMU or the DPDK Virtio-user PMD).
>>
>> When using Kernel vDPA, the datapath is also offloaded to a vDPA
>> compatible device, but the control path is managed by the vDPA bus.
>> The device can either be consumed by a Kernel Virtio driver (here
>> Virtio-net) when using Virtio-vDPA; in this case the device is
>> exposed as a regular netdev and, in the case of Kubernetes, can be
>> used as a primary interface for the pods.
>> Or it can be exposed to user-space via Vhost-vDPA, a chardev that can
>> be seen as an alternative to Vhost-user sockets; in this case it can
>> for example be used by QEMU or the DPDK Virtio-user PMD. In
>> Kubernetes, it can be used as a secondary interface.
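>>
>> (As a hypothetical example of the latter, the DPDK Virtio-user PMD
>> can consume such a chardev with an EAL vdev argument along the lines
>> of --vdev 'net_virtio_user0,path=/dev/vhost-vdpa-0'.)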
>>
>> Now comes VDUSE. VDUSE is a Kernel vDPA device, but instead of being
>> a physical device where the Virtio datapath is offloaded, the Virtio
>> datapath is offloaded to a user-space application. With this series,
>> a DPDK application, like OVS-DPDK for instance, can create VDUSE
>> devices and expose them either as regular netdevs, when binding them
>> to the Kernel Virtio-net driver via Virtio-vDPA, or as Vhost-vDPA
>> interfaces to be consumed by another userspace application (like QEMU
>> or a DPDK application using the Virtio-user PMD). With this solution,
>> OVS-DPDK could serve both primary and secondary interfaces of
>> Kubernetes pods.
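>>
>> As a concrete flow (with a hypothetical device name): the DPDK
>> application first creates the VDUSE device; the device is then
>> attached to the vDPA bus with the iproute2 vdpa tool, for instance
>> with "vdpa dev add name vduse-net-0 mgmtdev vduse"; and it is finally
>> bound to either the virtio-vdpa or the vhost-vdpa Kernel driver,
>> depending on the use-case.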
>>
>> I hope this clarifies things; I will add this information to the
>> cover-letter in the next revisions. Let me know if anything is still
>> unclear.
>>
>> I did a presentation at the last DPDK summit [0], maybe the diagrams
>> will help to clarify things further.
>>
> 
> Thanks Chenbo, Maxime for clarification.
> 
> After reading a little more, I (think I) understand it better; the
> slides [0] were useful.
> 
> So this is more like a backend/handler, similar to vhost-user,
> although it is vDPA device emulation.
> Can you please describe the benefits of VDUSE compared to vhost-user?

The main benefit is that a VDUSE device can be exposed as a regular
netdev, while this is not possible with Vhost-user.

> Also, what is the "VDUSE daemon", which is referred to a few times in
> the documentation? Is it another userspace implementation of VDUSE?

The VDUSE daemon is the application that implements the VDUSE device;
in our case, e.g. OVS-DPDK with the DPDK Vhost library using this
series.
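
To illustrate, here is a minimal sketch of what such a daemon could
look like with the DPDK Vhost library. It assumes the convention
proposed by this series that registering a path under /dev/vduse/
creates a VDUSE device instead of a Vhost-user socket; the device name
is hypothetical and error handling is trimmed for brevity:

    #include <rte_vhost.h>

    /* Hypothetical VDUSE device path; the /dev/vduse/ prefix selects
     * the VDUSE backend instead of a Vhost-user socket. */
    static const char path[] = "/dev/vduse/net0";

    static int new_device(int vid)
    {
        /* Start the datapath (enqueue/dequeue on the Virtio rings). */
        return 0;
    }

    static void destroy_device(int vid)
    {
        /* Stop the datapath. */
    }

    static const struct rte_vhost_device_ops ops = {
        .new_device = new_device,
        .destroy_device = destroy_device,
    };

    int create_vduse_port(void)
    {
        /* Creates the VDUSE device rather than a Vhost-user socket. */
        if (rte_vhost_driver_register(path, 0) < 0)
            return -1;
        if (rte_vhost_driver_callback_register(path, &ops) < 0)
            return -1;
        /* From here on, the port behaves like any other Vhost port. */
        return rte_vhost_driver_start(path);
    }

Once created, the device still needs to be attached to the vDPA bus
(with the vdpa tool) and bound to a Kernel driver, as described above.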

Maxime
> 
>>>>
>>>> And if it is a better alternative, perhaps the documentation should
>>>> mention that it is recommended over DPDK vDPA. Just like we started
>>>> recommending alternatives to the KNI driver, so we could phase it
>>>> out and eventually get rid of it.
>>>>
>>>>>
>>>>> Regards,
>>>>> Maxime
>>>
>>
>> [0]:
>> https://static.sched.com/hosted_files/dpdkuserspace22/9f/Open%20DPDK%20to%20containers%20networking%20with%20VDUSE.pdf
>>
> 


