[dpdk-dev] [PATCH 00/15] Introduce Virtio vDPA driver

Shahaf Shuler shahafs at mellanox.com
Tue Sep 10 15:44:47 CEST 2019


Tuesday, September 10, 2019 10:46 AM, Maxime Coquelin:
> Subject: Re: [dpdk-dev] [PATCH 00/15] Introduce Virtio vDPA driver
> 
> Hi Shahaf,
> 
> On 9/9/19 1:55 PM, Shahaf Shuler wrote:
> > Hi Maxime,
> >
> > Thursday, August 29, 2019 11:00 AM, Maxime Coquelin:
> >> Subject: [dpdk-dev] [PATCH 00/15] Introduce Virtio vDPA driver
> >>
> >> vDPA allows offloading the Virtio datapath processing to supported NICs,
> >> such as IFCVF.
> >>
> >> The control path has to be handled by a dedicated vDPA driver, so
> >> that it can translate Vhost-user protocol requests into accesses to the
> >> proprietary NIC's registers.
> >>
> >> This driver is the vDPA driver for Virtio devices, meaning that
> >> Vhost-user protocol requests get translated into Virtio register
> >> accesses as defined in the Virtio spec.
> >>
> >> Basically, it can be used within a guest with a para-virtualized
> >> Virtio-net device, or even with a full Virtio HW offload NIC directly
> >> on the host.
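
Just to check that I am reading this part right: my mental model of that
translation is roughly the sketch below, where the ring layout pushed by the
vhost-user master ends up as plain writes to the device's Virtio common
configuration space. The common_cfg struct and the iova() helper here are
simplified/made up for illustration - the real layout is struct
virtio_pci_common_cfg from the Virtio spec, and the address translation goes
through the vhost memory table / IOMMU. Please correct me if this is off.

#include <stdint.h>
#include <rte_vhost.h>

/*
 * Sketch only - not code from this series. common_cfg and iova() are
 * simplified/made up; the real layout is struct virtio_pci_common_cfg
 * from the Virtio spec.
 */
struct common_cfg {
        uint16_t queue_select;
        uint16_t queue_size;
        uint16_t queue_enable;
        uint64_t queue_desc;   /* descriptor table address */
        uint64_t queue_avail;  /* avail (driver) ring address */
        uint64_t queue_used;   /* used (device) ring address */
};

/* Placeholder: a real driver returns an IOVA the device can DMA to,
 * translated through the vhost memory table / IOMMU.
 */
static uint64_t
iova(int vid, void *addr)
{
        (void)vid;
        return (uintptr_t)addr;
}

static int
setup_vring(volatile struct common_cfg *cfg, int vid, uint16_t qid)
{
        struct rte_vhost_vring vr;

        /* Ring layout pushed by the vhost-user master (QEMU or virtio-user)... */
        if (rte_vhost_get_vhost_vring(vid, qid, &vr) != 0)
                return -1;

        /* ...becomes plain writes to the device's common configuration
         * ('cfg' being the BAR mapping of the device, set up elsewhere).
         */
        cfg->queue_select = qid;
        cfg->queue_size   = vr.size;
        cfg->queue_desc   = iova(vid, vr.desc);
        cfg->queue_avail  = iova(vid, vr.avail);
        cfg->queue_used   = iova(vid, vr.used);
        cfg->queue_enable = 1;

        return 0;
}
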
> >
> > Can you elaborate more on the use cases to use such driver?
> >
> > 1. If the underlying HW can support a full virtio device, why do we need
> > to work with it in vDPA mode? Why not provide it to the VM as a
> > passthrough device?
> > 2. Why is it preferable to work with a virtio device as the backend
> > device used with vDPA vs. working with the underlying HW VF?
> 
> 
> IMHO, I see two use cases where it can make sense to use vDPA with a full
> offload HW device:
> 1. Live-migration support: it makes it possible to switch to ring
>    processing in SW during the migration, as Virtio HW does not support
>    dirty page logging.

Can you elaborate on why specifically using the virtio_vdpa PMD enables this SW relay during migration?
E.g. the Intel vdpa PMD that runs on top of a VF does that today as well.
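
To be clear on what I understand the SW relay to add: while the rings are
processed in SW, every write into guest memory can be marked in the vhost-user
dirty log, which the HW datapath cannot do by itself. Something like the
sketch below (the relay function itself is made up and the copy is omitted;
only the two logging calls are the point):

#include <stdint.h>
#include <rte_vhost.h>

/*
 * Sketch only - illustrates the dirty logging a SW relay can do during
 * migration. The surrounding relay/copy logic is omitted.
 */
static void
relay_complete_one_buffer(int vid, uint16_t qid, uint64_t buf_gpa,
                          uint64_t len, uint64_t used_ring_off)
{
        /* ... data already copied into the guest buffer at buf_gpa ... */

        /* mark the data buffer dirty in the vhost-user log */
        rte_vhost_log_write(vid, buf_gpa, len);

        /* mark the updated used ring entry dirty as well (8 bytes: id + len) */
        rte_vhost_log_used_vring(vid, qid, used_ring_off, 8);
}

If that is the whole story, the same would apply to any vDPA PMD that can fall
back to a SW datapath, hence the question above.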

> 
> 2. Can be used to provide a single standard interface (the vhost-user
>    socket) to containers in the scope of CNFs. Doing so, the container
>    does not need to be modified, whatever the HW NIC: Virtio datapath
>    offload only, full Virtio offload, or no offload at all. In the
>    latter case, it would not be optimal as it implies forwarding between
>    the Vhost PMD and the HW NIC PMD, but it would work.

The interface mapping in such a system is not clear to me.
From what I understand, the container will have a virtio-user interface and the host will have a virtio interface; the virtio interface can then be programmed to work with vDPA or not.
For full emulation, I guess you will need to expose the netdev of the fully emulated virtio device to the container?

I am trying to map out when it is beneficial to use this virtio_vdpa PMD and when it is better to use a vendor-specific vDPA PMD on top of a VF.
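
For reference, the host-side flow I currently picture is roughly the one from
the examples/vdpa application - whatever PMD backs the vDPA device, it gets
attached to a vhost-user socket and the container only ever sees that socket.
A rough sketch from memory of that application (treat the exact calls and
signatures as approximate):

#include <rte_vhost.h>

/*
 * Sketch, based on my reading of examples/vdpa: attach a vDPA device to a
 * vhost-user socket that the container (or a VM) then connects to.
 */
static int
expose_vdpa_device(const char *sock_path, int did)
{
        /* 'did' is the vDPA device id obtained from the vDPA PMD,
         * e.g. via rte_vdpa_find_device_id().
         */
        if (rte_vhost_driver_register(sock_path, 0) != 0)
                return -1;

        if (rte_vhost_driver_attach_vdpa_device(sock_path, did) != 0)
                return -1;

        return rte_vhost_driver_start(sock_path);
}

The container side would then just run the virtio-user PMD against that socket
(e.g. --vdev=virtio_user0,path=<sock_path>), or a nested VM would get a
virtio-net device. That is why I am asking where the netdev of the fully
emulated virtio device fits in this picture.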

> 
> > Is nested virtualization what you have in mind?
> 
> For the para-virtualized virtio device, either nested virtualization or a
> container running within the guest.
> 
> Maxime

