[dpdk-dev] [PATCH 00/15] Introduce Virtio vDPA driver

Maxime Coquelin maxime.coquelin at redhat.com
Wed Sep 11 09:15:20 CEST 2019



On 9/11/19 7:15 AM, Shahaf Shuler wrote:
> Tuesday, September 10, 2019 4:56 PM, Maxime Coquelin:
>> Subject: Re: [dpdk-dev] [PATCH 00/15] Introduce Virtio vDPA driver
>> On 9/10/19 3:44 PM, Shahaf Shuler wrote:
>>> Tuesday, September 10, 2019 10:46 AM, Maxime Coquelin:
>>>> Subject: Re: [dpdk-dev] [PATCH 00/15] Introduce Virtio vDPA driver
> 
> [...]
> 
>>>>
>>>> Hi Shahaf,
>>>>
>>>>
>>>> IMHO, I see two use cases where it can make sense to use vDPA with a
>>>> full offload HW device:
>>>> 1. Live-migration support: It makes it possible to switch to ring
>>>>    processing in SW during the migration, as Virtio HW does not support
>>>>    dirty pages logging.
>>>
>>> Can you elaborate why specifically using the virtio_vdpa PMD enables this
>>> SW relay during migration?
>>> e.g. the vDPA PMD of Intel that runs on top of a VF does that today as well.
>>
>> I think there was a misunderstanding. When I said:
>> "
>> I see two use cases where it can make sense to use vDPA with a full offload
>> HW device "
>>
>> I meant, I see two use cases where it can make sense to use vDPA with a full
>> offload HW device, instead of using the Virtio PMD with the full offload HW
>> device.
>>
>> In other words, I think it is preferable to only offload the datapath, so that it
>> is possible to support SW live-migration.
>>
>>>>
>>>> 2. Can be used to provide a single standard interface (the vhost-user
>>>>    socket) to containers in the scope of CNFs. Doing so, the container
>>>>    does not need to be modified, whatever the HW NIC: Virtio datapath
>>>>    offload only, full Virtio offload, or no offload at all. In the
>>>>    latter case, it would not be optimal as it implies forwarding between
>>>>    the Vhost PMD and the HW NIC PMD but it would work.
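
(Just to make that wiring concrete, below is roughly what I have in mind;
the socket path and vdev arguments are only illustrative and may differ
depending on the DPDK version:)

  # Host: expose a vhost-user socket, backed either by the HW datapath
  # through vDPA, or by SW forwarding with the Vhost PMD + HW NIC PMD:
  ./testpmd -l 1-2 --vdev 'net_vhost0,iface=/tmp/vhost-user0.sock' -- -i

  # Container: attach to that same socket with virtio-user, without any
  # knowledge of the underlying HW:
  ./testpmd -l 3-4 --no-pci \
      --vdev 'virtio_user0,path=/tmp/vhost-user0.sock' -- -i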
>>>
>>> It is not clear to me how the interfaces map in such a system.
>>> From what I understand, the container will have a virtio-user i/f and the
>>> host will have a virtio i/f. Then the virtio i/f can be programmed to work
>>> w/ vDPA or not.
>>> For full emulation, I guess you will need to expose the netdev of the fully
>>> emulated virtio device to the container?
>>>
>>> I am trying to map when it is beneficial to use this virtio_vdpa PMD and
>>> when it is better to use the vendor-specific vDPA PMD on top of a VF.
>>
>> I think that with the above clarification, I made it clear that the goal of
>> this driver is not to replace vendors' vDPA drivers (their control paths may
>> not even be compatible), but instead to provide a generic driver that can be
>> used either within a guest with a para-virtualized Virtio-net device or with
>> a HW NIC that fully offloads Virtio (both data and control paths).
> 
> Thanks Maxime, it is clearer now.
> From what I understand, this driver is to be used w/ vDPA when the underlying device is virtio.
> 
> I can perfectly understand the para-virt (+ nested virtualization / container inside a VM) use case.
> 
> Regarding the fully emulated virtio device on the host (instead of a plain VF) - for me the benefit is still not clear - if you have HW that can expose a VF, why not use the VF + a vendor-specific vDPA driver?

If you need a vendor-specific vDPA driver for the VF, then you
definitely want to use the vendor-specific driver.

However, if there is HW or a VF that implements the Virtio spec even for
the control path (i.e. the PCI registers layout), one may be tempted to
do device assignment directly to the guest and use the Virtio PMD there.
The downside of doing that is that it won't support live-migration.
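
Just to illustrate the difference (command lines heavily simplified, the
PCI address and socket path are only examples):

  # Direct assignment of the Virtio-spec-compliant device: simple, but no
  # dirty pages logging, so no live-migration:
  qemu-system-x86_64 ... -device vfio-pci,host=0000:03:00.1

  # vDPA: the guest gets a vhost-user backed virtio-net device, and the
  # host side can fall back to a SW relay during the migration:
  qemu-system-x86_64 ... \
      -chardev socket,id=char0,path=/tmp/vhost-user0.sock \
      -netdev type=vhost-user,id=net0,chardev=char0 \
      -device virtio-net-pci,netdev=net0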

The benefit of using vDPA with the virtio vDPA driver in this case is
that it provides a way to support live-migration (by switching to SW ring
processing and performing dirty pages logging).
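
Roughly, the SW relay idea looks like the below (just a sketch, not the
driver code: single queue pair, the VIRTIO_RXQ/VIRTIO_TXQ indices and the
error handling are simplified). The point is that, once the migration flow
has enabled logging, rte_vhost_enqueue_burst() logs the pages it dirties,
which is exactly what the HW cannot do:

  #include <rte_ethdev.h>
  #include <rte_mbuf.h>
  #include <rte_vhost.h>

  #define BURST      32
  #define VIRTIO_RXQ 0   /* vring the host enqueues to (guest RX) */
  #define VIRTIO_TXQ 1   /* vring the host dequeues from (guest TX) */

  static void
  sw_relay_iteration(int vid, uint16_t port, struct rte_mempool *pool)
  {
          struct rte_mbuf *pkts[BURST];
          uint16_t nb, sent, i;

          /* Guest TX -> HW NIC */
          nb = rte_vhost_dequeue_burst(vid, VIRTIO_TXQ, pool, pkts, BURST);
          sent = rte_eth_tx_burst(port, 0, pkts, nb);
          while (sent < nb)
                  rte_pktmbuf_free(pkts[sent++]);

          /* HW NIC -> guest RX: the vhost library copies the packets into
           * the guest buffers and logs the dirty pages (buffers + used
           * ring) when live-migration has enabled logging. */
          nb = rte_eth_rx_burst(port, 0, pkts, BURST);
          rte_vhost_enqueue_burst(vid, VIRTIO_RXQ, pkts, nb);
          for (i = 0; i < nb; i++)
                  rte_pktmbuf_free(pkts[i]);
  }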

> 
> Anyway - for the series,
> Acked-by: Shahaf Shuler <shahafs at mellanox.com>
> 

Thanks!
Maxime

