[dpdk-dev] discussion: creating a new class for vdpa drivers

Maxime Coquelin maxime.coquelin at redhat.com
Mon Dec 16 09:50:04 CET 2019


Hi Andrew,

On 12/16/19 9:46 AM, Andrew Rybchenko wrote:
> On 12/16/19 11:29 AM, Matan Azrad wrote:
>>
>> Hi all
>>
>> I understand all of you agree with, or at least do not object to, the new class for vdpa drivers.
> 
> I have two control questions:
> 
> 1. If so, is it allowed to have a vDPA driver in
>    drivers/net/<driver> if it is better from a code sharing point
>    of view?

If it has something to share, I think we should move the common bits
to the common directory.
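
For a hypothetical mlx5 case, the split could look like this (just an
illustration of the idea, not a committed layout):

  drivers/common/mlx5/  - base/HW code shared by both drivers
  drivers/net/mlx5/     - the ethdev ops implementation
  drivers/vdpa/mlx5/    - the vdpa ops implementation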

> 2. If drivers/common is used, are the exported functions and data
>    structures which are used by both drivers/net/<driver> and
>    drivers/vdpa/<driver> a part of the public API/ABI? Hopefully not,
>    but I'd like to double-check and ensure that it is solved in
>    the case of shared library builds.

I agree that common functions and data should not be part of the API/ABI.
I guess we should use relative paths for including the common headers.
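
For instance, something like this sketch (the file and header names are
made up for illustration):

  /* In drivers/vdpa/mlx5/mlx5_vdpa.c: pull the shared code in via a
   * relative include, so nothing needs to be exported as public API. */
  #include "../../common/mlx5/mlx5_common.h"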

>> Based on that, I'm going to start it.
>>
>> From: Tiwei Bie
>>> On Tue, Dec 10, 2019 at 09:00:33AM +0100, Thomas Monjalon wrote:
>>>> 10/12/2019 03:41, Tiwei Bie:
>>>>> On Mon, Dec 09, 2019 at 02:22:27PM +0300, Andrew Rybchenko wrote:
>>>>>> On 12/9/19 1:41 PM, Ori Kam wrote:
>>>>>>> From: Andrew Rybchenko
>>>>>>>> On 12/8/19 10:06 AM, Matan Azrad wrote:
>>>>>>>>> From: Andrew Rybchenko
>>>>>>>>>> On 12/6/19 8:32 AM, Liang, Cunming wrote:
>>>>>>>>>>> From: Bie, Tiwei <tiwei.bie at intel.com>
>>>>>>>>>>>> On Thu, Dec 05, 2019 at 01:26:36PM +0000, Matan Azrad wrote:
>>>>>>>>>>>>> Hi all
>>>>>>>>>>>>>
>>>>>>>>>>>>> As described in the RFC “[RFC] net: new vdpa PMD for Mellanox
>>>>>>>>>>>>> devices”, a new vdpa driver is going to be added for
>>>>>>>>>>>>> Mellanox devices – mlx5_vdpa
>>>>>>>>>>>>>
>>>>>>>>>>>>> The only vdpa driver now is the IFC driver, which is located
>>>>>>>>>>>>> in the net directory.
>>>>>>>>>>>>>
>>>>>>>>>>>>> The IFC driver and the new mlx5_vdpa driver provide the vdpa
>>>>>>>>>>>>> ops and not the eth_dev ops.
>>>>>>>>>>>>>
>>>>>>>>>>>>> All the other drivers in net provide the eth_dev ops.
>>>>>>>>>>>>>
>>>>>>>>>>>>> I suggest creating a new class for vdpa drivers, moving IFC
>>>>>>>>>>>>> to this class, and adding mlx5_vdpa to this class too.
>>>>>>>>>>>>>
>>>>>>>>>>>>> Later, all the new drivers that implement the vdpa ops
>>>>>>>>>>>>> will be added to the vdpa class.
>>>>>>>>>>>>
>>>>>>>>>>>> +1. Sounds like a good idea to me.
>>>>>>>>>>> +1
>>>>>>>>>>
>>>>>>>>>> vDPA drivers are vendor-specific and expected to talk to the
>>>>>>>>>> vendor's NIC, i.e. there are significant chances to share code
>>>>>>>>>> with network drivers (e.g. the base driver). Should the base
>>>>>>>>>> driver be moved to drivers/common in this case, or is it still
>>>>>>>>>> allowed to have the vdpa driver in drivers/net together with the
>>>>>>>>>> ethdev driver?
>>>>>>>>>
>>>>>>>>> Yes, I think this should be the method: shared code should be
>>>>>>>>> moved to the drivers/common directory.
>>>>>>>>> I think there is a precedent with shared code in common, which
>>>>>>>>> shares vendor-specific code between crypto and net.
>>>>>>>>
>>>>>>>> I see the motivation behind drivers/vdpa. However, vdpa and net
>>>>>>>> drivers are tightly related and I would prefer to avoid extra
>>>>>>>> complexity here. Right now simplicity wins out for me.
>>>>>>>> No strong opinion on the topic, but it definitely requires
>>>>>>>> better and clearer motivation why a new class should be
>>>>>>>> introduced and existing drivers restructured.
>>>>>>>>
>>>>>>>
>>>>>>> Why do you think there is extra complexity?
>>>>>>
>>>>>> Even grep becomes a bit more complicated :-)
>>>>>>
>>>>>>> I think from a design standpoint it is more correct to create a
>>>>>>> dedicated class, for the following reasons:
>>>>>>> 1. All of the classes implement a different set of ops. For
>>>>>>> example, the cryptodev has a defined set of ops; the same goes for
>>>>>>> the compress driver and the ethdev driver. Each one of them has
>>>>>>> different ops. Going by this definition, since vDPA has a different
>>>>>>> set of ops, it makes sense for it to be in a different class.
>>>>>>>
>>>>>>> 2. Even though both drivers (eth/vdpa) handle traffic from the
>>>>>>> NIC, most of the code is different (the difference also depends on
>>>>>>> the manufacturer), so even the shared code base is not large and
>>>>>>> can be shared using the common directory. For example, ifc doesn't
>>>>>>> have any shared code.
>>>>>>>
>>>>>>> What do you think?
>>>>>>
>>>>>> The real question is whether the difference in the ops implemented
>>>>>> is the key difference which should enforce location in different
>>>>>> directories, or whether the underlying device class is the key.
>>>>>> Roughly:
>>>>>>  - net driver is control + data path
>>>>>>  - vdpa driver is control path only
>>>>>> My fear is that the control path will grow more and more
>>>>>> (Rx mode, RSS, filters and so on)
>>>>>
>>>>> I think this is a reasonable concern.
>>>>>
>>>>> One thing that needs to be clarified is that the control path (ops)
>>>>> in a vdpa driver is something very different from the control path in
>>>>> a net driver. vdpa is very generic (or more precisely, vhost-related)
>>>>> instead of network-related:
>>>>>
>>>>> https://github.com/DPDK/dpdk/blob/aef1d0733179/lib/librte_vhost/rte_vdpa.h#L40-L78
>>>>>
>>>>> It's built on top of the vhost-user protocol; it manipulates
>>>>> virtqueues, virtio/vhost features, the memory table, ...
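>>>>>
>>>>> Abridged, the ops set defined there looks roughly like this (see the
>>>>> header linked above for the exact and complete signatures):
>>>>>
>>>>>     /* abridged from lib/librte_vhost/rte_vdpa.h */
>>>>>     struct rte_vdpa_dev_ops {
>>>>>             int (*get_queue_num)(int did, uint32_t *queue_num);
>>>>>             int (*get_features)(int did, uint64_t *features);
>>>>>             int (*dev_conf)(int vid);   /* configure the device */
>>>>>             int (*dev_close)(int vid);  /* close the device */
>>>>>             int (*set_vring_state)(int vid, int vring, int state);
>>>>>             /* ... plus vfio fd, notify area and LM helpers ... */
>>>>>     };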
>>>>>
>>>>> Technically, it's possible to have blk, crypto, ...
>>>>> vdpa devices as well (we already have vhost-user-blk,
>>>>> vhost-user-crypto, ...).
>>>>>
>>>>> But network-specific features will come eventually, e.g. RSS. One
>>>>> possible way to solve this is to define a generic event callback in
>>>>> the vdpa ops, and the vdpa driver can request the corresponding info
>>>>> from vhost based on the event received.
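>>>>>
>>>>> A rough sketch of what such a callback could look like (purely
>>>>> hypothetical; the event type and op name below are made up):
>>>>>
>>>>>     /* hypothetical events the vhost library could report */
>>>>>     enum rte_vdpa_event_type {
>>>>>             RTE_VDPA_EVENT_RSS_UPDATE,  /* guest changed RSS config */
>>>>>             RTE_VDPA_EVENT_MTU_UPDATE,  /* guest changed the MTU */
>>>>>     };
>>>>>
>>>>>     /* hypothetical new op: on an event, the driver would query the
>>>>>      * corresponding state back from the vhost library */
>>>>>     int (*dev_event)(int did, enum rte_vdpa_event_type event);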
>>>>>
>>>>> Another thing that needs to be clarified is that the control path
>>>>> in vdpa that is supposed to be used by DPDK apps directly should
>>>>> always be generic; it should just be something like:
>>>>>
>>>>> int rte_vdpa_find_device_id(struct rte_vdpa_dev_addr *addr);
>>>>> int rte_vhost_driver_attach_vdpa_device(const char *path, int did);
>>>>> int rte_vhost_driver_detach_vdpa_device(const char *path);
>>>>> ...
>>>>>
>>>>> That is to say, users just need to bind the vdpa device to the
>>>>> vhost connection. The control path ops in vdpa are supposed to be
>>>>> called by the vhost library transparently based on the events on the
>>>>> vhost-user connection, i.e. the vdpa device will be configured
>>>>> (including RSS) by the guest driver in QEMU "directly" via the
>>>>> vhost-user protocol instead of by the DPDK app in the host.
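>>>>>
>>>>> Roughly, the whole involvement of the app could then be reduced to
>>>>> something like the sketch below (modelled on examples/vdpa; the
>>>>> socket path and the address initialization are made up, and error
>>>>> handling is omitted):
>>>>>
>>>>>     #include <rte_vdpa.h>
>>>>>     #include <rte_vhost.h>
>>>>>
>>>>>     struct rte_vdpa_dev_addr addr = { ... };   /* from devargs */
>>>>>     const char *path = "/tmp/vhost-user0";     /* made-up path */
>>>>>
>>>>>     int did = rte_vdpa_find_device_id(&addr);
>>>>>     rte_vhost_driver_register(path, 0);
>>>>>     rte_vhost_driver_attach_vdpa_device(path, did);
>>>>>     rte_vhost_driver_start(path);
>>>>>     /* from here on, the vhost library calls the vdpa ops itself */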
>>>>
>>>> Tiwei, just to be clear:
>>>> do you think vDPA drivers should be in the drivers/vdpa directory?
>>>
>>> I was just trying to clarify two facts about vDPA to address Andrew's
>>> concern. And back to the question: to make sure that we don't miss
>>> anything important, it might be helpful (although maybe not very
>>> related) to also clarify how to support vDPA in OvS at the same time,
>>> which isn't quite clear to me yet...
>>>
>>> Regards,
>>> Tiwei
>>>


