[dpdk-dev] [RFC PATCH 0/5] graph: introduce graph subsystem

dave at barachs.net
Fri Feb 21 16:53:18 CET 2020

I can share a data point with respect to constructing a reasonably functional network stack. Original work on the project that eventually became fd.io vpp started in 2002. I've worked on the vpp code base full-time for 18 years.

In terms of lines of code: the vpp graph subsystem is a minuscule fraction of the project as a whole. We've rewritten performance-critical bits of the vpp netstack multiple times.

FWIW... Dave  

-----Original Message-----
From: Mattias Rönnblom <mattias.ronnblom at ericsson.com> 
Sent: Friday, February 21, 2020 10:39 AM
To: Thomas Monjalon <thomas at monjalon.net>; Jerin Jacob <jerinjacobk at gmail.com>
Cc: Jerin Jacob <jerinj at marvell.com>; Ray Kinsella <mdr at ashroe.eu>; dpdk-dev <dev at dpdk.org>; Prasun Kapoor <pkapoor at marvell.com>; Nithin Dabilpuram <ndabilpuram at marvell.com>; Kiran Kumar K <kirankumark at marvell.com>; Pavan Nikhilesh <pbhagavatula at marvell.com>; Narayana Prasad <pathreya at marvell.com>; nsaxena at marvell.com; sshankarnara at marvell.com; Honnappa Nagarahalli <honnappa.nagarahalli at arm.com>; David Marchand <david.marchand at redhat.com>; Ferruh Yigit <ferruh.yigit at intel.com>; Andrew Rybchenko <arybchenko at solarflare.com>; Ajit Khaparde <ajit.khaparde at broadcom.com>; Ye, Xiaolong <xiaolong.ye at intel.com>; Raslan Darawsheh <rasland at mellanox.com>; Maxime Coquelin <maxime.coquelin at redhat.com>; Akhil Goyal <akhil.goyal at nxp.com>; Cristian Dumitrescu <cristian.dumitrescu at intel.com>; John McNamara <john.mcnamara at intel.com>; Richardson, Bruce <bruce.richardson at intel.com>; Anatoly Burakov <anatoly.burakov at intel.com>; Gavin Hu <gavin.hu at arm.com>; David Christensen <drc at linux.vnet.ibm.com>; Ananyev, Konstantin <konstantin.ananyev at intel.com>; Pallavi Kadam <pallavi.kadam at intel.com>; Olivier Matz <olivier.matz at 6wind.com>; Gage Eads <gage.eads at intel.com>; Rao, Nikhil <nikhil.rao at intel.com>; Erik Gabriel Carrillo <erik.g.carrillo at intel.com>; Hemant Agrawal <hemant.agrawal at nxp.com>; Artem V. Andreev <artem.andreev at oktetlabs.ru>; Stephen Hemminger <sthemmin at microsoft.com>; Shahaf Shuler <shahafs at mellanox.com>; Wiles, Keith <keith.wiles at intel.com>; Jasvinder Singh <jasvinder.singh at intel.com>; Vladimir Medvedkin <vladimir.medvedkin at intel.com>; techboard at dpdk.org; Stephen Hemminger <stephen at networkplumber.org>; dave at barachs.net
Subject: Re: [dpdk-dev] [RFC PATCH 0/5] graph: introduce graph subsystem

On 2020-02-21 12:10, Thomas Monjalon wrote:
> 21/02/2020 11:30, Jerin Jacob:
>> On Mon, Feb 17, 2020 at 4:28 PM Jerin Jacob <jerinjacobk at gmail.com> wrote:
>>> On Mon, Feb 17, 2020 at 2:08 PM Thomas Monjalon <thomas at monjalon.net> wrote:
>>> Thanks for starting this discussion now. It is an interesting one.
>>> Some thoughts below.
>>> We can decide based on community consensus and follow a single rule
>>> across the components.
>> Thomas,
>> No feedback yet on the below questions.
> Indeed. I was waiting for opinions from others.
>> If there is no consensus in the email thread, I would like to propose
>> this topic for the 26th Feb TB meeting.
> I gave my opinion below.
> If a consensus cannot be reached, I agree with the request to the techboard.
>>>> 17/02/2020 08:19, Jerin Jacob:
>>>>> I got initial comments from Ray and Stephen on this RFC[1]. Thanks
>>>>> for the comments.
>>>>> Is anyone else planning to do an architecture-level or API-usage-level
>>>>> review, or any review of other top-level aspects?
>>>> If we add rte_graph to DPDK, we will have two similar libraries.
>>>> I have already proposed several times to move rte_pipeline into a
>>>> separate repository, for two reasons:
>>>>          1/ it is acting at a higher API layer
>>> We need to define what the higher-layer API is. Is it processing beyond L2?
> My opinion is that any API which is implemented differently for
> different hardware should be in DPDK.
> Hardware devices can offload protocol processing higher than L2, so L2
> does not look like a good limit from my point of view.
If you assume the capabilities of networking hardware will grow, and you want to unify different networking hardware with varying capabilities (and also software-only implementations) under one API, then you might well end up growing DPDK into the software stack you mention below. Soft implementations of complex protocols will require operating-system-like support services such as timers, RCU, various lock-less data structures, deferred-work mechanisms, counter-handling frameworks, control plane interfaces, etc. Coupling should of course always be avoided, but DPDK would inevitably no longer be a pick-and-choose smörgåsbord library - at least as long as the consumer wants to utilize this higher-layer functionality.

This would make DPDK more of a packet processing run-time or a special-purpose, networking operating system than the "a bunch of Ethernet drivers in user space" as it started out as.

I'm not saying that's a bad thing. In fact, I think it sounds like an interesting option, although also a very challenging one. From what I can see, DPDK has already set out along this route. Whether this is a conscious decision or not, I don't know. Add to this: if Linux expands further with AF_XDP-like features, beyond simply packet I/O, it might not only try to take over DPDK's original concerns, but also more of the current ones.
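
To make that point concrete: DPDK already ships several of these OS-like
services today. A minimal sketch, using only APIs that exist in dpdk.git
(rte_timer and rte_rcu_qsbr); the lcore id, the callback body and the
thread-count sizing are placeholders, and error handling is omitted:

#include <rte_cycles.h>
#include <rte_malloc.h>
#include <rte_rcu_qsbr.h>
#include <rte_timer.h>

static void
tick_cb(struct rte_timer *tim, void *arg)
{
	/* periodic housekeeping, e.g. counter aggregation */
}

static struct rte_rcu_qsbr *
setup_services(unsigned int lcore_id)
{
	static struct rte_timer tim;
	struct rte_rcu_qsbr *qs;

	/* OS-like service #1: timers */
	rte_timer_subsystem_init();
	rte_timer_init(&tim);
	rte_timer_reset(&tim, rte_get_timer_hz(), PERIODICAL,
			lcore_id, tick_cb, NULL);

	/* OS-like service #2: quiescent-state-based RCU */
	qs = rte_malloc(NULL, rte_rcu_qsbr_get_memsize(RTE_MAX_LCORE),
			RTE_CACHE_LINE_SIZE);
	rte_rcu_qsbr_init(qs, RTE_MAX_LCORE);
	rte_rcu_qsbr_thread_register(qs, lcore_id);

	/* each worker then calls rte_timer_manage() and
	 * rte_rcu_qsbr_quiescent(qs, lcore_id) from its poll loop */
	return qs;
}

None of this requires anything outside dpdk.git - which is rather the
point: the "run-time" pieces are accumulating already.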

>>> In the context of the graph library: it is a framework, using no
>>> subsystem API other than EAL, and it is under lib/librte_graph.
>>> The nodes library uses the graph library and other subsystem components
>>> such as ethdev, and it is under lib/lib_node/.
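
For context, the usage model proposed in the RFC is: nodes are registered
(statically or at runtime), a graph is created per worker from node-name
patterns, and the worker then walks the graph. A minimal sketch following
the RFC's API; the graph name, node patterns and socket id are
illustrative, and details may differ in the final version:

#include <rte_graph.h>
#include <rte_graph_worker.h>

static const char *patterns[] = { "ethdev_rx-*", "ethdev_tx-*", "pkt_drop" };

static struct rte_graph_param prm = {
	.socket_id = 0,			/* placeholder NUMA socket */
	.nb_node_patterns = 3,
	.node_patterns = patterns,
};

static void
worker_loop(void)
{
	struct rte_graph *graph;

	/* instantiate the nodes matching the patterns into one graph */
	if (rte_graph_create("worker0", &prm) == RTE_GRAPH_ID_INVALID)
		return;
	graph = rte_graph_lookup("worker0");

	for (;;)
		rte_graph_walk(graph);	/* invoke each node's process() */
}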
>>> Another interesting question would be: what is the issue with DPDK
>>> supporting beyond L2, or higher-level protocols?
> Definitely, higher than L2 is OK in DPDK as long as it is related to
> hardware capabilities, not a software stack (which can be a DPDK application).
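
To ground "higher than L2" in an existing example: rte_flow already lets
the NIC parse and act on L3/L4 headers. A hedged sketch (the port id and
queue index are arbitrary, error handling is omitted) that steers all TCP
traffic to a given Rx queue entirely in hardware:

#include <rte_flow.h>

static struct rte_flow *
steer_tcp_to_queue(uint16_t port_id)
{
	struct rte_flow_attr attr = { .ingress = 1 };
	struct rte_flow_action_queue queue = { .index = 1 };
	struct rte_flow_item pattern[] = {
		{ .type = RTE_FLOW_ITEM_TYPE_ETH },
		{ .type = RTE_FLOW_ITEM_TYPE_IPV4 },
		{ .type = RTE_FLOW_ITEM_TYPE_TCP },	/* NIC matches at L4 */
		{ .type = RTE_FLOW_ITEM_TYPE_END },
	};
	struct rte_flow_action actions[] = {
		{ .type = RTE_FLOW_ACTION_TYPE_QUEUE, .conf = &queue },
		{ .type = RTE_FLOW_ACTION_TYPE_END },
	};
	struct rte_flow_error err;

	return rte_flow_create(port_id, &attr, pattern, actions, &err);
}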
>>>>          2/ there can be different solutions in this layer
>>> Is there any issue with that?
>>> There is overlap with the distributor library and eventdev as well,
>>> and with the ethdev and SW traffic manager libraries. The list goes on.
> I don't know how much of an issue it is.
> But I think it shows that at least one implementation is not generic enough.
>>>> I think 1/ was commonly agreed in the community.
>>>> Now we see one more proof of the reason 2/.
>>>> I believe it is time to move rte_pipeline (Packet Framework) into a
>>>> separate repository, and to welcome rte_graph into another
>>>> separate repository.
>>> What would be gain out of this?
> The gain is clarity about what the focus should be for
> contributors working on the main DPDK repository:
> what is expected to be maintained, tested, etc.
>>> My concerns are:
>>> # Like packet-gen, the new code will be filled with unnecessary DPDK
>>> version checks and unnecessary compatibility issues.
>>> # Anything that is not in the main DPDK repo is a second-class citizen.
>>> # Customers have the pain of using two repos and two releases.
>>> Internally, it can be two different repos, but the release needs to go
>>> through one repo.
>>> If we are focusing ONLY on the driver API, then how can DPDK grow
>>> further? If the Linux kernel had been thought of as just the core kernel,
>>> with networking/storage in different repos, would it have grown as it has?
> The Linux kernel selects what can enter its focus and what cannot.
> And I wonder what the desire is behind extending/growing the scope of a library?
>>> What is the real concern? Maintenance?
>>>> I think the original DPDK repository should focus on low-level
>>>> features that offer hardware offloads and optimizations.
>>> The nodes can be vendor-specific to optimize specific use cases.
>>> As I mentioned in the cover letter,
>>> "
>>> 2) Based on our experience, NPU HW accelerators are so different from
>>> one vendor to another. Going forward, we believe API
>>> abstraction may not be enough to abstract the differences in HW.
>>> Vendor-specific nodes can abstract the HW differences and reuse the generic nodes as needed.
>>> This would help both the silicon vendors and DPDK end users.
>>> "
>>> Thoughts from other folks?
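
To illustrate what such a vendor-specific node could look like under the
RFC's node API: the node below ("vendor_ipsec", and the HW offload call
inside it, are invented for illustration) does its work on an accelerator
and then hands packets to the generic ip4_lookup node, so the rest of the
graph stays vendor-neutral:

#include <rte_graph.h>
#include <rte_graph_worker.h>

#define VENDOR_NEXT_IP4_LOOKUP 0

static uint16_t
vendor_ipsec_process(struct rte_graph *graph, struct rte_node *node,
		     void **objs, uint16_t nb_objs)
{
	/* vendor_hw_decrypt(objs, nb_objs);  -- imaginary HW offload */
	rte_node_enqueue(graph, node, VENDOR_NEXT_IP4_LOOKUP,
			 objs, nb_objs);
	return nb_objs;
}

static struct rte_node_register vendor_ipsec_node = {
	.name = "vendor_ipsec",
	.process = vendor_ipsec_process,
	.nb_edges = 1,
	.next_nodes = {
		[VENDOR_NEXT_IP4_LOOKUP] = "ip4_lookup",  /* generic node */
	},
};
RTE_NODE_REGISTER(vendor_ipsec_node);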
>>>> Consuming the low-level API in different abstractions, and building 
>>>> applications, should be done on top of dpdk.git.
