[dpdk-dev] [RFC PATCH 0/5] graph: introduce graph subsystem

Mattias Rönnblom mattias.ronnblom at ericsson.com
Fri Feb 21 16:38:54 CET 2020


On 2020-02-21 12:10, Thomas Monjalon wrote:
> 21/02/2020 11:30, Jerin Jacob:
>> On Mon, Feb 17, 2020 at 4:28 PM Jerin Jacob <jerinjacobk at gmail.com> wrote:
>>> On Mon, Feb 17, 2020 at 2:08 PM Thomas Monjalon <thomas at monjalon.net> wrote:
>>> Thanks for starting this discussion now. It is an interesting
>>> discussion.  Some thoughts below.
>>> We can decide based on community consensus and follow a single rule
>>> across the components.
>> Thomas,
>>
>> No feedback yet on the below questions.
> Indeed. I was waiting for opinions from others.
>
>> If there is no consensus in this email thread, I would like to propose
>> this topic for the 26th Feb TB meeting.
> I gave my opinion below.
> If a consensus cannot be reached, I agree with the request to the techboard.
>
>
>>>> 17/02/2020 08:19, Jerin Jacob:
>>>>> I got initial comments from Ray and Stephen on this RFC[1]. Thanks for
>>>>> the comments.
>>>>>
>>>>> Is anyone else planning an architecture-level or API-usage-level
>>>>> review, or a review of any other top-level aspects?
>>>> If we add rte_graph to DPDK, we will have 2 similar libraries.
>>>>
>>>> I already proposed several times to move rte_pipeline to a separate
>>>> repository, for two reasons:
>>>>          1/ it acts at a higher API layer
>>> We need to define what the higher-layer API is. Is it processing beyond L2?
> My opinion is that any API which is implemented differently
> for different hardware should be in DPDK.
> Hardware devices can offload protocol processing higher than L2,
> so L2 does not look like a good limit from my point of view.
>
If you assume the capability of networking hardware will grow, and you
want to unify different networking hardware with varying capabilities
(and also include software-only implementations) under one API, then you
might well end up growing DPDK into the software stack you mention
below. Soft implementations of complex protocols will require operating
system-like support services such as timers, RCU, various lock-less data
structures, deferred work mechanisms, counter handling frameworks,
control plane interfaces, etc. Coupling should always be avoided, of
course, but DPDK would inevitably no longer be a pick-and-choose
smörgåsbord library - at least not for consumers who want to utilize
this higher-layer functionality.
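
To illustrate: DPDK already ships some of these services. Below is a
minimal sketch (untested, error handling omitted) that uses the
existing rte_timer API to arm a periodic software timer - say, for the
counter handling mentioned above:

    #include <rte_eal.h>
    #include <rte_lcore.h>
    #include <rte_cycles.h>
    #include <rte_timer.h>

    static struct rte_timer stats_timer;

    /* Periodic callback; aggregate or flush per-lcore counters here. */
    static void
    stats_cb(struct rte_timer *tim, void *arg)
    {
            (void)tim;
            (void)arg;
    }

    int
    main(int argc, char **argv)
    {
            if (rte_eal_init(argc, argv) < 0)
                    return -1;

            rte_timer_subsystem_init();
            rte_timer_init(&stats_timer);

            /* Fire once per second on the current lcore. */
            rte_timer_reset(&stats_timer, rte_get_timer_hz(), PERIODICAL,
                            rte_lcore_id(), stats_cb, NULL);

            for (;;)
                    rte_timer_manage(); /* run expired timers on this lcore */
    }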

This would make DPDK more of a packet processing run-time, or a
special-purpose networking operating system, than the "bunch of
Ethernet drivers in user space" it started out as.

I'm not saying that's a bad thing. In fact, I think it sounds like an
interesting option, although also a very challenging one. From what I
can see, DPDK has already set out along this route; whether that is a
conscious decision or not, I don't know. Add to this: if Linux expands
further with AF_XDP-like features, beyond simple packet I/O, it might
take over not only DPDK's original concerns, but also more of its
current ones.

>>> In the context of the graph library: it is a framework, not using any
>>> subsystem API other than EAL, and it is under lib/librte_graph.
>>> The nodes library uses the graph library and other subsystem components
>>> such as ethdev, and it is under lib/librte_node/ (see the node sketch
>>> after the thread below).
>>>
>>>
>>> Another interesting question is: what would the issue be with DPDK
>>> supporting processing beyond L2, or higher-level protocols?
> Going higher than L2 is definitely OK in DPDK, as long as it is related
> to hardware capabilities, not to a software stack (which can be a DPDK
> application).
>
>
>>>>          2/ there can be different solutions in this layer
>>> Is there any issue with that?
>>> There is overlap with the distributor library and eventdev as well,
>>> and with ethdev and the SW traffic manager libraries. That list goes on.
> I don't know how much of an issue it is.
> But I think it shows that at least one implementation is not generic enough.
>
>
>>>> I think 1/ was commonly agreed in the community.
>>>> Now we see one more proof of the reason 2/.
>>>>
>>>> I believe it is time to move rte_pipeline (Packet Framework)
>>>> to a separate repository, and to welcome rte_graph in another
>>>> separate repository as well.
>>> What would be the gain from this?
> The gain is to be clear about what the focus should be for contributors
> working on the main DPDK repository - what is expected to be
> maintained, tested, etc.
>
>
>>> My concerns are:
>>> # Like packet-gen, the new code will be filled with unnecessary DPDK
>>> version checks and compatibility issues.
>>> # Anything that is not in the main DPDK repo is a second-class citizen.
>>> # Customers have the pain of using two repos and two releases.
>>> Internally, it can be two different repos, but the release needs to go
>>> through one repo.
>>>
>>> If we focus ONLY on the driver API, then how can DPDK grow further?
>>> If the Linux kernel had been thought of as just the core kernel, with
>>> networking/storage in different repos, would it have grown as it has?
> The Linux kernel selects what enters its focus and what does not.
> And I wonder: what is the desire to extend/grow the scope of a library?
>
>
>>> What is the real concern? Maintenance?
>>>
>>>> I think the original DPDK repository should focus on low-level features
>>>> which offer hardware offloads and optimizations.
>>> The nodes can be vendor-specific, to optimize for specific use cases.
>>> As I mentioned in the cover letter:
>>>
>>> "
>>> 2) Based on our experience, NPU HW accelerators are so different from
>>> one vendor to another. Going forward, we believe API abstraction may
>>> not be enough to abstract the differences in HW. Vendor-specific nodes
>>> can abstract the HW differences and reuse the generic nodes as needed.
>>> This would help both the silicon vendors and DPDK end users.
>>> "
>>>
>>> Thoughts from other folks?
>>>
>>>
>>>> Consuming the low-level API in different abstractions,
>>>> and building applications, should be done on top of dpdk.git.
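
As a footnote to the node discussion above: under the proposed
rte_graph API, a vendor- or application-specific node would look
roughly like the sketch below. The node name "example_node" is
invented here for illustration, and the exact struct fields and
signatures may differ in the final version of the RFC.

    #include <rte_graph.h>
    #include <rte_graph_worker.h>

    /* A hypothetical pass-through node that forwards every object it
     * receives to its first edge. A complete graph would also need a
     * source node (e.g. an ethdev RX node) feeding objects into it. */
    static uint16_t
    example_node_process(struct rte_graph *graph, struct rte_node *node,
                         void **objs, uint16_t nb_objs)
    {
            /* Send everything to next node 0, declared below. */
            rte_node_enqueue(graph, node, 0, objs, nb_objs);
            return nb_objs;
    }

    static struct rte_node_register example_node = {
            .name = "example_node",
            .process = example_node_process,
            .nb_edges = 1,
            .next_nodes = { "pkt_drop" }, /* generic drop node */
    };
    RTE_NODE_REGISTER(example_node);

A worker lcore would then instantiate a graph containing such nodes
with rte_graph_create() and run it in a loop with rte_graph_walk().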


