[dpdk-dev] [RFC PATCH 0/5] graph: introduce graph subsystem

Jerin Jacob jerinjacobk at gmail.com
Fri Feb 21 11:30:16 CET 2020


On Mon, Feb 17, 2020 at 4:28 PM Jerin Jacob <jerinjacobk at gmail.com> wrote:
>
> On Mon, Feb 17, 2020 at 2:08 PM Thomas Monjalon <thomas at monjalon.net> wrote:
> >
> > Hi Jerin,
>
> Hi Thomas,
>
> Thanks for starting this discussion now. It is an interesting
> discussion.  Some thoughts below.
> We can decide based on community consensus and follow a single rule
> across the components.

Thomas,

No feedback yet on the questions below.

If there is no consensus in this email thread, I would like to propose
this topic for the 26th Feb TB meeting.



>
> >
> > 17/02/2020 08:19, Jerin Jacob:
> > > I got initial comments from Ray and Stephen on this RFC[1]. Thanks for
> > > the comments.
> > >
> > > Is anyone else planning an architecture-level or API-usage-level
> > > review, or any review of other top-level aspects?
> >
> > If we add rte_graph to DPDK, we will have 2 similar libraries.
> >
> > I already proposed several times to move rte_pipeline in a separate
> > repository for two reasons:
> >         1/ it is acting at a higher API layer level
>
> We need to define what the higher-layer API is. Is it processing beyond L2?
>
> In the context of the graph library, it is a framework that does not use
> any subsystem API other than EAL, and it lives under lib/librte_graph.
> The nodes library uses the graph library and other subsystem components
> such as ethdev, and it lives under lib/librte_node/.
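>
> To make that layering concrete, here is a minimal sketch of how an
> application would sit on top of librte_graph alone, using the API names
> proposed in this RFC (exact signatures may still change during review):
>
> #include <stdbool.h>
> #include <rte_common.h>
> #include <rte_graph.h>
> #include <rte_graph_worker.h>
>
> /* Node name patterns: the nodes themselves (ethdev_rx, ip4_lookup,
>  * ...) live in librte_node, not in the graph framework. */
> static const char *patterns[] = {
>         "ethdev_rx-*", "ip4_lookup", "ethdev_tx-*", "pkt_drop",
> };
>
> static volatile bool quit;
>
> static int
> worker_main(void *arg __rte_unused)
> {
>         struct rte_graph_param prm = {
>                 .socket_id = 0,
>                 .nb_node_patterns = RTE_DIM(patterns),
>                 .node_patterns = patterns,
>         };
>         struct rte_graph *graph;
>
>         /* Create a per-worker graph from the pattern list. */
>         if (rte_graph_create("worker0", &prm) == RTE_GRAPH_ID_INVALID)
>                 return -1;
>         graph = rte_graph_lookup("worker0");
>
>         /* Fast path: each walk invokes the process() callback of
>          * every node that has pending work. */
>         while (!quit)
>                 rte_graph_walk(graph);
>         return 0;
> }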
>
>
> Another interesting question would be: what is the issue with DPDK
> supporting processing beyond L2, or higher-level protocols?
>
>
> >         2/ there can be different solutions in this layer
>
> Is there any issue with that?
> There is overlap between the distributor library and eventdev as well,
> and between ethdev and the SW traffic manager libraries. That list goes on.
>
> >
> > I think 1/ was commonly agreed in the community.
> > Now we see one more proof of reason 2/.
> >
> > I believe it is time to move rte_pipeline (Packet Framework)
> > in a separate repository, and welcome rte_graph as well in another
> > separate repository.
>
> What would be the gain from this?
>
> My concerns are:
> # Like packet-gen, the new code will be filled with unnecessary DPDK
> version checks and unnecessary compatibility workarounds.
> # Anything that is not in the main dpdk repo is a second-class citizen.
> # Customers have the pain of using two repos and two releases.
> Internally, it can be two different repos, but the release needs to go
> through one repo.
>
> If we are focusing ONLY on the driver API, then how can DPDK grow
> further? If the Linux kernel had been conceived as just the core kernel,
> with networking/storage in separate repos, would it have grown the way
> it did?
>
> What is the real concern? Maintenance?
>
> > I think the original DPDK repository should focus on low-level features
> > which offer hardware offloads and optimizations.
>
> The nodes can be vendor-specific to optimize specific use cases.
> As I mentioned in the cover letter:
>
> "
> 2) Based on our experience, NPU HW accelerators are so different from one
> vendor to another. Going forward, we believe API abstraction may not be
> enough to abstract the differences in HW. Vendor-specific nodes can
> abstract the HW differences and reuse the generic nodes as needed.
> This would help both the silicon vendors and DPDK end users.
> "
>
> Thoughts from other folks?
>
>
> > Consuming the low-level API in different abstractions,
> > and building applications, should be done on top of dpdk.git.
> >
> >

