[dpdk-dev] [RFC PATCH 0/5] graph: introduce graph subsystem

Jerin Jacob jerinjacobk at gmail.com
Fri Feb 21 16:56:44 CET 2020


On Fri, Feb 21, 2020 at 4:40 PM Thomas Monjalon <thomas at monjalon.net> wrote:
>
> 21/02/2020 11:30, Jerin Jacob:
> > On Mon, Feb 17, 2020 at 4:28 PM Jerin Jacob <jerinjacobk at gmail.com> wrote:
> > > On Mon, Feb 17, 2020 at 2:08 PM Thomas Monjalon <thomas at monjalon.net> wrote:
> > > Thanks for starting this discussion now; it is an interesting one.
> > > Some thoughts below.
> > > We can decide based on community consensus and follow a single rule
> > > across the components.
> >
> > Thomas,
> >
> > No feedback yet on the below questions.
>
> Indeed. I was waiting for opinions from others.

Me too.

>
> > If there is no consensus in the email thread, I would like to propose
> > this topic for the 26th Feb TB meeting.
>
> I gave my opinion below.
> If a consensus cannot be reached, I agree with the request to the techboard.

OK.

>
>
> > > > 17/02/2020 08:19, Jerin Jacob:
> > > > > I got initial comments from Ray and Stephen on this RFC[1]. Thanks for
> > > > > the comments.
> > > > >
> > > > > Is anyone else planning an architecture-level or API-usage-level
> > > > > review, or a review of other top-level aspects?
> > > >
> > > > If we add rte_graph to DPDK, we will have 2 similar libraries.
> > > >
> > > > I already proposed several times to move rte_pipeline to a separate
> > > > repository, for two reasons:
> > > >         1/ it is acting at a higher API layer
> > >
> > > We need to define what the higher-layer API is. Is it processing beyond L2?
>
> My opinion is that any API which is implemented differently
> for different hardware should be in DPDK.

We also need to define the treatment of SIMD optimization (which is
architecture-specific rather than HW-specific), as the graph and node
libraries will have SIMD optimizations as well.
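For example, a node's fast path could select a SIMD implementation per
architecture at build time. Below is an illustrative sketch only (the
function is invented for illustration, not part of the RFC), showing the
kind of architecture-specific code a node would carry:

#include <stdint.h>

#if defined(__SSE4_2__)
#include <immintrin.h>
static inline void
process_burst(uint32_t *dst, const uint32_t *src, uint16_t n)
{
	uint16_t i;

	/* x86 path: four 32-bit values per iteration. */
	for (i = 0; i + 4 <= n; i += 4)
		_mm_storeu_si128((__m128i *)(dst + i),
				 _mm_loadu_si128((const __m128i *)(src + i)));
	for (; i < n; i++)
		dst[i] = src[i];
}
#elif defined(__ARM_NEON)
#include <arm_neon.h>
static inline void
process_burst(uint32_t *dst, const uint32_t *src, uint16_t n)
{
	uint16_t i;

	/* arm64 path: same operation using NEON. */
	for (i = 0; i + 4 <= n; i += 4)
		vst1q_u32(dst + i, vld1q_u32(src + i));
	for (; i < n; i++)
		dst[i] = src[i];
}
#else
static inline void
process_burst(uint32_t *dst, const uint32_t *src, uint16_t n)
{
	uint16_t i;

	/* Portable scalar fallback. */
	for (i = 0; i < n; i++)
		dst[i] = src[i];
}
#endif

Such code is architecture-specific but not device-specific, which is why
the policy needs to cover it explicitly.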

In general, if the above policy were enforced, we would need to split DPDK as below:
dpdk.git
----------
librte_compressdev
librte_bbdev
librte_eventdev
librte_pci
librte_rawdev
librte_eal
librte_security
librte_mempool
librte_mbuf
librte_cryptodev
librte_ethdev

other repo(s).
----------------
librte_cmdline
librte_cfgfile
librte_bitratestats
librte_efd
librte_latencystats
librte_kvargs
librte_jobstats
librte_gso
librte_gro
librte_flow_classify
librte_pipeline
librte_net
librte_metrics
librte_meter
librte_member
librte_table
librte_stack
librte_sched
librte_rib
librte_reorder
librte_rcu
librte_power
librte_distributor
librte_bpf
librte_ip_frag
librte_hash
librte_fib
librte_timer
librte_telemetry
librte_port
librte_pdump
librte_kni
librte_acl
librte_vhost
librte_ring
librte_lpm
librte_ipsec

> Hardware devices can offload protocol processing higher than L2,
> so L2 does not look to be a good limit from my point of view.

A node may use HW-specific optimizations if needed.
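For reference, a node hooks into the graph framework roughly as below.
This is a minimal sketch in the style of the RFC's proposed registration
API; the exact names (including the "pkt_drop" next node) are assumptions
and may differ:

#include <rte_graph.h>

static uint16_t
my_node_process(struct rte_graph *graph, struct rte_node *node,
		void **objs, uint16_t nb_objs)
{
	/* Inspect the burst of objects (mbufs), pick a next edge, and
	 * enqueue; this sketch sends everything to edge 0 ("pkt_drop"). */
	rte_node_enqueue(graph, node, 0, objs, nb_objs);
	return nb_objs;
}

static struct rte_node_register my_node = {
	.name = "my_node",
	.process = my_node_process,
	.nb_edges = 1,
	.next_nodes = { "pkt_drop" },
};
RTE_NODE_REGISTER(my_node);

The process function is where an implementation is free to use HW-specific
or SIMD paths; the graph wiring stays generic.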


>
>
> > > In the context of the graph library: it is a framework, not using any
> > > subsystem API other than EAL, and it is under lib/librte_graph.
> > > The node library uses graph and other subsystem components such as
> > > ethdev, and it is under lib/lib_node/.
> > >
> > >
> > > Another interesting question would be: what would be the issue with
> > > DPDK supporting beyond L2, or higher-level protocols?
>
> Definitely, higher than L2 is OK in DPDK as long as it is related to hardware
> capabilities, not to a software stack (which can be a DPDK application).

"Software stack" is a vague term; librte_ipsec could be considered a software stack.

>
>
> > > >         2/ there can be different solutions in this layer
> > >
> > > Is there any issue with that?
> > > There is overlap between the distributor library and eventdev as well,
> > > and between the ethdev and SW traffic manager libraries. That list goes on.
>
> I don't know how much it is an issue.
> But I think it shows that at least one implementation is not generic enough.

I don't think the distributor library exists because eventdev is not generic.
In fact, the SW traffic manager is hooked into ethdev as well; it can work as both.

>
>
> > > > I think 1/ was commonly agreed in the community.
> > > > Now we see one more proof of reason 2/.
> > > >
> > > > I believe it is time to move rte_pipeline (Packet Framework) to a
> > > > separate repository, and to welcome rte_graph in another separate
> > > > repository.
> > >
> > > What would be gained from this?
>
> The gain is to be clear about what should be the focus for contributors
> working on the main DPDK repository.

Not sure how other code in the repo would cause a loss of focus.
By that logic, the Linux kernel is not focused at all.

> What is expected to be maintained, tested, etc.

We would need to maintain and test the code in the OTHER dpdk repo as well.


>
>
> > > My concerns are:
> > > # Like packet-gen, the new code will be filled with unnecessary DPDK
> > > version checks and unnecessary compatibility issues.
> > > # Anything that is not in the main dpdk repo is a second-class citizen.
> > > # Customers have the pain of using two repos and two releases.
> > > Internally, it can be two different repos, but the release needs to
> > > go through one repo.
> > >
> > > If we are focusing ONLY on the driver API, then how can DPDK grow
> > > further? If the Linux kernel had been just the core kernel, with
> > > networking/storage in different repos, would it have grown as it has?
>
> Linux kernel is selecting what can enter in the focus or not.

Sorry. This sentence is not very clear to me.

> And I wonder, what is the desire to extend/grow the scope of a library?

If HW/arch-accelerated packet processing is in the scope of DPDK, this
library falls within that scope.

IMO, as long as there is a maintainer who can send pull requests in time
and contribute to the technical decisions of the specific library, that
should be enough to add it to dpdk.git.

IMO, we cannot get away from more contributions to dpdk. Assume some set
of libraries got pulled out of the main dpdk.git for some reason. One
could still make new releases, say "dpdk-next", including dpdk.git and
the various libraries. Is that something we are looking to enable as an
end solution for distros and/or end users?


>
>
> > > What is the real concern? Maintenance?
> > >
> > > > I think the original DPDK repository should focus on low-level features
> > > > which offer hardware offloads and optimizations.
> > >
> > > The nodes can be vendor-specific to optimize specific use cases.
> > > As I mentioned in the cover letter,
> > >
> > > "
> > > 2) Based on our experience, NPU HW accelerators are very different
> > > from one vendor to another. Going forward, we believe API abstraction
> > > may not be enough to abstract the differences in HW. Vendor-specific
> > > nodes can abstract the HW differences and reuse the generic nodes as
> > > needed. This would help both the silicon vendors and DPDK end users.
> > > "
> > >
> > > Thoughts from other folks?
> > >
> > >
> > > > Consuming the low-level API in different abstractions,
> > > > and building applications, should be done on top of dpdk.git.
>
>
>
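Regarding the vendor-specific nodes mentioned in the cover letter quote
above: a hypothetical sketch of how an application could pick vendor or
generic node patterns at graph creation time, in the style of the RFC's
proposed API (all pattern strings and the vendor node name are
assumptions):

#include <stdbool.h>
#include <rte_graph.h>

/* Node name patterns for graph creation. A platform with vendor HW
 * could swap in its optimized lookup node while the rest of the
 * pipeline stays unchanged. */
static const char *generic_patterns[] = {
	"ethdev_rx-*", "ip4_lookup", "ip4_rewrite", "ethdev_tx-*", "pkt_drop",
};
static const char *vendor_patterns[] = {
	"ethdev_rx-*", "vendor_ip4_lookup", "ip4_rewrite", "ethdev_tx-*",
	"pkt_drop",
};

static rte_graph_t
create_worker_graph(bool has_vendor_hw, int socket_id)
{
	struct rte_graph_param prm = {
		.socket_id = socket_id,
		.nb_node_patterns = 5,
		.node_patterns = has_vendor_hw ?
			vendor_patterns : generic_patterns,
	};

	return rte_graph_create("worker_graph", &prm);
}

This is how the generic nodes get reused while only the HW-sensitive node
is vendor-specific.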

