[dpdk-dev] [RFC PATCH 0/5] graph: introduce graph subsystem

Thomas Monjalon thomas at monjalon.net
Fri Feb 21 17:14:22 CET 2020


21/02/2020 16:56, Jerin Jacob:
> On Fri, Feb 21, 2020 at 4:40 PM Thomas Monjalon <thomas at monjalon.net> wrote:
> > 21/02/2020 11:30, Jerin Jacob:
> > > On Mon, Feb 17, 2020 at 4:28 PM Jerin Jacob <jerinjacobk at gmail.com> wrote:
> > > > On Mon, Feb 17, 2020 at 2:08 PM Thomas Monjalon <thomas at monjalon.net> wrote:
> > > > > If we add rte_graph to DPDK, we will have 2 similar libraries.
> > > > >
> > > > > I already proposed several times to move rte_pipeline in a separate
> > > > > repository for two reasons:
> > > > >         1/ it is acting at a higher API layer level
> > > >
> > > > We need to define what the higher-layer API is. Is it processing beyond L2?
> >
> > My opinion is that any API which is implemented differently
> > for different hardware should be in DPDK.
> 
> We need to define the treatment of SIMD optimization (not HW-specific,
> but architecture-specific) as well, as the graph and node libraries
> will have SIMD optimizations too.

I think SIMD optimization is generic to any performance-related project,
not specific to DPDK.


> In general, if the above policy were enforced, we would need to split DPDK like below:
> dpdk.git
> ----------
> librte_compressdev
> librte_bbdev
> librte_eventdev
> librte_pci
> librte_rawdev
> librte_eal
> librte_security
> librte_mempool
> librte_mbuf
> librte_cryptodev
> librte_ethdev
> 
> other repo(s).
> ----------------
> librte_cmdline
> librte_cfgfile
> librte_bitratestats
> librte_efd
> librte_latencystats
> librte_kvargs
> librte_jobstats
> librte_gso
> librte_gro
> librte_flow_classify
> librte_pipeline
> librte_net
> librte_metrics
> librte_meter
> librte_member
> librte_table
> librte_stack
> librte_sched
> librte_rib
> librte_reorder
> librte_rcu
> librte_power
> librte_distributor
> librte_bpf
> librte_ip_frag
> librte_hash
> librte_fib
> librte_timer
> librte_telemetry
> librte_port
> librte_pdump
> librte_kni
> librte_acl
> librte_vhost
> librte_ring
> librte_lpm
> librte_ipsec

I think it is a fair conclusion of the scope I am arguing for, yes.


> > Hardware devices can offload protocol processing higher than L2,
> > so L2 does not look to be a good limit from my point of view.
> 
> The node may use HW-specific optimizations if needed.

That's an interesting argument.


> > > > In the context of the graph library: it is a framework, not using any
> > > > subsystem API other than EAL, and it is under lib/librte_graph.
> > > > The nodes library uses the graph library and other subsystem components
> > > > such as ethdev, and it is under lib/librte_node/.
> > > >
> > > >
> > > > Another interesting question: what would be the issue with DPDK
> > > > supporting beyond L2, or higher-level protocols?
> >
> > Definitely, higher than L2 is OK in DPDK as long as it is related to hardware
> > capabilities, not a software stack (which can be a DPDK application).
> 
> The software stack is a vague term. librte_ipsec could be a software stack.

I agree.
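
To make the layering described above concrete: the graph library depends only
on EAL, while nodes plug into it via registration and are assembled into a
per-core graph by the application. A minimal sketch of the application side,
assuming the API as proposed in this RFC (pattern strings and the graph name
are illustrative, and the final API may differ):

    #include <rte_graph.h>
    #include <rte_graph_worker.h>
    #include <rte_memory.h>

    /* Sketch: create a graph from already-registered nodes whose names
     * match the given patterns, then walk it in the dataplane loop. */
    static void
    run_graph_on_this_core(void)
    {
        static const char *patterns[] = {
            "ethdev_rx*", "ip4*", "ethdev_tx*", "pkt_drop"
        };
        struct rte_graph_param prm = {
            .socket_id = SOCKET_ID_ANY,
            .nb_node_patterns = RTE_DIM(patterns),
            .node_patterns = patterns,
        };

        if (rte_graph_create("worker0", &prm) == RTE_GRAPH_ID_INVALID)
            return;

        struct rte_graph *graph = rte_graph_lookup("worker0");
        while (graph != NULL) /* dataplane loop */
            rte_graph_walk(graph);
    }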


> > > > >         2/ there can be different solutions in this layer
> > > >
> > > > Is there any issue with that?
> > > > There is overlap between the distributor library and eventdev as well,
> > > > and between ethdev and the SW traffic manager libraries. The list goes on.
> >
> > I don't know how much it is an issue.
> > But I think it shows that at least one implementation is not generic enough.
> 
> I don't think the distributor library is there because eventdev is not generic.
> In fact, the SW traffic manager is hooked to ethdev as well. It can work as both.
> >
> >
> > > > > I think 1/ was commonly agreed in the community.
> > > > > Now we see one more proof of the reason 2/.
> > > > >
> > > > > I believe it is time to move rte_pipeline (Packet Framework)
> > > > > in a separate repository, and welcome rte_graph as well in another
> > > > > separate repository.
> > > >
> > > > What would be the gain out of this?
> >
> > The gain is to be clear about what should be the focus for contributors
> > working on the main DPDK repository.
> 
> Not sure how it can defocus if there is other code in the repo.
> In that case, the Linux kernel is not focused at all.

I see your point.


> > What is expected to be maintained, tested, etc.
> 
> We would need to maintain and test the code in the OTHER dpdk repo(s) as well.

Yes, but the ones responsible are not the same.


> > > > My concerns are:
> > > > # Like packet-gen, the new code will be filled with unnecessary DPDK
> > > > version checks and unnecessary compatibility issues.
> > > > # Anything that is not in the main dpdk repo is a second-class citizen.
> > > > # Customers have the pain of using two repos and two releases.
> > > > Internally, it can be two different repos, but the release needs to go
> > > > through one repo.
> > > >
> > > > If we focus ONLY on the driver API, then how can DPDK grow further?
> > > > If the Linux kernel had been conceived as just the core kernel, with
> > > > networking/storage in separate repos, would it have grown as it has?
> >
> > The Linux kernel community selects what enters its focus and what does not.
> 
> Sorry. This sentence is not very clear to me.

I mean that not everything proposed to the Linux community gets merged.


> > And I wonder, what is the desire to extend/grow the scope of a library?
> 
> If HW/arch-accelerated packet processing is in the scope of DPDK, this
> library falls within that scope.
> 
> IMO, as long as there is a maintainer who can send pull requests in time
> and contribute to the technical decisions of the specific library, that
> should be enough to add it to dpdk.git.

Yes, that's fair.


> IMO, we cannot get away from more contributions to dpdk. Assume some set of
> libraries were pulled out of the main dpdk.git for some reason. One could
> still make new releases, say "dpdk-next", including dpdk.git and the various
> libraries. Is that something we are looking to enable as an end solution for
> distros and/or end-users?
> 
> 
> > > > What is the real concern? Maintenance?
> > > >
> > > > > I think the original DPDK repository should focus on low-level features
> > > > > which offer hardware offloads and optimizations.
> > > >
> > > > The nodes can be vendor-specific to optimize specific use cases.
> > > > As I mentioned in the cover letter:
> > > >
> > > > "
> > > > 2) Based on our experience, NPU HW accelerates are so different than one vendor
> > > > to another vendor. Going forward, We believe, API abstraction may not be enough
> > > > abstract the difference in HW. The Vendor-specific nodes can abstract the HW
> > > > differences and reuse generic the nodes as needed.
> > > > This would help both the silicon vendors and DPDK end users.
> > > > "
> > > >
> > > > Thoughts from other folks?
> > > >
> > > >
> > > > > Consuming the low-level API in different abstractions,
> > > > > and building applications, should be done on top of dpdk.git.
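
To make the vendor-specific node idea from the cover letter concrete: a vendor
node can keep the same next-node (edge) contract as a generic node, so a graph
can swap one for the other, while the HW- or SIMD-specific fast path stays
inside the process callback. A hedged sketch using the registration API from
this RFC ("vendor_ip4_lookup" is hypothetical; "ip4_rewrite" and "pkt_drop"
follow the generic node names proposed alongside the RFC):

    #include <rte_graph.h>
    #include <rte_graph_worker.h>

    /* Hypothetical vendor-optimized IPv4 lookup node. */
    static uint16_t
    vendor_ip4_lookup_process(struct rte_graph *graph, struct rte_node *node,
                              void **objs, uint16_t nb_objs)
    {
        /* A vendor-/arch-specific lookup would classify objs here; for
         * the sketch, forward the whole stream to edge 0 (ip4_rewrite). */
        rte_node_enqueue(graph, node, 0, objs, nb_objs);
        return nb_objs;
    }

    static struct rte_node_register vendor_ip4_lookup = {
        .name = "vendor_ip4_lookup", /* hypothetical */
        .process = vendor_ip4_lookup_process,
        .nb_edges = 2,
        .next_nodes = { "ip4_rewrite", "pkt_drop" },
    };
    RTE_NODE_REGISTER(vendor_ip4_lookup);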




