[dpdk-dev] [PATCH v14 00/23] Add DLB PMD

McDaniel, Timothy timothy.mcdaniel at intel.com
Sat Oct 31 23:25:01 CET 2020



> -----Original Message-----
> From: David Marchand <david.marchand at redhat.com>
> Sent: Saturday, October 31, 2020 5:16 PM
> To: McDaniel, Timothy <timothy.mcdaniel at intel.com>
> Cc: dev <dev at dpdk.org>; Carrillo, Erik G <erik.g.carrillo at intel.com>; Eads,
> Gage <gage.eads at intel.com>; Van Haaren, Harry
> <harry.van.haaren at intel.com>; Jerin Jacob Kollanukkaran
> <jerinj at marvell.com>; Thomas Monjalon <thomas at monjalon.net>
> Subject: Re: [dpdk-dev] [PATCH v14 00/23] Add DLB PMD
> 
> On Sat, Oct 31, 2020 at 7:16 PM Timothy McDaniel
> <timothy.mcdaniel at intel.com> wrote:
> >
> > The following patch series adds support for a new eventdev PMD. The DLB
> > PMD adds support for the Intel Dynamic Load Balancer (DLB) hardware.
> > The DLB is a PCIe device that provides load-balanced, prioritized
> > scheduling of core-to-core communication. The device consists of
> > queues and arbiters that connect producer and consumer cores, and
> > implements load-balanced queueing features including:
> > - Lock-free multi-producer/multi-consumer operation.
> > - Multiple priority levels for varying traffic types.
> > - 'Direct' traffic (i.e. multi-producer/single-consumer)
> > - Simple unordered load-balanced distribution.
> > - Atomic lock-free load balancing across multiple consumers.
> > - Queue element reordering feature allowing ordered load-balanced
> >   distribution.
> >
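For context, the features listed above are exercised through the standard rte_eventdev
API rather than through anything DLB-specific. The sketch below is illustrative only
(generic API, arbitrary sizing values, not code from this series): it configures one
load-balanced atomic queue and one port and links them, which is the shape of setup
the device then arbitrates between producer and consumer cores.

#include <rte_eventdev.h>

/* Illustrative eventdev setup: one queue, one port, atomic scheduling.
 * Generic rte_eventdev API only; nothing here is DLB-specific. */
static int
setup_eventdev(uint8_t dev_id)
{
	struct rte_event_dev_config dev_conf = {
		.nb_event_queues = 1,
		.nb_event_ports = 1,
		.nb_events_limit = 4096,
		.nb_event_queue_flows = 1024,
		.nb_event_port_dequeue_depth = 32,
		.nb_event_port_enqueue_depth = 32,
	};
	struct rte_event_queue_conf q_conf = {
		.schedule_type = RTE_SCHED_TYPE_ATOMIC, /* atomic load balancing */
		.priority = RTE_EVENT_DEV_PRIORITY_NORMAL,
		.nb_atomic_flows = 1024,
		.nb_atomic_order_sequences = 1024,
	};
	struct rte_event_port_conf p_conf = {
		.new_event_threshold = 4096,
		.dequeue_depth = 32,
		.enqueue_depth = 32,
	};
	uint8_t queue_id = 0;
	uint8_t prio = RTE_EVENT_DEV_PRIORITY_NORMAL;

	if (rte_event_dev_configure(dev_id, &dev_conf) < 0)
		return -1;
	if (rte_event_queue_setup(dev_id, 0, &q_conf) < 0)
		return -1;
	if (rte_event_port_setup(dev_id, 0, &p_conf) < 0)
		return -1;
	/* Link port 0 to queue 0 at normal priority. */
	if (rte_event_port_link(dev_id, 0, &queue_id, &prio, 1) != 1)
		return -1;
	return rte_event_dev_start(dev_id);
}

Producers would then submit struct rte_event entries with rte_event_enqueue_burst()
and consumers pull them with rte_event_dequeue_burst(); the device handles the
load balancing and prioritization between the linked ports.
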
> > The DLB hardware supports both load balanced and directed ports and
> > queues. Unlike other eventdev devices already in the repo, not all
> > DLB ports and queues are equally capable. In particular, directed
> > ports are limited to a single link, and must be connected to a directed
> > queue.
> > Additionally, even though LDB ports may link multiple queues, the
> > number of queues that may be linked is limited by hardware. Another
> > difference is that DLB does not have a straightforward way of carrying
> > the flow_id in the queue elements (QE) that the hardware operates on.
> >
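To make the single-link restriction above concrete, here is an illustrative sketch
(again generic rte_eventdev API, not the DLB-specific port/queue handling from this
series; the function name and sizing values are invented for this example). A
directed queue is requested with RTE_EVENT_QUEUE_CFG_SINGLE_LINK and is linked to
exactly one port:

#include <rte_eventdev.h>

/* Illustrative "directed" (single-link) queue/port pair. On hardware such
 * as DLB, a port linked to a single-link queue may not be linked to any
 * other queue; load-balanced (LDB) ports may link several queues, up to a
 * hardware-specific limit. */
static int
setup_directed_pair(uint8_t dev_id, uint8_t queue_id, uint8_t port_id)
{
	struct rte_event_queue_conf q_conf = {
		.event_queue_cfg = RTE_EVENT_QUEUE_CFG_SINGLE_LINK,
		.priority = RTE_EVENT_DEV_PRIORITY_NORMAL,
	};
	struct rte_event_port_conf p_conf = {
		.new_event_threshold = 4096,
		.dequeue_depth = 32,
		.enqueue_depth = 32,
	};

	if (rte_event_queue_setup(dev_id, queue_id, &q_conf) < 0)
		return -1;
	if (rte_event_port_setup(dev_id, port_id, &p_conf) < 0)
		return -1;
	/* Exactly one link is allowed for a directed port; NULL priorities
	 * means normal priority for the link. */
	return rte_event_port_link(dev_id, port_id, &queue_id, NULL, 1) == 1 ? 0 : -1;
}

A load-balanced port, by contrast, may call rte_event_port_link() with several
queue ids, up to whatever per-port limit the PMD advertises through
rte_event_dev_info_get().
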
> > While reviewing the code, please be aware that this PMD has full
> > control over the DLB hardware. Intel will be extending the DLB PMD
> > in the future (not as part of this first series) with a mode that we
> > refer to as the bifurcated PMD. The bifurcated PMD communicates with a
> > kernel driver to configure the device, ports, and queues, and memory
> > maps device MMIO so datapath operations occur purely in user-space.
> >
> > The framework to support both the PF PMD and bifurcated PMD exists in
> > this patchset, and is why the iface.[ch] layer is present.
> >
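As a purely hypothetical illustration of what such a split can look like (the names
below are invented for this explanation and are not the actual contents of the
series' iface.[ch]), the shared code can call hardware-setup operations through a
table of function pointers that each back end fills in:

/* Hypothetical sketch only -- not the real DLB iface layer. The idea is that
 * the shared PMD code calls setup operations through this table, so the PF
 * back end (full user-space control of the device) and a future bifurcated
 * back end (kernel driver for setup, mmap'ed MMIO for the datapath) can plug
 * in different implementations without touching the shared code. */
struct dlb_iface_ops_example {
	int (*domain_create)(void *hw, int num_ldb_ports, int num_dir_ports);
	int (*ldb_port_create)(void *hw, int domain_id, int port_id);
	int (*dir_port_create)(void *hw, int domain_id, int port_id);
	int (*queue_link)(void *hw, int port_id, int queue_id, int priority);
};

The PF back end would implement these by programming the device directly from user
space, while the bifurcated back end would forward them to the kernel driver and
keep only the memory-mapped MMIO fast path in user space.
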
> > Major changes in V14
> > ====================
> > - Fixed format errors in doc/api/doxy-api-index.md
> > - Delayed introduction of dlb2_consume_qe_immediate until
> >   add-dequeue-and-its-burst-variants.patch
> > - Delayed introduction of dlb2_construct_token_pop_qe until
> >   add-PMD-s-token-pop-public-interface.patch
> > - Delayed introduction of dlb_enqueue_*_delayed until
> >   add-dequeue-and-its-burst-variants.patch
> 
> I just sent a bunch of comments.
> I still see a build error with clang for unused stuff.
> 
> There is no point in sending a new series unless the clang build is
> fixed once and for all.
> 
> 
> I compared dlb and dlb2 code.
> I presume fixing bugs in the future will amount to double patches every time.
> 
> 
> But, on a positive note, France won against Ireland.
> 
> 
> --
> David Marchand

Where do I find the clang output? I followed the links in the 0-day email, and from patchwork I clicked on the patches with the red failure indicator,
but none of those led me to any clang error output. My build server does not have clang, which is a problem
for me. I tried gcc with -Wunused and it did not catch anything. Is there a way for me to submit a clang job to a DPDK build server?

Thanks,
Tim
