[dpdk-users] Test Pipeline application - lockup

Vijay S vsnv83 at gmail.com
Wed Dec 30 05:58:21 CET 2015

Hi Jasvinder,

On Tue, Dec 29, 2015 at 3:42 AM, Singh, Jasvinder <jasvinder.singh at intel.com> wrote:

> Hi Vijay,
> >
> > I am running the test pipeline application using the extendible bucket
> > hash table with a 16-byte key size. I have made changes to the test
> > pipeline application such that for EVERY port instance there are 3 cores:
> > an Rx core, a worker core and a Tx core, instead of ALL ports mapping to
> > a single Rx/Tx core as in the original implementation. This is being done
> > to try and increase the overall throughput of the application.
> >
> > In the app_main_loop_worker_pipeline_hash function, I am instantiating a
> > new pipeline instance for each port and stitching the rings_rx and
> > rings_tx accordingly. With these changes I am able to get a throughput of
> > ~13 Mpps on one 10G port. But when I send high-rate traffic (13 Mpps each)
> > from the ports simultaneously, the traffic stops completely. Stopping and
> > restarting the traffic doesn't seem to help. It almost looks like the
> > pipeline is locked up and not seeing any packets in the Rx rings.
> > Has anyone encountered this before? Any idea how to debug this / what
> > could be going wrong? Appreciate your help!
> In the pipeline core, have you created a separate table for each pipeline
> instance? Each pipeline instance should have its input port, table and
> output port configured and linked properly to create a path for the
> packets.
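For reference, a rough sketch of what Jasvinder describes — one pipeline instance per port, each with its own input ring port, extendible bucket hash table and output ring port — using the DPDK 2.2-era Packet Framework API. Ring wiring, table sizing, offsets and the hash callback are illustrative placeholders, not the thread author's actual code:

```c
/* Hedged sketch: per-port pipeline with its OWN table, so instances
 * do not share state. DPDK 2.2-era API; values are placeholders. */
#include <stdio.h>
#include <rte_lcore.h>
#include <rte_pipeline.h>
#include <rte_port_ring.h>
#include <rte_table_hash.h>

/* Placeholder hash callback; a real app would use e.g. a CRC hash */
static uint64_t
placeholder_hash(void *key, uint32_t key_size, uint64_t seed)
{
	(void)key; (void)key_size;
	return seed;
}

static struct rte_pipeline *
setup_pipeline_for_port(uint32_t i, struct rte_ring *ring_rx,
			struct rte_ring *ring_tx)
{
	char name[64];
	uint32_t port_in_id, port_out_id, table_id;

	snprintf(name, sizeof(name), "PIPELINE_%u", i);
	struct rte_pipeline_params p_params = {
		.name = name,
		.socket_id = (int)rte_socket_id(),
	};
	struct rte_pipeline *p = rte_pipeline_create(&p_params);

	/* Input port reads from this port's own Rx ring */
	struct rte_port_ring_reader_params rx_params = { .ring = ring_rx };
	struct rte_pipeline_port_in_params pin = {
		.ops = &rte_port_ring_reader_ops,
		.arg_create = &rx_params,
		.burst_size = 64,
	};
	rte_pipeline_port_in_create(p, &pin, &port_in_id);

	/* A separate extendible bucket hash table per pipeline instance
	 * (16-byte keys, as in the thread); sizes/offsets are examples */
	struct rte_table_hash_ext_params t_params = {
		.key_size = 16,
		.n_keys = 1 << 16,
		.n_buckets = 1 << 14,
		.n_buckets_ext = 1 << 14,
		.f_hash = placeholder_hash,
		.seed = 0,
		.signature_offset = 0,
		.key_offset = 32,
	};
	struct rte_pipeline_table_params tbl = {
		.ops = &rte_table_hash_ext_ops,
		.arg_create = &t_params,
	};
	rte_pipeline_table_create(p, &tbl, &table_id);

	/* Output port writes to this port's own Tx ring */
	struct rte_port_ring_writer_params tx_params = {
		.ring = ring_tx,
		.tx_burst_sz = 64,
	};
	struct rte_pipeline_port_out_params pout = {
		.ops = &rte_port_ring_writer_ops,
		.arg_create = &tx_params,
	};
	rte_pipeline_port_out_create(p, &pout, &port_out_id);

	/* Link input port -> table, and send table misses to the output
	 * port, completing the packet path Jasvinder describes */
	rte_pipeline_port_in_connect_to_table(p, port_in_id, table_id);

	struct rte_pipeline_table_entry default_entry = {
		.action = RTE_PIPELINE_ACTION_PORT,
		{ .port_id = port_out_id },
	};
	struct rte_pipeline_table_entry *default_entry_ptr;
	rte_pipeline_table_default_entry_add(p, table_id, &default_entry,
					     &default_entry_ptr);

	rte_pipeline_port_in_enable(p, port_in_id);
	return (rte_pipeline_check(p) == 0) ? p : NULL;
}
```

This is an EAL-dependent configuration sketch, so it is not runnable standalone; the key point is that every instance gets its own table, input port and output port linked end to end.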

Moving to the multi-consumer ring APIs and allocating an rx mbuf pool per
port fixed the lockup issue. I am now able to receive packets on all the
ports without any lockup. Thanks for your help.
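For anyone hitting the same symptom, a rough sketch of the fix described above, assuming the DPDK 2.2-era ring/mempool API (later releases add a fourth `available` parameter to the dequeue-burst calls); ring/pool names and sizes are placeholders:

```c
/* Hedged sketch of the fix: create each Rx ring as multi-consumer
 * (omit RING_F_SC_DEQ) and give every port its own mbuf pool, so
 * concurrent dequeues no longer corrupt single-consumer ring state.
 * DPDK 2.2-era API; names and sizes are placeholders. */
#include <stdio.h>
#include <rte_lcore.h>
#include <rte_ring.h>
#include <rte_mempool.h>
#include <rte_mbuf.h>

#define BURST_SIZE 64

static struct rte_ring *
create_rx_ring(uint32_t port)
{
	char name[32];
	snprintf(name, sizeof(name), "RX_RING_%u", port);
	/* flags = 0 -> multi-producer enqueue AND multi-consumer dequeue.
	 * Passing RING_F_SC_DEQ here is only safe when exactly one core
	 * ever dequeues from the ring. */
	return rte_ring_create(name, 4096, rte_socket_id(), 0);
}

static struct rte_mempool *
create_rx_pool(uint32_t port)
{
	char name[32];
	snprintf(name, sizeof(name), "MBUF_POOL_%u", port);
	/* One mbuf pool per port instead of a single shared pool */
	return rte_pktmbuf_pool_create(name, 8192, 256, 0,
				       RTE_MBUF_DEFAULT_BUF_SIZE,
				       rte_socket_id());
}

/* Worker pull loop: use the multi-consumer dequeue variant explicitly */
static inline unsigned
worker_pull(struct rte_ring *ring_rx, struct rte_mbuf **mbufs)
{
	return rte_ring_mc_dequeue_burst(ring_rx, (void **)mbufs,
					 BURST_SIZE);
}
```

Again an EAL-dependent fragment rather than a runnable program; the design point is that the single-consumer fast path has no protection against a second dequeuing core, which matches the "ring looks empty / pipeline locked up" symptom in the thread.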

