[dpdk-dev] How to fight forwarding performance regression on large mempool sizes.

Venkatesan, Venky venky.venkatesan at intel.com
Thu Sep 19 21:43:22 CEST 2013


Dmitry, 
One other question - which version of DPDK are you running?
-Venky

-----Original Message-----
From: dev [mailto:dev-bounces at dpdk.org] On Behalf Of Robert Sanford
Sent: Thursday, September 19, 2013 12:40 PM
To: Dmitry Vyal
Cc: dev at dpdk.org
Subject: Re: [dpdk-dev] How to fight forwarding performance regression on large mempool sizes.

Hi Dmitry,

The biggest drop-off seems to be from size 128K to 256K. Are you using 1GB huge pages already (rather than 2MB)?

I would think that it would not use over 1GB until you ask for 512K mbufs or more.
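Whether the pool is backed by 1 GB or 2 MB huge pages can be checked from /proc/meminfo; a quick sketch, assuming a Linux host (the reservation count in the boot-line example is illustrative, not a recommendation):

```shell
# Show the huge page size and pool counts currently configured
grep -i huge /proc/meminfo

# To get 1 GB pages, boot the kernel with parameters such as:
#   default_hugepagesz=1G hugepagesz=1G hugepages=4
# (count is illustrative; on kernels of this era, 1 GB pages must be
#  reserved at boot time, since they cannot be assembled later from
#  fragmented memory)
```

If "Hugepagesize" reports 2048 kB, the pool is spread across many 2 MB pages, and a large mempool can start missing in the TLB well before it exceeds 1 GB.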

--
Regards,
Robert



On Thu, Sep 19, 2013 at 3:50 AM, Dmitry Vyal <dmitryvyal at gmail.com> wrote:

> Good day everyone,
>
> While working on IP packet defragmentation I had to enlarge the
> mempool size, to provide a large enough time window for assembling a
> fragment sequence. Unfortunately, I got a performance regression: if I
> enlarge the mempool size from 2**12 to 2**20 mbufs, forwarding
> performance for non-fragmented packets drops from ~8.5 Mpps to ~5.5
> Mpps on a single core. I made only a single measurement, so the data are noisy, but the trend is evident:
> SIZE 4096 - 8.47mpps
> SIZE 8192 - 8.26mpps
> SIZE 16384 - 8.29mpps
> SIZE 32768 - 8.31mpps
> SIZE 65536 - 8.12mpps
> SIZE 131072 - 7.93mpps
> SIZE 262144 - 6.22mpps
> SIZE 524288 - 5.72mpps
> SIZE 1048576 - 5.63mpps
>
> And I need even larger sizes.
>
> I want to ask for advice on how best to tackle this. One way I'm
> thinking about is to make two mempools: one large pool for fragments
> (we may accumulate a big number of them) and one small pool for full
> packets, which we just forward burst by burst. Is it possible to
> configure RSS to distribute packets between queues according to this scheme? Or perhaps there are better ways?
>
> Thanks,
> Dmitry
>
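Dmitry's two-pool idea - a large pool for long-lived fragments, a small pool for the fast path - can be sketched without DPDK specifics as follows. This is a minimal, self-contained illustration; in DPDK itself each pool would be an rte_mempool created with rte_mempool_create(), and the sizes and names here are assumptions, not values from the thread:

```c
#include <stdbool.h>
#include <stdlib.h>

/* Minimal stand-in for a packet-buffer pool (illustration only; in DPDK
 * each of these would be an rte_mempool holding rte_mbufs). */
struct pool {
    void **slots;   /* stack of free buffers */
    unsigned top;   /* number of free buffers */
};

static struct pool *pool_create(unsigned n, size_t bufsz)
{
    struct pool *p = malloc(sizeof(*p));
    p->slots = malloc(n * sizeof(void *));
    p->top = 0;
    for (unsigned i = 0; i < n; i++)
        p->slots[p->top++] = malloc(bufsz);
    return p;
}

static void *pool_get(struct pool *p)
{
    return p->top ? p->slots[--p->top] : NULL;
}

static void pool_put(struct pool *p, void *buf)
{
    p->slots[p->top++] = buf;
}

/* Route allocations: fragments draw from the large pool, because they
 * may sit for a long time awaiting reassembly; complete packets draw
 * from the small pool, which is forwarded burst by burst, so its
 * working set of buffers stays cache-resident. */
static void *alloc_buf(struct pool *frag_pool, struct pool *fast_pool,
                       bool is_fragment)
{
    return is_fragment ? pool_get(frag_pool) : pool_get(fast_pool);
}
```

The point of the split is that the hot forwarding path keeps recycling a small set of buffers that fits in cache, while the (rarer) fragments are the only traffic touching the large pool.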

