[dpdk-dev] [PATCH v2 0/3] Increase default RX/TX ring sizes

Adrien Mazarguil adrien.mazarguil at 6wind.com
Tue Jan 30 14:48:28 CET 2018


On Mon, Jan 29, 2018 at 04:25:00PM +0000, Bruce Richardson wrote:
> On Tue, Jan 16, 2018 at 02:13:19PM +0100, Adrien Mazarguil wrote:
> > Hi Kevin,
> > 
> > On Fri, Jan 12, 2018 at 10:48:43AM +0000, Kevin Laatz wrote:
> > > Increasing the default RX/TX ring sizes to 1024/1024 to accommodate
> > > faster NICs. With the increase in packets per second, a larger RX buffer
> > > is required in order to avoid packet loss. While a ring size of 128 may
> > > be large enough for 1G and possibly 10G NICs, it is not going to scale
> > > to small packet sizes at 25G and 40G line rates. As we are increasing
> > > the RX buffer size to 1024, we also need to increase the TX buffer size
> > > to ensure that the TX side does not become the bottleneck.
> > > 
> > > v2
> > >   - fixed typos in commit messages
> > >   - fixed typo in Cc email address
> > 
> > I agree with the above and the contents of this series but would like to
> > comment anyway.
> > 
> > Since typical TX/RX bursts usually fall somewhere between 16 and 64
> > packets depending on the application, increasing ring size instead of
> > burst size to keep up with packet rate may mean that software
> > (PMD/application) is too slow on the RX side or hardware is too slow on
> > the TX side (rings basically always full), and this is worked around by
> > introducing latency to absorb packet loss. That is not necessarily a
> > good trade-off.
> 
> Well, if an RX burst size of 64 is in use, the existing default of 128 is
> definitely very much too low - though point taken about slowness of RX.

Agreed, I just wanted to stress that increasing RX/TX ring sizes may leave
the rings full anyway; thanks to their FIFO nature, the result is increased
latency and resource consumption while packets are still dropped whenever
HW/SW is too slow. This is not the proper workaround for such a scenario
(which is not uncommon).
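
To put rough numbers on the latency cost (a back-of-envelope estimate on my
part, assuming 64-byte frames at 25G line rate, i.e. 84 bytes on the wire
per frame, or roughly 37.2 Mpps):

  128-entry ring, full:   128 / 37.2e6 ~=  3.4 us of queuing delay
  1024-entry ring, full: 1024 / 37.2e6 ~= 27.5 us of queuing delay

An eight-fold ring size increase gives the slow side more slack, but a
consistently full ring merely converts drops into delay.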

> > Granted, the most appropriate burst/ring/threshold values always depend
> > on the application and underlying hardware, and each vendor is
> > responsible for documenting ideal values for typical applications by
> > providing performance results.
> 
> I actually think it probably depends a lot on the NIC hardware in
> question. The optimal size for an app with a 1G NIC will be different
> for a 25G or 100G NIC. I therefore think in a future release we should
> have an ethdev API to allow each driver to propose its recommended ring
> sizes. The app can perhaps provide a burst size hint as a parameter.
> What do you think?

Sounds like a good idea. It could also be implemented without breaking the
API by making 0 descriptors a special value for rte_eth_[rt]x_queue_setup(),
so that being lazy translates to optimized defaults, at the cost of some
uncertainty regarding mbuf pool sizing.

PMDs that do not implement this would reject queue creation, as they likely
already do (see the sketch below).
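
For illustration, a minimal sketch of what the application side could look
like under that convention. The queue setup calls below are the existing
ethdev API; the "0 descriptors means PMD-recommended ring size" behavior is
only the proposal, not something PMDs implement today:

#include <stdlib.h>
#include <rte_ethdev.h>
#include <rte_debug.h>

/* Sketch only: relies on the proposed convention that nb_rx_desc == 0
 * means "use the PMD's recommended ring size for this device". */
static void
setup_rx_with_defaults(uint16_t port_id, uint16_t queue_id,
		       struct rte_mempool *mb_pool)
{
	int ret = rte_eth_rx_queue_setup(port_id, queue_id,
					 0, /* proposed: let the PMD choose */
					 rte_eth_dev_socket_id(port_id),
					 NULL, /* default rte_eth_rxconf */
					 mb_pool);

	/* PMDs unaware of the convention keep rejecting zero-sized rings,
	 * so the failure mode stays explicit rather than silent. */
	if (ret < 0)
		rte_exit(EXIT_FAILURE, "RX queue setup failed: %d\n", ret);
}

Applications that pass an explicit nb_desc keep working unchanged; only the
previously-invalid zero value gains a meaning.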

> > My concern is that modifying defaults makes performance comparison
> > with past DPDK releases more difficult for existing automated tests
> > that do not provide ring size and other parameters. There should be an
> > impact given that larger rings require more buffers, use more cache,
> > and access more memory in general.
> > 
> I'm actually not too concerned about that, as I would expect most
> serious performance comparisons to be done with individually tuned RX
> and TX ring size parameters. For zero-loss throughput tests, larger
> ring sizes are needed for any of the NICs I've tested anyway.
> 
> Overall, I feel this change is long overdue.
> 
> Series Acked-by: Bruce Richardson <bruce.richardson at intel.com>

True, therefore:

Acked-by: Adrien Mazarguil <adrien.mazarguil at 6wind.com>

-- 
Adrien Mazarguil
6WIND

