[dpdk-dev] TX performance regression caused by the mbuf cacheline split

Luke Gorrie luke at snabb.co
Mon May 11 11:13:03 CEST 2015


Hi Paul,

On 11 May 2015 at 02:14, Paul Emmerich <emmericp at net.in.tum.de> wrote:

> Another possible solution would be a more dynamic approach to mbufs:


Let me suggest a slightly more extreme idea for your consideration. This
method can easily do > 100 Mpps with one very lightly loaded core. I don't
know whether it fits your application, but I'll share it just in case.

Background: Load generators are specialist applications and can benefit
from specialist transmit mechanisms.

You can instruct the NIC to send up to 32K packets with one operation: load
the address of a descriptor list into the TDBA register (Transmit
Descriptor Base Address).
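
For concreteness, here is a rough and untested C sketch for an ixgbe-class
NIC. The register offsets are the queue-0 values as I remember them from
the 82599 datasheet, bar0 and wr32() are placeholders for however you map
and poke the device's BAR0, and queue/transmit enable and the rest of
device bring-up are omitted:

    #include <stdint.h>

    #define TDBAL0 0x6000   /* TX Descriptor Base Address Low,  queue 0 */
    #define TDBAH0 0x6004   /* TX Descriptor Base Address High, queue 0 */
    #define TDLEN0 0x6008   /* ring length in bytes (16 bytes/descriptor) */
    #define TDT0   0x6018   /* TX Descriptor Tail, queue 0 */

    static volatile uint8_t *bar0;  /* the NIC's mapped BAR0 (placeholder) */

    static void wr32(uint32_t reg, uint32_t val)
    {
        *(volatile uint32_t *)(bar0 + reg) = val;
    }

    /* Point the NIC at a physically contiguous descriptor ring and hand
     * it all but one descriptor by advancing the tail. */
    static void load_tx_ring(uint64_t ring_phys, uint32_t n_desc)
    {
        wr32(TDBAL0, (uint32_t)ring_phys);
        wr32(TDBAH0, (uint32_t)(ring_phys >> 32));
        wr32(TDLEN0, n_desc * 16);     /* must be 128-byte aligned */
        wr32(TDT0,   n_desc - 1);
    }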

The descriptor list is a simple series of 64-bit values: addr0, flags0,
addr1, flags1, and so on. It is easy to construct by hand.
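
Filling in those pairs as legacy Intel TX descriptors could look roughly
like this (again untested; the bit layout is from memory of the 82599
datasheet, so verify it, and pkt_phys[] is a hypothetical array holding
the physical addresses of prebuilt packet buffers):

    #include <stdint.h>

    struct tx_desc {
        uint64_t addr;   /* physical address of the packet buffer */
        uint64_t flags;  /* length in bits 15:0, command byte in bits 31:24 */
    };

    #define CMD_EOP  0x01ULL   /* end of packet */
    #define CMD_IFCS 0x02ULL   /* have the NIC insert the Ethernet FCS */

    static void fill_ring(struct tx_desc *ring, const uint64_t *pkt_phys,
                          uint32_t n_desc, uint16_t pkt_len)
    {
        for (uint32_t i = 0; i < n_desc; i++) {
            ring[i].addr  = pkt_phys[i];
            ring[i].flags = (uint64_t)pkt_len |
                            ((CMD_EOP | CMD_IFCS) << 24);
        }
    }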

The NIC can also be made to play the packets in a loop. You just have to
periodically reset the DMA cursor to make all the packets valid again. That
is a simple register poke: TDT = TDH-1.
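
In code that is just something like this (reusing the bar0/wr32()
placeholders and queue-0 offsets from the sketch above; TDH is read-only
and is advanced by the NIC as it transmits):

    #define TDH0 0x6010   /* TX Descriptor Head, queue 0 (read-only) */

    static uint32_t rd32(uint32_t reg)
    {
        return *(volatile uint32_t *)(bar0 + reg);
    }

    /* Move the tail back to just behind the head so that every
     * descriptor in the ring becomes valid for transmission again. */
    static void replay_ring(uint32_t ring_size)
    {
        uint32_t head = rd32(TDH0);
        wr32(TDT0, (head + ring_size - 1) % ring_size);
    }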

We do this routinely when we want to generate a large amount of traffic
with few resources, typically by using spare capacity on a device under
test to generate load. (I have sample code, but it is not based on DPDK.)

If you want all of your packets to be unique, then you have to be a bit
more clever. For example, you could poll the DMA progress: let half the
packets be sent, then rewrite those while the other half is being sent,
and so on. This is rather like the way old video games tracked the
display's scan beam so they could update the parts of the frame buffer
that were not currently being DMA'd.
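
A half-ring ping-pong along those lines might look like this (same
placeholder helpers and offsets as in the sketches above; it assumes the
ring was filled once and TDT was initially set to ring_size/2, and the
sequence-number stamp is only an example of how to make each packet
unique):

    /* While the NIC transmits one half of the ring, rewrite the payloads
     * in the other half, then hand that half back by advancing the tail. */
    static void ping_pong(uint8_t **pkt_virt, uint32_t ring_size)
    {
        uint32_t half = ring_size / 2;
        uint32_t next = half;        /* start of the half to rewrite next */
        uint32_t seq  = 0;

        for (;;) {
            /* Wait until the head pointer is outside the half we want to
             * touch; the tail setting keeps the NIC out of it until we
             * hand it back. */
            uint32_t head;
            do {
                head = rd32(TDH0);
            } while (head >= next && head < next + half);

            /* Rewrite the idle half's payloads (offset 42 is arbitrary). */
            for (uint32_t i = next; i < next + half; i++)
                *(uint32_t *)(pkt_virt[i] + 42) = seq++;

            wr32(TDT0, (next + half) % ring_size);  /* hand the half back */
            next = (next + half) % ring_size;
        }
    }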

This method may, of course, impose other limitations that are not
acceptable for your application. But if not, it can drastically reduce
the number of instructions and the cache footprint required to generate
load. You don't have to touch mbufs or descriptors at all: you just
update the payloads and poke the DMA register every millisecond or so.

Cheers,
-Luke

