[dpdk-dev] IXGBE RX packet loss with 5+ cores

Venkatesan, Venky venky.venkatesan at intel.com
Tue Oct 13 17:34:48 CEST 2015



On 10/13/2015 7:47 AM, Sanford, Robert wrote:
>>>> [Robert:]
>>>> 1. The 82599 device supports up to 128 queues. Why do we see trouble
>>>> with as few as 5 queues? What could limit the system (and one port
>>>> controlled by 5+ cores) from receiving at line-rate without loss?
>>>>
>>>> 2. As far as we can tell, the RX path only touches the device
>>>> registers when it updates a Receive Descriptor Tail register (RDT[n]),
>>>> roughly every rx_free_thresh packets. Is there a big difference
>>>> between one core doing this and N cores doing it 1/N as often?
>>> [Stephen:]
>>> As you add cores, there is more traffic on the PCI bus from each core
>>> polling. There is a fixed number of PCI bus transactions per second
>>> possible.
>>> Each core is increasing the number of useless (empty) transactions.
>> [Bruce:]
>> The polling for packets by the core should not be using PCI bandwidth
>> directly, as the ixgbe driver (and other drivers) check for the DD bit
>> being set on the descriptor in memory/cache.
> I was preparing to reply with the same point.
>
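
For reference, the check Bruce describes looks roughly like this (a
minimal sketch - the struct and macro names are simplified stand-ins,
not the actual ixgbe driver definitions):

#include <stdint.h>

/* Write-back descriptor as it lands in host DMA memory; the layout
 * shown here is illustrative. */
struct rx_desc_wb {
	uint32_t status_error;	/* NIC sets the DD bit here on completion */
	/* ... remaining descriptor fields elided ... */
};

#define RX_STAT_DD 0x01		/* Descriptor Done */

static inline int
rx_desc_ready(volatile struct rx_desc_wb *d)
{
	/* A plain load from host memory/cache - no PCI-E read of a
	 * device register happens in the poll loop. */
	return (d->status_error & RX_STAT_DD) != 0;
}
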
>>> [Stephen:] Why do you think adding more cores will help?
> We're using run-to-completion and sometimes spend too many cycles per pkt.
> We realize that we need to move to an io+workers model, but wanted a better
> understanding of the dynamics involved here.
>
>> [Bruce:] However, using an increased number of queues can use PCI
>> bandwidth in other ways. For instance, with more queues you reduce the
>> amount of descriptor coalescing that can be done by the NICs, so that
>> instead of having a single transaction of 4 descriptors to one queue,
>> the NIC may instead have to do 4 transactions, each writing 1
>> descriptor to 4 different queues. This is possibly why sending all
>> traffic to a single queue works ok - the polling on the other queues
>> is still being done, but has little effect.
> Brilliant! This idea did not occur to me.
To add a little more detail - this ends up being both a bandwidth and a
transaction bottleneck. Not only does the transaction count increase,
you also add a large amount of bandwidth overhead (each 16-byte
descriptor is preceded by a PCI-E TLP header of roughly the same size).
So when incoming packets are bifurcated across different queues (1
packet per queue), you get 2x the number of transactions (one for the
packet and one for the descriptor), and the bandwidth used essentially
doubles because you now pay the TLP overhead on every descriptor write.
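
To put rough numbers on that (a sketch - the ~24-byte per-TLP framing
figure is an assumption for illustration, and the exact overhead
depends on the platform and TLP header format):

#include <stdio.h>

int main(void)
{
	const double desc = 16.0;	/* one descriptor, in bytes */
	const double tlp  = 24.0;	/* assumed per-TLP framing overhead */

	/* 4 descriptors coalesced into one write vs. 4 separate writes */
	printf("coalesced: %.0f%% payload\n",
	       100.0 * (4 * desc) / (4 * desc + tlp));
	printf("1-per-TLP: %.0f%% payload\n",
	       100.0 * desc / (desc + tlp));
	return 0;
}

That works out to roughly 73% payload with coalescing versus 40%
without - the descriptor traffic alone costs nearly twice the wire
bandwidth once coalescing breaks down.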

There is a second issue that also pops up when coalescing breaks down -
testpmd in iofwd mode essentially transmits the packets it receives
as-is (i.e. Rx (n) -> Tx (n)). This means that the transmit side also
suffers from writing one descriptor at a time: when the NIC pulls a
descriptor cache line to transmit, it finds only 1 valid descriptor.
When a second descriptor is transmitted on the same queue, the NIC will
again pull the cache line and find only one valid descriptor. That is
another 2x increase in transaction count as well as PCI-E TLP overhead.
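
One way to soften this on the application side is to stop forwarding
packet-at-a-time and let a small batch build up before transmitting. A
minimal sketch, assuming illustrative names and thresholds (a real
implementation would also flush on a timeout to bound latency):

#include <rte_ethdev.h>
#include <rte_mbuf.h>

#define BURST_SIZE	32
#define TX_FLUSH_THRESH	32

struct tx_accum {
	struct rte_mbuf *pkts[2 * BURST_SIZE];
	uint16_t count;
};

static void
poll_and_forward(uint16_t rx_port, uint16_t tx_port, uint16_t q,
		 struct tx_accum *acc)
{
	uint16_t nb_rx, nb_tx, i;

	nb_rx = rte_eth_rx_burst(rx_port, q, &acc->pkts[acc->count],
				 BURST_SIZE);
	acc->count += nb_rx;
	if (acc->count < TX_FLUSH_THRESH)
		return;		/* keep accumulating */

	/* One tx_burst posts the whole batch, so when the NIC pulls a
	 * descriptor cache line it finds several valid descriptors
	 * instead of one. */
	nb_tx = rte_eth_tx_burst(tx_port, q, acc->pkts, acc->count);
	for (i = nb_tx; i < acc->count; i++)
		rte_pktmbuf_free(acc->pkts[i]);	/* ring was full */
	acc->count = 0;
}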

The third hit comes from the transmit side when transmitting one packet
at a time. The last part of the transmit process is an MMIO write to
the tail pointer. This is a costly operation in terms of cycles (since
it is an uncacheable memory operation), not to mention the added PCI-E
overhead (a full TLP for a 4-byte write) and the increased transaction
count on PCI-E.
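
Schematically, that tail update is just an uncacheable store to a
device register, so its cost is fixed per write and only amortizes
across however many descriptors the one write covers (a sketch - the
names are illustrative, not the actual ixgbe register layout):

#include <stdint.h>

static inline void
tx_ring_doorbell(volatile uint32_t *tdt_reg, uint16_t tail)
{
	/* Uncacheable MMIO store -> one posted PCI-E write (TLP header
	 * plus 4-byte payload). Issued once per packet this dominates;
	 * issued once per burst of N packets it is amortized N ways. */
	*tdt_reg = tail;
}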

Hope that explains all the touch-points behind the performance drop-off
you are seeing.
>
> --
> Thanks guys,
> Robert


