[dpdk-dev] eventdev: sw rx adapter enqueue caching

Elo, Matias (Nokia - FI/Espoo) matias.elo at nokia.com
Tue May 7 14:03:28 CEST 2019



On 7 May 2019, at 15:01, Mattias Rönnblom <hofors at lysator.liu.se> wrote:



On 2019-05-07 13:12, Honnappa Nagarahalli wrote:

Hi,

The SW eventdev rx adapter has an internal enqueue buffer
'rx_adapter->event_enqueue_buffer', which stores packets received from the
NIC until at least BATCH_SIZE (=32) packets have been received, and only
then enqueues them to eventdev. This causes a lot of problems e.g. in
validation testing, where often only a small number of specific test
packets is sent to the NIC: one always has to transmit at least BATCH_SIZE
test packets before anything can be received from eventdev. Additionally,
if the rx packet rate is low, this adds a considerable amount of extra
delay.
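
Roughly, the buffering logic looks like the following (a simplified
sketch, not the verbatim implementation; the names only approximate those
in lib/librte_eventdev/rte_event_eth_rx_adapter.c):

#include <rte_eventdev.h>

#define BATCH_SIZE 32

struct event_enqueue_buffer {
        uint16_t count;
        struct rte_event events[BATCH_SIZE];
};

static void
buf_event_enqueue(uint8_t dev_id, uint8_t port_id,
                  struct event_enqueue_buffer *buf,
                  const struct rte_event *ev)
{
        buf->events[buf->count++] = *ev;

        /* Flush only when the buffer is full; if traffic stops short of
         * BATCH_SIZE, whatever is buffered here sits indefinitely. */
        if (buf->count == BATCH_SIZE) {
                rte_event_enqueue_burst(dev_id, port_id,
                                        buf->events, buf->count);
                buf->count = 0; /* retry of partial enqueues omitted */
        }
}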

Looking at the rx adapter API and the SW implementation code, there doesn't
seem to be a way to disable this internal caching. In my opinion this
"functionality" makes testing the SW rx adapter so cumbersome that either
the implementation should be modified to enqueue the cached packets after
some timeout (at some performance cost), or there should be a method to
disable the caching. Any opinions on how this issue could be fixed?

At the minimum, I would think there should be a compile-time option.
From a use-case perspective, I think it falls under latency vs. throughput
considerations. A latency-sensitive application might not want to wait
until 32 packets have been received.
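
One possible shape for such a fix, reusing the buffer struct from the
sketch above (purely hypothetical; maybe_flush_event_buffer,
flush_deadline and FLUSH_CYCLES are illustrative names, not existing DPDK
API), would be to check a deadline from the adapter's service loop:

#include <rte_cycles.h>
#include <rte_eventdev.h>

#define FLUSH_CYCLES (rte_get_tsc_hz() / 10000) /* ~100 us, illustrative */

static void
maybe_flush_event_buffer(uint8_t dev_id, uint8_t port_id,
                         struct event_enqueue_buffer *buf,
                         uint64_t *flush_deadline)
{
        uint64_t now = rte_get_tsc_cycles();

        if (buf->count == 0) {
                *flush_deadline = now + FLUSH_CYCLES;
                return;
        }

        /* Flush on a full batch as before, but also once the buffered
         * events have waited past the deadline. */
        if (buf->count >= BATCH_SIZE || now >= *flush_deadline) {
                rte_event_enqueue_burst(dev_id, port_id,
                                        buf->events, buf->count);
                buf->count = 0; /* retry of partial enqueues omitted */
                *flush_deadline = now + FLUSH_CYCLES;
        }
}

A deadline in the tens-of-microseconds range would bound the worst-case
latency while keeping most of the batching benefit under sustained load.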

From what I understood from Matias Elo, and also after a quick glance at the code, the unlucky packets will be buffered indefinitely if the system goes idle. This is totally unacceptable (both in production and in validation), in my opinion, and should be filed as a bug.


Indeed, this is what happens. I’ll create a bug report to track this issue.



