[dpdk-dev] eventdev: sw rx adapter enqueue caching
Elo, Matias (Nokia - FI/Espoo)
matias.elo at nokia.com
Thu May 9 17:02:20 CEST 2019
Thanks, I’ve tested this patch and can confirm that it fixes the problem.
I didn’t do a performance comparison, but at least at a high packet rate rte_eth_rx_burst() should already return close to BATCH_SIZE packets, so the performance hit shouldn’t be that big.
-Matias
On 9 May 2019, at 14:24, Rao, Nikhil <nikhil.rao at intel.com> wrote:
-----Original Message-----
From: dev [mailto:dev-bounces at dpdk.org] On Behalf Of Elo, Matias (Nokia - FI/Espoo)
Sent: Tuesday, May 7, 2019 5:33 PM
To: Mattias Rönnblom <hofors at lysator.liu.se>
Cc: Honnappa Nagarahalli <Honnappa.Nagarahalli at arm.com>; dev at dpdk.org; nd <nd at arm.com>
Subject: Re: [dpdk-dev] eventdev: sw rx adapter enqueue caching
On 7 May 2019, at 15:01, Mattias Rönnblom <hofors at lysator.liu.se> wrote:
On 2019-05-07 13:12, Honnappa Nagarahalli wrote:
Hi,
The SW eventdev rx adapter has an internal enqueue buffer 'rx_adapter->event_enqueue_buffer', which stores packets received from the NIC until at least BATCH_SIZE (=32) packets have been received, before enqueueing them to eventdev. For example in validation testing, where often only a small number of specific test packets is sent to the NIC, this causes a lot of problems: one always has to transmit at least BATCH_SIZE test packets before anything can be received from eventdev. Additionally, if the rx packet rate is slow, this buffering adds a considerable amount of extra delay.
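To illustrate, here is a minimal sketch of the batching behaviour described above. It is not the actual rte_event_eth_rx_adapter code; apart from BATCH_SIZE and the rte_* calls, the struct, field and function names are made up for illustration.

#include <stdint.h>
#include <rte_ethdev.h>
#include <rte_eventdev.h>

#define BATCH_SIZE 32

struct rx_buffer {
        uint16_t count;
        struct rte_event events[2 * BATCH_SIZE];
};

static void
rx_poll_and_buffer(uint16_t eth_port, uint16_t rx_queue, uint8_t dev_id,
                   uint8_t ev_port, struct rx_buffer *buf)
{
        struct rte_mbuf *mbufs[BATCH_SIZE];
        uint16_t n = rte_eth_rx_burst(eth_port, rx_queue, mbufs, BATCH_SIZE);
        uint16_t i;

        /* Append received packets to the internal buffer. */
        for (i = 0; i < n; i++) {
                struct rte_event *ev = &buf->events[buf->count++];
                *ev = (struct rte_event){0};
                ev->mbuf = mbufs[i];
                /* queue_id, sched_type, etc. omitted */
        }

        /*
         * Nothing is pushed to the event device until at least BATCH_SIZE
         * events have accumulated, so with a low or bursty packet rate the
         * tail of the buffer can sit here for a long time.
         */
        if (buf->count >= BATCH_SIZE) {
                rte_event_enqueue_burst(dev_id, ev_port, buf->events,
                                        buf->count);
                buf->count = 0; /* retry handling omitted */
        }
}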
Looking at the rx adapter API and the sw implementation code, there doesn't seem to be a way to disable this internal caching. In my opinion this "functionality" makes testing the sw rx adapter so cumbersome that either the implementation should be modified to enqueue the cached packets after a while (at some performance penalty), or there should be some method to disable the caching. Any opinions on how this issue could be fixed? A rough sketch of the first option follows below.
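For comparison, here is a rough sketch of the "enqueue the cached packets after a while" idea, reusing the rx_buffer type from the sketch above. The FLUSH_THRESHOLD idle-poll counter and the function name are hypothetical; a real implementation could equally use a timestamp-based timeout.

#define FLUSH_THRESHOLD 16 /* empty polls before a forced flush (illustrative) */

static void
rx_poll_and_buffer_with_flush(uint16_t eth_port, uint16_t rx_queue,
                              uint8_t dev_id, uint8_t ev_port,
                              struct rx_buffer *buf, unsigned int *idle_polls)
{
        struct rte_mbuf *mbufs[BATCH_SIZE];
        uint16_t n = rte_eth_rx_burst(eth_port, rx_queue, mbufs, BATCH_SIZE);
        uint16_t i;

        if (n == 0)
                (*idle_polls)++;
        else
                *idle_polls = 0;

        for (i = 0; i < n; i++) {
                struct rte_event *ev = &buf->events[buf->count++];
                *ev = (struct rte_event){0};
                ev->mbuf = mbufs[i];
        }

        /*
         * Flush on a full batch as before, but also when packets are
         * pending and the receive queue has been quiet for a while, so
         * buffered packets cannot get stuck indefinitely.
         */
        if (buf->count >= BATCH_SIZE ||
            (buf->count > 0 && *idle_polls >= FLUSH_THRESHOLD)) {
                rte_event_enqueue_burst(dev_id, ev_port, buf->events,
                                        buf->count);
                buf->count = 0; /* retry handling omitted */
                *idle_polls = 0;
        }
}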
At a minimum, I would think there should be a compile-time option.
From a use case perspective, I think it falls under latency vs. throughput considerations. A latency-sensitive application might not want to wait until 32 packets have been received.
From what I understood from Matias Elo, and also after a quick glance at the code, the unlucky packets will be buffered indefinitely if the system goes idle. This is totally unacceptable (both in production and in validation), in my opinion, and should be filed as a bug.
Indeed, this is what happens. I’ll create a bug report to track this issue.
I have posted a patch for this issue:
http://patchwork.dpdk.org/patch/53350/
Please let me know your comments.
Thanks,
Nikhil