[dpdk-dev] [PATCH v6 3/3] app/test: add allocator performance autotest

Aaron Conole aconole at redhat.com
Fri Oct 15 15:47:56 CEST 2021


Dmitry Kozlyuk <dkozlyuk at nvidia.com> writes:

>> This isn't really a test, imho.  There are no assert()s.  How does a developer who
>> tries to fix a bug in this area know what is acceptable?
>> 
>> Please switch the printf()s to RTE_LOG calls, and add some RTE_TEST_ASSERT
>> calls to enforce some time range at the least.
>> Otherwise this test will not really be checking the performance - just giving a
>> report somewhere.
>
> I just followed the DPDK naming convention of test_xxx_perf.c / xxx_perf_autotest.
> They should all really be called benchmarks.

Agreed - they are not really tests, and it makes me wonder why we label
them as such; it will be confusing.  A developer who runs the perf test
suite will just see "OK" everywhere and assume that all the tests are
working - even if they introduce a performance regression.

Maybe it would make sense to relabel them (perf-benchmark or something),
so that there isn't an expectation of PASS / FAIL.  That's larger in
scope than this patch, though.

> They help developers see how code changes affect performance.
> I don't understand how this "perf test" is out of line with the existing ones,
> or where it should properly reside.
>
> I'm not totally opposed to replacing printf() with RTE_LOG(), but all the other tests use printf().
> The drawback of the change is inconsistency; what is the benefit?

RTE_LOG output is captured in other places as well; printf(), depending
on how the test app is run, might not go anywhere.  Also, at least the
ipsec perf test has started introducing RTE_LOG() calls - although even
there, printf() is used for the reports.
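
For illustration, a minimal sketch (untested; report_alloc_time is a
made-up helper here, and a real test would likely register its own
logtype rather than reuse USER1):

#include <stddef.h>
#include <rte_log.h>

/* Route the report through the EAL log stream instead of bare stdout,
 * so it is captured wherever the application's log output goes. */
static void
report_alloc_time(size_t size, double us_per_op)
{
	RTE_LOG(INFO, USER1, "alloc+memset %zu bytes: %.2f us/op\n",
		size, us_per_op);
}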

I guess it's very confusing to call all of these 'tests', since they
aren't.

But that's an aside, and I guess this is consistent with existing
_perf.c files.

>> Also, I don't understand the way the memset test works here.  You do one large
>> memset at the very beginning and then extrapolate the time it would take.  Does
>> that hold any value or should we do a memset in each iteration and enforce a
>> scaled time?
>
> As explained above, we don't need to enforce anything; we want a report.
> I've never seen a case with one NUMA node where memset() time would not scale linearly,
> but benchmarks should be precise, so I'll change it to memset() the allocated area, thanks.
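
For what it's worth, here's a rough sketch of the per-iteration
approach (untested; bench_alloc_memset and the max_us_per_op bound are
made up, and the threshold would need tuning per platform):

#include <string.h>
#include <rte_cycles.h>
#include <rte_log.h>
#include <rte_malloc.h>
#include "test.h"

static int
bench_alloc_memset(size_t size, unsigned int iters, double max_us_per_op)
{
	uint64_t cycles = 0;
	double us;
	unsigned int i;

	for (i = 0; i < iters; i++) {
		void *p = rte_malloc(NULL, size, 0);
		uint64_t start;

		TEST_ASSERT_NOT_NULL(p, "cannot allocate %zu bytes", size);
		start = rte_rdtsc();
		memset(p, 0, size);	/* touch the whole allocated area */
		cycles += rte_rdtsc() - start;
		rte_free(p);
	}

	us = (double)cycles * 1e6 / rte_get_tsc_hz() / iters;
	RTE_LOG(INFO, USER1, "memset %zu bytes: %.2f us/op\n", size, us);
	/* loose sanity bound, so a real regression fails the run */
	TEST_ASSERT(us <= max_us_per_op, "%.2f us/op exceeds %.2f us/op",
		    us, max_us_per_op);
	return TEST_SUCCESS;
}

The bound can be generous - the point is just that a large regression
turns the run red instead of staying buried in a report.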


