[dpdk-dev] [PATCH v3] guides: add a guide for developing unit tests

Ferruh Yigit ferruh.yigit at intel.com
Mon May 31 17:17:57 CEST 2021


On 3/9/2021 3:57 PM, Aaron Conole wrote:
> The DPDK testing infrastructure includes a comprehensive set of
> libraries, utilities, and CI integrations for developers to test
> their code changes.  This isn't well documented, however.
> 
> Document the basics for adding a test suite to the infrastructure
> and enabling that test suite for continuous integration platforms
> so that newer developers can understand how to develop test suites
> and test cases.

+1 to adding this long missing documentation, thanks.

> 
> Reviewed-by: David Marchand <david.marchand at redhat.com>
> Signed-off-by: Aaron Conole <aconole at redhat.com>
> ---
> v0->v1: Added information for TEST_SKIPPED and details about generating
>         code coverage to help with ideas for writing unit test cases.
> v1->v2: Corrected some spelling, rephrased a bit after suggestions by
>         Ray.
> v2->v3: Rewrite the meson build block, updated the copyright section,
>         and change the title to be a bit nicer
> 
>  doc/guides/contributing/index.rst   |   1 +
>  doc/guides/contributing/testing.rst | 243 ++++++++++++++++++++++++++++
>  2 files changed, 244 insertions(+)
>  create mode 100644 doc/guides/contributing/testing.rst
> 
> diff --git a/doc/guides/contributing/index.rst b/doc/guides/contributing/index.rst
> index 2fefd91931..41909d949b 100644
> --- a/doc/guides/contributing/index.rst
> +++ b/doc/guides/contributing/index.rst
> @@ -14,6 +14,7 @@ Contributor's Guidelines
>      abi_versioning
>      documentation
>      patches
> +    testing
>      vulnerability
>      stable
>      cheatsheet
> diff --git a/doc/guides/contributing/testing.rst b/doc/guides/contributing/testing.rst
> new file mode 100644
> index 0000000000..0757d71ad0
> --- /dev/null
> +++ b/doc/guides/contributing/testing.rst
> @@ -0,0 +1,243 @@
> +..  SPDX-License-Identifier: BSD-3-Clause
> +    Copyright 2021 The DPDK contributors
> +
> +DPDK Testing Guidelines
> +=======================
> +
> +This document outlines the guidelines for running and adding new
> +tests to the in-tree DPDK test suites.
> +

I think both the section name (DPDK Testing Guidelines) and the file name
(testing.rst) are too broad compared to what is described here.

What about using "DPDK Unit Test Guidelines", and 'unit_test.rst'?

> +The DPDK test suite model is loosely based on the xunit model, where
> +tests are grouped into test suites, and suites are run by runners.
> +For a basic overview, see the basic Wikipedia article on xunit:
> +`xUnit - Wikipedia <https://en.wikipedia.org/wiki/XUnit>`_.
> +
> +
> +Running a test
> +--------------
> +
> +DPDK tests are run via the main test runner, the `dpdk-test` app.
> +The `dpdk-test` app is a command-line interface that facilitates
> +running various tests or test suites.
> +
> +There are two modes of operation.  The first mode is as an interactive
> +command shell that allows launching specific test suites.  This is
> +the default operating mode of `dpdk-test` and can be done by::
> +
> +  $ ./build/app/test/dpdk-test --dpdk-options-here
> +  EAL: Detected 4 lcore(s)
> +  EAL: Detected 1 NUMA nodes
> +  EAL: Static memory layout is selected, amount of reserved memory can be adjusted with -m or --socket-mem
> +  EAL: Multi-process socket /run/user/26934/dpdk/rte/mp_socket
> +  EAL: Selected IOVA mode 'VA'
> +  EAL: Probing VFIO support...
> +  EAL: PCI device 0000:00:1f.6 on NUMA socket -1
> +  EAL:   Invalid NUMA socket, default to 0
> +  EAL:   probe driver: 8086:15d7 net_e1000_em
> +  APP: HPET is not enabled, using TSC as default timer
> +  RTE>>
> +
> +At the prompt, simply type the name of the test suite you wish to run
> +and it will execute.
> +
> +The second form is useful for a scripting environment, and is used by
> +the DPDK meson build system.  This mode is invoked by assigning a
> +specific test suite name to the environment variable `DPDK_TEST`
> +before invoking the `dpdk-test` command, such as::
> +
> +  $ DPDK_TEST=version_autotest ./build/app/test/dpdk-test --dpdk-options-here
> +  EAL: Detected 4 lcore(s)
> +  EAL: Detected 1 NUMA nodes
> +  EAL: Static memory layout is selected, amount of reserved memory can be adjusted with -m or --socket-mem
> +  EAL: Multi-process socket /run/user/26934/dpdk/rte/mp_socket
> +  EAL: Selected IOVA mode 'VA'
> +  EAL: Probing VFIO support...
> +  EAL: PCI device 0000:00:1f.6 on NUMA socket -1
> +  EAL:   Invalid NUMA socket, default to 0
> +  EAL:   probe driver: 8086:15d7 net_e1000_em
> +  APP: HPET is not enabled, using TSC as default timer
> +  RTE>>version_autotest
> +  Version string: 'DPDK 20.02.0-rc0'
> +  Test OK
> +  RTE>>$
> +

According to the code, it is also possible to run a unit test by providing its
name as an argument to 'dpdk-test', like below. The benefit of this is that it
is possible to provide multiple tests at once and run them all, which can't be
done via the 'DPDK_TEST' env var. Should we document this too?

./build/app/test/dpdk-test version_autotest
EAL: Detected 96 lcore(s)
EAL: Detected 2 NUMA nodes
EAL: Detected static linkage of DPDK
EAL: Multi-process socket /var/run/dpdk/rte/mp_socket
EAL: Selected IOVA mode 'VA'
EAL: No free 2048 kB hugepages reported on node 0
EAL: No free 2048 kB hugepages reported on node 1
EAL: No available 2048 kB hugepages reported
APP: HPET is not enabled, using TSC as default timer
RTE>>version_autotest
Version string: 'DPDK 21.05.0'
Test OK


Another env variable is "DPDK_TEST_PARAMS", to provide more parameters. Since
meson has args support for the test executable, we are not using it in meson,
but it may be good to document it for individual executable usage.


Also there is an 'RTE_TEST_RECURSIVE' env var in use; I am not clear what it is
for, does anyone know the history on it?


> +The above shows running a specific test case.  On success, the return
> +code will be '0', otherwise it will be set to some error value (such
> +as '255').
> +

It can also be a negative value.

> +
> +Running all tests
> +-----------------
> +

What about "Running multiple test cases", instead of "all tests"?

Btw, it is possible to run a single unit test with meson, perhaps this can be
documented in the previous section:
"meson test -C build version_autotest"

Also another useful command is the one to list all test cases, which may be
worth documenting too:
"meson test -C build --list"


We have a legacy runner as well, 'autotest.py'; most probably it is outdated
now. If we don't document it, we may consider removing it completely
(./app/test/autotest*.py):
"python ./app/test/autotest.py ./build/app/test/dpdk-test build version"

> +In order to allow developers to quickly execute all the standard
> +internal tests without needing to remember or look up each test suite
> +name, the build system includes a standard way of executing the
> +default test suites.  After building via `ninja`, the ``meson test``
> +command will execute the standard tests and report errors.
> +
> +There are four groups of default test suites.  The first group is
> +the **fast** test suite, which is the largest group of test cases.
> +These are the bulk of the unit tests to validate functional blocks.
> +The second group is the **perf** tests.  These test suites can take
> +longer to run and do performance evaluations.  The third group is
> +the **driver** test suite, which is mostly for special hardware
> +related testing (such as `cryptodev`).  The last group are the
> +**debug** tests.  These mostly are used to dump system information.
> +
> +The suites can be selected by adding the ``--suite`` option to the
> +``meson test`` command.  Ex: ``meson test --suite fast-tests``::
> +
> +  $ meson test -C build --suite fast-tests
> +  ninja: Entering directory `/home/aconole/git/dpdk/build'
> +  [2543/2543] Linking target app/test/dpdk-test.
> +  1/60 DPDK:fast-tests / acl_autotest          OK       3.17 s
> +  2/60 DPDK:fast-tests / bitops_autotest       OK       0.22 s
> +  3/60 DPDK:fast-tests / byteorder_autotest    OK       0.22 s
> +  4/60 DPDK:fast-tests / cmdline_autotest      OK       0.28 s
> +  5/60 DPDK:fast-tests / common_autotest       OK       0.57 s
> +  6/60 DPDK:fast-tests / cpuflags_autotest     OK       0.27 s
> +  ...
> +
> +
> +Adding test suites
> +------------------
> +

What about renaming 'test suites' to 'test cases', since a 'test suite' is just
a group of test cases and we can add a single test case too.

> +To add a test suite to the DPDK test application, create a new test
> +file for that suite (ex: see *app/test/test_version.c* for the
> +``version_autotest`` test suite).  There are two important functions
> +for interacting with the test harness:
> +
> +  1. REGISTER_TEST_COMMAND(command_name, function_to_execute)
> +     Registers a test command with the name `command_name` and which
> +     runs the function `function_to_execute` when `command_name` is
> +     invoked.
> +
> +  2. unit_test_suite_runner(struct unit_test_suite \*)
> +     Returns a runner for a full test suite object, which contains
> +     a test suite name, setup, tear down, and vector of unit test
> +     cases.
> +

There are two different usages of "test suite", which is confusing I think:

1) First usage within dpdk-test app

a) A test command can run a single test case, like
'version_autotest' -> 'test_version'

b) Or it can be a group of test cases, like:
'link_bonding_autotest' -> 'test_create_bonded_device',
			'test_create_bonded_device_with_invalid_params',
			'test_add_slave_to_bonded_device',
			'...'

The second one is called a unit_test_suite within the scope of the 'dpdk-test'
application, but at the end of the day it is a single command for the test app.

The first one has simple execution, but the second one has setup/teardown as
you already described below.


2) Second usage, with meson

There is a next level of grouping in meson, as described in a previous
paragraph, like the "fast-tests" suite, "perf-tests" suite, etc.; this is a
grouping of various test commands.


I wonder if these two different "test suite" usages can be clarified more. Can
we say "meson test suite" for the meson one?


> +Each test suite has a setup and tear down function that runs at the
> +beginning and end of the test suite execution.  Each unit test has
> +a similar function for test case setup and tear down.
> +
> +Test cases are added to the `.unit_test_cases` element of the unit
> +test suite structure.  Ex:
> +
> +.. code-block:: c
> +   :linenos:
> +
> +   #include <time.h>
> +
> +   #include <rte_common.h>
> +   #include <rte_cycles.h>
> +   #include <rte_hexdump.h>
> +   #include <rte_random.h>
> +
> +   #include "test.h"
> +
> +   static int testsuite_setup(void) { return TEST_SUCCESS; }
> +   static void testsuite_teardown(void) { }
> +
> +   static int ut_setup(void) { return TEST_SUCCESS; }
> +   static void ut_teardown(void) { }
> +
> +   static int test_case_first(void) { return TEST_SUCCESS; }
> +
> +   static struct unit_test_suite example_testsuite = {
> +          .suite_name = "EXAMPLE TEST SUITE",
> +          .setup = testsuite_setup,
> +          .teardown = testsuite_teardown,
> +          .unit_test_cases = {
> +               TEST_CASE_ST(ut_setup, ut_teardown, test_case_first),
> +
> +               TEST_CASES_END(), /**< NULL terminate unit test array */
> +          },
> +   };
> +
> +   static int example_tests()
> +   {
> +       return unit_test_suite_runner(&example_testsuite);
> +   }
> +
> +   REGISTER_TEST_COMMAND(example_autotest, example_tests);
> +
> +The above code block is a small example that can be used to create a
> +complete test suite with test case.
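
Maybe also worth showing here how to run the newly registered command after
rebuilding, consistent with the earlier "Running a test" section:

```shell
$ ./build/app/test/dpdk-test example_autotest
```
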
> +
> +
> +Designing a test
> +----------------
> +
> +Test cases have multiple ways of indicating an error has occurred,
> +in order to reflect failure state back to the runner.  Using the
> +various methods of indicating errors can assist in not only validating
> +the requisite functionality is working, but also to help debug when
> +a change in environment or code has caused things to go wrong.
> +
> +The first way to indicate a generic error is by returning a test
> +result failure, using the *TEST_FAILED* error code.  This is the most
> +basic way of indicating that an error has occurred in a test routine.
> +It isn't very informative to the user, so it should really be used in
> +cases where the test has catastrophically failed.
> +
> +The preferred method of indicating an error is via the
> +`RTE_TEST_ASSERT` family of macros, which will immediately return
> +*TEST_FAILED* error condition, but will also log details about the
> +failure.  The basic form is:
> +
> +.. code-block:: c
> +
> +   RTE_TEST_ASSERT(cond, msg, ...)
> +
> +In the above macro, *cond* is the condition to evaluate to **true**.
> +Any generic condition can go here.  The *msg* parameter will be a
> +message to display if *cond* evaluates to **false**.  Some specialized
> +macros already exist.  See `lib/librte_eal/include/rte_test.h` for
> +a list of defined test assertions.
> +
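Maybe worth showing one or two of the macros in use here, so readers don't have
to jump to rte_test.h. Something like below (just a sketch; the macros come
from rte_test.h and TEST_SUCCESS from the test harness):

```c
/* Sketch only: assumes the DPDK test harness ("test.h" / rte_test.h). */
static int
test_example_asserts(void)
{
	unsigned int len = 4;
	const char *name = "example";

	/* Returns TEST_FAILED and logs the message if the condition is false. */
	RTE_TEST_ASSERT(len > 0, "len should be positive, got %u", len);

	/* Specialized variants also exist in rte_test.h. */
	RTE_TEST_ASSERT_EQUAL(len, 4, "unexpected len %u", len);
	RTE_TEST_ASSERT_NOT_NULL(name, "name should not be NULL");

	return TEST_SUCCESS;
}
```
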
> +Sometimes it is important to indicate that a test needs to be
> +skipped, either because the environment isn't able to support running
> +the test, or because some requisite functionality isn't available.  The
> +test suite supports returning a result of `TEST_SKIPPED` during test
> +case setup, or during test case execution to indicate that the
> +preconditions of the test aren't available.  Ex::
> +
> +  $ meson test -C build --suite fast-tests
> +  ninja: Entering directory `/home/aconole/git/dpdk/build'
> +  [2543/2543] Linking target app/test/dpdk-test.
> +  1/60 DPDK:fast-tests / acl_autotest          OK       3.17 s
> +  2/60 DPDK:fast-tests / bitops_autotest       OK       0.22 s
> +  3/60 DPDK:fast-tests / byteorder_autotest    OK       0.22 s
> +  ...
> +  46/60 DPDK:fast-tests / ipsec_autotest       SKIP     0.22 s
> +  ...
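
A small code sketch for the skip case might help too, e.g. below (the
'env_has_feature()' check is a hypothetical placeholder for whatever
capability check applies to the test):

```c
/* Sketch: skip the whole test case when a precondition isn't met.
 * 'env_has_feature()' is a hypothetical placeholder check. */
static int
ut_setup(void)
{
	if (!env_has_feature())
		return TEST_SKIPPED;

	return TEST_SUCCESS;
}
```
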
> +
> +
> +Checking code coverage
> +----------------------
> +The meson build system supports generating a code coverage report
> +via the `-Db_coverage=true` option, in conjunction with a package
> +like **lcov**, to generate an HTML code coverage report.  Example::
> +
> +  $ meson setup build -Db_coverage=true
> +  $ meson test -C build --suite fast-tests
> +  $ ninja coverage-html -C build
> +
> +The above will generate an html report in the
> +`build/meson-logs/coveragereport/` directory that can be explored
> +for detailed code covered information.  This can be used to assist
> +in test development.
> +
> +
> +Adding a suite to the default
> +-----------------------------
> +

What is "the default"?

> +Adding to one of the default tests involves editing the appropriate
> +meson build file `app/test/meson.build` and adding the command to
> +the correct test suite class.  Once added, the new test suite will
> +be run as part of the appropriate class (fast, perf, driver, etc.).
> +

Here the "meson test suite" is referenced as 'class'; let's figure out the
naming for it and use it consistently.

> +Some of these default test suites are run during continuous integration

Same comment for "default test suites" usage.

> +tests, making regression checking automatic for new patches submitted
> +to the project.
> 

Is there a rule to document how to decide whether to add (or not) a test case
to a meson test suite?

