|SUCCESS| [RFC,2/2] dts: add latency coverage for cryptodev testing

dpdklab at iol.unh.edu dpdklab at iol.unh.edu
Wed Apr 29 00:26:43 CEST 2026


Test-Label: ai-code-review
Test-Status: SUCCESS
https://dpdk.org/patch/163334

_AI Code Review_

=== Patch Review: series.patch (via Claude) ===
Review date: 2026-04-28

## Review: RFC PATCH series for cryptodev latency testing

Commit: 8ee2df9da125 (referenced in Fixes tag)
Author: Andrew Bailey <abailey at iol.unh.edu>
Subject: [RFC PATCH 1/2] dts: update parsing for cryptodev latency

This patch updates the regex pattern used to parse the burst size
attribute from cryptodev latency output.

> diff --git a/dts/api/cryptodev/types.py b/dts/api/cryptodev/types.py
> index df73a86fa4..0942d08371 100644
> --- a/dts/api/cryptodev/types.py
> +++ b/dts/api/cryptodev/types.py
> @@ -69,7 +69,7 @@ class LatencyResults(CryptodevResults):
>      )
>      #: burst size ran with app
>      burst_size: int = field(
> -        metadata=TextParser.find_int(rf"Burst(?:.*\n\s+\d+\s+){2}?(?: size:\s+)?(\d+)"),
> +        metadata=TextParser.find_int(r"Burst size:\s+(\d+)"),
>      )

The simplified regex is certainly easier to read, but does the new
pattern correctly handle all output formats from dpdk-test-cryptodev? The
original pattern appears to tolerate intervening lines and optional
formatting. Could there be cases where "Burst size:" is not present in
the output?
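
If the output format varies between PMDs or app versions, a slightly
more whitespace-tolerant pattern may be worth considering. A minimal
sketch (the sample lines are hypothetical; the real dpdk-test-cryptodev
output should be confirmed):

    import re

    # Tolerates variable whitespace between "Burst" and "size" and around the value.
    pattern = re.compile(r"Burst\s+size:\s*(\d+)")

    # Hypothetical sample lines; confirm against actual app output.
    for line in ("Burst size: 32", "Burst  size:\t32"):
        match = pattern.search(line)
        print(int(match.group(1)) if match else "no match")  # prints 32 twice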

[ ... ]

---

Commit: (second patch in series)
Author: Andrew Bailey <abailey at iol.unh.edu>
Subject: [RFC PATCH 2/2] dts: add latency coverage for cryptodev testing

This patch adds a test suite for cryptodev latency measurements.

> diff --git a/dts/tests/TestSuite_cryptodev_latency.py b/dts/tests/TestSuite_cryptodev_latency.py
> new file mode 100644
> index 0000000000..39c9ee02c4
> --- /dev/null
> +++ b/dts/tests/TestSuite_cryptodev_latency.py

[ ... ]

> +config_list: list[dict[str, int | float | str]] = [
> +    {"buff_size": 64, "avg_cycles": 9999.00, "avg_time_us": 9999.0},
> +    {"buff_size": 512, "avg_cycles": 9999.00, "avg_time_us": 9999.0},
> +    {"buff_size": 2048, "avg_cycles": 9999.00, "avg_time_us": 9999.0},
> +]

Does using placeholder values of 9999.00 make sense as default baselines?
Would these cause all tests to pass trivially until real baselines are
configured?
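
One illustrative alternative (not part of the patch) would be to mark
unmeasured baselines with None and refuse to compare against them, so
unconfigured entries fail or skip loudly instead of passing:

    # Sketch only: None marks a baseline that has not been measured yet.
    config_list: list[dict[str, int | float | None]] = [
        {"buff_size": 64, "avg_cycles": None, "avg_time_us": None},
    ]

    for cfg in config_list:
        if cfg["avg_cycles"] is None or cfg["avg_time_us"] is None:
            # In the suite this could call skip() instead of raising.
            raise ValueError(f"no baseline for buff_size={cfg['buff_size']}")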

[ ... ]

> +    def _verify_latency(
> +        self,
> +        results: list[CryptodevResults],
> +        key: str,
> +    ) -> list[dict[str, int | float | str]]:
> +        result_list: list[dict[str, int | float | str]] = []
> +
> +        for result in results:
> +            # get the corresponding baseline for the current buffer size
> +            parameters: dict[str, int | float | str] = list(
> +                filter(
> +                    lambda x: x["buff_size"] == result.buffer_size,
> +                    self.latency_test_parameters[key],
> +                )
> +            )[0]

Does this code handle the case where no matching buffer size is found in
latency_test_parameters? Indexing with [0] would raise IndexError if the
filter returns an empty list, surfacing as a confusing traceback rather
than a clear configuration error.
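
A defensive variant, sketched here as a standalone helper (the name
find_baseline is illustrative, not from the series), makes the
missing-baseline case explicit:

    from typing import Any

    def find_baseline(
        baselines: list[dict[str, Any]], buff_size: int
    ) -> dict[str, Any]:
        """Return the baseline entry for buff_size, failing with a clear message."""
        entry = next((b for b in baselines if b["buff_size"] == buff_size), None)
        if entry is None:
            raise ValueError(f"no baseline configured for buff_size={buff_size}")
        return entry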

> +            test_result = True
> +            expected_cycles = parameters["avg_cycles"]
> +            expected_time_us = parameters["avg_time_us"]
> +            measured_time_delta = abs(
> +                (getattr(result, "avg_time_us") - expected_time_us) / expected_time_us
> +            )

Does this code handle the case where expected_time_us is zero? This would
cause a ZeroDivisionError.

> +            measured_cycles_delta = abs(
> +                (getattr(result, "avg_cycles") - expected_cycles) / expected_cycles
> +            )

Similarly, does this code handle the case where expected_cycles is zero?
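
Both deltas could be routed through a small helper that rejects a zero
baseline up front; an illustrative sketch:

    def relative_delta(measured: float, expected: float) -> float:
        """Relative deviation of measured from expected, guarding expected == 0."""
        if expected == 0:
            # A zero baseline is almost certainly a configuration error;
            # fail with a clear message instead of a ZeroDivisionError.
            raise ValueError("expected baseline must be non-zero")
        return abs((measured - expected) / expected)

    # e.g. relative_delta(105.0, 100.0) == 0.05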

[ ... ]

> +    @crypto_test
> +    def vdev_aesni_mb(self) -> None:
> +        """aesni_mb vdev latency test.
> +
> +        Steps:
> +            * Create a cryptodev instance with provided device type and buffer sizes.
> +        Verify:
> +            * The latency is below or within delta of provided baseline.
> +
> +        Raises:
> +            SkippedTestException: When configuration is not provided.
> +        """
> +        if "vdev_aesni_mb" not in self.latency_test_parameters:
> +            skip("test not configured")
> +        app = Cryptodev(

[ ... ]

> +        results = self._verify_latency(app.run_app(), "vdev_aesni_mb")
> +        self._print_stats(results)
> +        for result in results:
> +            verify(result["passed"] == "PASS", "Gbps fell below delta tolerance")

Does the error message "Gbps fell below delta tolerance" match the test
context? This appears to be a latency test measuring cycles and
microseconds, not throughput in Gbps; something like "latency exceeded
delta tolerance" would describe the failure more accurately.

> +    @crypto_test
> +    def vdev_aesni_mb_cipher_then_auth(self) -> None:
> +        """aesni_mb vdev cipher and auth latency test.

[ ... ]

> +        if "vdev_aesni_mb" not in self.latency_test_parameters:
> +            skip("test not configured")

Should this check for "vdev_aesni_mb_cipher_then_auth" instead of
"vdev_aesni_mb"? The function name and the configuration key appear to
be mismatched.

> +        app = Cryptodev(

[ ... ]

> +        results = self._verify_latency(app.run_app(), "vdev_aesni_mb_cipher_then_auth")

The _verify_latency call uses "vdev_aesni_mb_cipher_then_auth" as the key,
but the configuration check above looks for "vdev_aesni_mb". If only
"vdev_aesni_mb" is configured, the test would run and then raise KeyError
inside _verify_latency; if only the cipher-then-auth key is configured,
the test would skip even though its configuration exists.
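
One way to keep the check and the lookup in sync, shown here as a
fragment that mirrors the patch's names (a suggestion only), is to bind
the configuration key once per test:

    # Fragment of the test method; skip(), app, and _verify_latency()
    # are from the patch, not defined here.
    key = "vdev_aesni_mb_cipher_then_auth"
    if key not in self.latency_test_parameters:
        skip("test not configured")
    ...
    results = self._verify_latency(app.run_app(), key)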

