|SUCCESS| [1/1] ml/cnxk: support for 64-bit int type in metadata
dpdklab at iol.unh.edu
Tue Mar 31 22:38:06 CEST 2026
Test-Label: ai-code-review
Test-Status: SUCCESS
https://dpdk.org/patch/162792
_AI Code Review_
=== Patch Review: series.patch (via Claude) ===
Review date: 2026-03-31
Reviewing patch: ml/cnxk: support for 64-bit int type in metadata
This patch adds support for 64-bit integer data types (INT64 and UINT64)
to the CNXK ML driver's model metadata handling.
> diff --git a/drivers/ml/cnxk/cn10k_ml_model.c b/drivers/ml/cnxk/cn10k_ml_model.c
> index 12a2dda800e..3eb64825d24 100644
> --- a/drivers/ml/cnxk/cn10k_ml_model.c
> +++ b/drivers/ml/cnxk/cn10k_ml_model.c
> @@ -33,6 +33,10 @@ cn10k_ml_io_type_map(uint8_t type)
> return RTE_ML_IO_TYPE_FP16;
> case 8:
> return RTE_ML_IO_TYPE_FP32;
> + case 9:
> + return RTE_ML_IO_TYPE_INT64;
> + case 10:
> + return RTE_ML_IO_TYPE_UINT64;
> }
>
> return RTE_ML_IO_TYPE_UNKNOWN;
Does this switch statement handle all metadata type values correctly? The
function cn10k_ml_io_type_map() has no explicit default case, but the
`return RTE_ML_IO_TYPE_UNKNOWN;` after the switch catches any type value
outside 1..10, so a metadata type greater than 10 is reported as unknown.
It would be worth confirming that all callers reject
RTE_ML_IO_TYPE_UNKNOWN rather than proceeding with it.
Also, are RTE_ML_IO_TYPE_INT64 and RTE_ML_IO_TYPE_UINT64 already defined
in the DPDK ML API headers (rte_mldev.h), or are they being added in a
separate patch of this series?
> diff --git a/drivers/ml/cnxk/cn10k_ml_model.h b/drivers/ml/cnxk/cn10k_ml_model.h
> index 6744175cd5e..746eed7c702 100644
> --- a/drivers/ml/cnxk/cn10k_ml_model.h
> +++ b/drivers/ml/cnxk/cn10k_ml_model.h
> @@ -224,12 +224,14 @@ struct cn10k_ml_model_metadata_input_section {
> /* Type of incoming input
> * 1 = INT8, 2 = UINT8, 3 = INT16, 4 = UINT16,
> * 5 = INT32, 6 = UINT32, 7 = FP16, 8 = FP32
> + * 9 = INT64, 10 = UINT64
> */
> uint8_t input_type;
>
> /* Type of input required by model
> * 1 = INT8, 2 = UINT8, 3 = INT16, 4 = UINT16,
> * 5 = INT32, 6 = UINT32, 7 = FP16, 8 = FP32
> + * 9 = INT64, 10 = UINT64
> */
> uint8_t model_input_type;
>
> @@ -282,12 +284,14 @@ struct cn10k_ml_model_metadata_output_section {
> /* Type of outgoing output
> * 1 = INT8, 2 = UINT8, 3 = INT16, 4 = UINT16
> * 5 = INT32, 6 = UINT32, 7 = FP16, 8 = FP32
> + * 9 = INT64, 10 = UINT64
> */
> uint8_t output_type;
>
> /* Type of output produced by model
> * 1 = INT8, 2 = UINT8, 3 = INT16, 4 = UINT16
> * 5 = INT32, 6 = UINT32, 7 = FP16, 8 = FP32
> + * 9 = INT64, 10 = UINT64
> */
> uint8_t model_output_type;
The comment updates are documentation-only and appear correct.
Overall concerns:
1. Are the new RTE_ML_IO_TYPE_INT64 and RTE_ML_IO_TYPE_UINT64 enum values
defined elsewhere in this patch series? The patch should either include
those definitions or have a dependency documented.
2. Does the driver properly handle the size calculations for 64-bit types
elsewhere in the codebase? Functions that compute buffer sizes or iterate
over tensor data may need updates to account for 8-byte integers.
3. Is there anywhere else in the driver that switches or checks on IO
types that might need updating? For example, data conversion functions,
size calculation helpers, or validation code?
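Concern #2 can be illustrated with a hypothetical size helper (the names here are not the driver's actual functions): any code that derives buffer sizes from the metadata type code must return 8 bytes for the two new codes, or 64-bit tensors will be under-allocated.

```c
#include <stdint.h>
#include <stddef.h>

/* Hypothetical helper, for illustration only: element size in bytes
 * for each metadata type code. The new codes 9 and 10 must map to
 * 8 bytes, otherwise buffer-size math breaks for 64-bit tensors. */
static size_t
io_type_size(uint8_t metadata_type)
{
	switch (metadata_type) {
	case 1: case 2:            /* INT8, UINT8 */
		return 1;
	case 3: case 4: case 7:    /* INT16, UINT16, FP16 */
		return 2;
	case 5: case 6: case 8:    /* INT32, UINT32, FP32 */
		return 4;
	case 9: case 10:           /* INT64, UINT64: the new 8-byte cases */
		return 8;
	}
	return 0;                  /* unknown type: caller must reject */
}

/* Bytes needed for a flat tensor of nb_elements of the given type. */
static size_t
tensor_bytes(uint8_t metadata_type, uint64_t nb_elements)
{
	return io_type_size(metadata_type) * nb_elements;
}
```

Auditing the driver for every switch or table keyed on the type code, and checking that each has a 9/10 arm, is the practical way to close out concerns #2 and #3.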
More information about the test-report mailing list