|SUCCESS| [1/1] ml/cnxk: enable data caching for all MRVL layers

dpdklab at iol.unh.edu
Tue Mar 31 19:55:22 CEST 2026


Test-Label: ai-code-review
Test-Status: SUCCESS
https://dpdk.org/patch/162790

_AI Code Review_

=== Patch Review: series.patch (via Claude) ===
Review date: 2026-03-31

Commit: 20260331085350.1105103-1-syalavarthi at marvell.com
Author: Srikanth Yalavarthi <syalavarthi at marvell.com>

ml/cnxk: enable data caching for all MRVL layers

Enabled data caching for all MRVL layers in TVM models.

> diff --git a/drivers/ml/cnxk/cn10k_ml_ops.c b/drivers/ml/cnxk/cn10k_ml_ops.c
> index b30af7c7a44..628ff963c3c 100644
> --- a/drivers/ml/cnxk/cn10k_ml_ops.c
> +++ b/drivers/ml/cnxk/cn10k_ml_ops.c

[ ... ]

> @@ -997,13 +997,8 @@ cn10k_ml_layer_start(void *device, uint16_t model_id, const char *layer_name)
>  	if (ret < 0) {
>  		cn10k_ml_layer_stop(device, model_id, layer_name);
>  	} else {
> -		if (cn10k_mldev->cache_model_data) {
> -			if ((model->type == ML_CNXK_MODEL_TYPE_GLOW &&
> -			     model->subtype == ML_CNXK_MODEL_SUBTYPE_GLOW_MRVL) ||
> -			    (model->type == ML_CNXK_MODEL_TYPE_TVM &&
> -			     model->subtype == ML_CNXK_MODEL_SUBTYPE_TVM_MRVL))
> -				ret = cn10k_ml_cache_model_data(cnxk_mldev, layer);
> -		}
> +		if (cn10k_mldev->cache_model_data)
> +			ret = cn10k_ml_cache_model_data(cnxk_mldev, layer);

Does this change properly handle models that are not MRVL subtypes? The
original code limited caching to the GLOW_MRVL and TVM_MRVL subtypes.
Does cn10k_ml_cache_model_data() correctly handle all model types and
subtypes now passed to it, or could this cause issues for models that
previously bypassed caching?

The commit message mentions "all MRVL layers in TVM models", but the
code change enables caching for all model types. Is the broader scope
intentional?
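If non-MRVL models must still skip caching, one option is to keep the
removed guard as a small helper predicate rather than dropping it. The
sketch below is self-contained: the enum and struct definitions are
stand-ins for the real ones in the cnxk driver headers (and the
ML_CNXK_MODEL_SUBTYPE_TVM_LLVM value is hypothetical, used only to model
a non-MRVL subtype); only the boolean logic mirrors the pre-patch check.

```c
/* Sketch: the original subtype guard factored into a helper.
 * Enum/struct definitions here are simplified stand-ins for the
 * driver's real types; ML_CNXK_MODEL_SUBTYPE_TVM_LLVM is hypothetical. */
#include <stdbool.h>

enum model_type { ML_CNXK_MODEL_TYPE_GLOW, ML_CNXK_MODEL_TYPE_TVM };

enum model_subtype {
	ML_CNXK_MODEL_SUBTYPE_GLOW_MRVL,
	ML_CNXK_MODEL_SUBTYPE_TVM_MRVL,
	ML_CNXK_MODEL_SUBTYPE_TVM_LLVM, /* hypothetical non-MRVL subtype */
};

struct model {
	enum model_type type;
	enum model_subtype subtype;
};

/* True only for the MRVL subtypes the pre-patch code allowed to cache. */
static bool
model_is_mrvl(const struct model *model)
{
	return (model->type == ML_CNXK_MODEL_TYPE_GLOW &&
		model->subtype == ML_CNXK_MODEL_SUBTYPE_GLOW_MRVL) ||
	       (model->type == ML_CNXK_MODEL_TYPE_TVM &&
		model->subtype == ML_CNXK_MODEL_SUBTYPE_TVM_MRVL);
}
```

With such a helper, the caching call site could read
`if (cn10k_mldev->cache_model_data && model_is_mrvl(model))`, preserving
the old behavior explicitly while keeping the branch compact. Whether
that guard is still needed is exactly the question above.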

>  	}
>  
>  	return ret;


More information about the test-report mailing list