|SUCCESS| eal: fix core_index for non-EAL registered threads

dpdklab at iol.unh.edu
Fri Apr 24 23:33:16 CEST 2026


Test-Label: ai-code-review
Test-Status: SUCCESS
https://dpdk.org/patch/163283

_AI Code Review_

=== Patch Review: series.patch (via Claude) ===
Review date: 2026-04-24

Commit: [git sha not available from patch file]
Author: Maxime Peim <maxime.peim at gmail.com>

Subject: eal: fix core_index for non-EAL registered threads

This patch addresses a performance regression where non-EAL threads
registered via rte_thread_register() are assigned a valid lcore_id but
their core_index remains at -1. This causes rte_lcore_index() to return
-1, forcing libraries to fall back to shared global paths with severe
contention under concurrent access.

The fix sets core_index to cfg->lcore_count during allocation and resets
it to -1 during release and error paths.

> diff --git a/lib/eal/common/eal_common_lcore.c b/lib/eal/common/eal_common_lcore.c
> index 39411f9370..ae085d73e4 100644
> --- a/lib/eal/common/eal_common_lcore.c
> +++ b/lib/eal/common/eal_common_lcore.c
> @@ -378,6 +378,7 @@ eal_lcore_non_eal_allocate(void)
>  	for (lcore_id = 0; lcore_id < RTE_MAX_LCORE; lcore_id++) {
>  		if (cfg->lcore_role[lcore_id] != ROLE_OFF)
>  			continue;
> +		lcore_config[lcore_id].core_index = cfg->lcore_count;
>  		cfg->lcore_role[lcore_id] = ROLE_NON_EAL;
>  		cfg->lcore_count++;
>  		break;

Does this assignment create a behavioral compatibility issue? The
struct layout is unchanged, but does assigning sequential indices to
non-EAL threads break existing code that expects core_index to remain
-1 for ROLE_NON_EAL lcores?

In eal_lcore_non_eal_allocate(), can setting core_index before
cfg->lcore_count is incremented cause index collisions with existing
EAL lcores? The scheme assumes cfg->lcore_count always equals the
number of lcores already holding an index, but does that invariant hold
when non-EAL threads are registered and released dynamically?

> @@ -399,6 +400,7 @@ eal_lcore_non_eal_allocate(void)
>  		}
>  		EAL_LOG(DEBUG, "Initialization refused for lcore %u.",
>  			lcore_id);
> +		lcore_config[lcore_id].core_index = -1;
>  		cfg->lcore_role[lcore_id] = ROLE_OFF;
>  		cfg->lcore_count--;
>  		lcore_id = RTE_MAX_LCORE;

On the error path, does decrementing cfg->lcore_count after setting
core_index to -1 leave cfg->lcore_count out of sync with the actual
number of assigned core_index values? If another non-EAL thread is
registered immediately after this failure, could it be assigned the same
core_index value that was just released?

> @@ -420,6 +422,7 @@ eal_lcore_non_eal_release(unsigned int lcore_id)
>  		goto out;
>  	TAILQ_FOREACH(callback, &lcore_callbacks, next)
>  		callback_uninit(callback, lcore_id);
> +	lcore_config[lcore_id].core_index = -1;
>  	cfg->lcore_role[lcore_id] = ROLE_OFF;
>  	cfg->lcore_count--;
>  out:

In eal_lcore_non_eal_release(), does decrementing cfg->lcore_count
without recalculating the core_index values for remaining lcores create
gaps in the index sequence? If an lcore with core_index N is released,
do all lcores with higher indices now have incorrect core_index values?

Does the current approach of using cfg->lcore_count as the next index
to hand out handle lcores being released and reallocated in arbitrary
order? Should core_index instead be recalculated as a dense 0-based
index over all active lcores whenever the set of registered lcores
changes?
