[dpdk-dev] [PATCH v2 2/2] app/testpmd: fix tunnel offload private items location

Ferruh Yigit ferruh.yigit at intel.com
Fri Apr 30 15:49:47 CEST 2021


On 4/25/2021 4:57 PM, Gregory Etelson wrote:
> The tunnel offload API requires the application to query the PMD for
> specific flow items and actions. The application uses these PMD-specific
> elements to build flow rules according to the tunnel offload model.

Can you please give some samples of the "PMD specific elements" that the
application is required to query? That would help to understand the issue
better.
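
For reference, my understanding from rte_flow.h is that the application
queries the PMD roughly as below (the tunnel type here is only for
illustration, error handling omitted):

    uint16_t port_id = 0;
    struct rte_flow_tunnel tunnel = { .type = RTE_FLOW_ITEM_TYPE_VXLAN };
    struct rte_flow_action *pmd_actions;
    struct rte_flow_item *pmd_items;
    uint32_t num_pmd_actions, num_pmd_items;
    struct rte_flow_error error;

    /* PMD returns its private actions for the tunnel-set (decap) rule. */
    rte_flow_tunnel_decap_set(port_id, &tunnel, &pmd_actions,
                              &num_pmd_actions, &error);
    /* PMD returns its private items for the tunnel-match rule. */
    rte_flow_tunnel_match(port_id, &tunnel, &pmd_items,
                          &num_pmd_items, &error);

And these are the elements that port_flow_tunnel_offload_cmd_prep() merges
into the application's pattern/actions below, correct?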

> The model does not restrict the location of the private elements in a
> flow rule, but the current MLX5 PMD implementation expects that a tunnel
> offload rule will begin with the PMD-specific elements.

Why do we need to refer to the mlx5 PMD implementation in a testpmd patch? Is
this patch trying to align testpmd with the mlx5 implementation?

> The patch places the tunnel offload private PMD flow elements between
> general RTE flow elements in a rule.
> 

Why?

Overall, what was the problem, what was failing, and what was its impact?

And how does changing the private elements' location in the flow rule solve
the issue?
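
If I read the hunks below correctly, what changes is the layout of the
prepared elements. A rough sketch of the tunnel-match pattern, assuming a
single hypothetical PMD private item and ETH as the first application item:

    /* before this patch: PMD private items placed at index 0 */
    struct rte_flow_item items_before[] = {
        /* pft->pmd_items copied here first */
        { .type = RTE_FLOW_ITEM_TYPE_ETH },
        { .type = RTE_FLOW_ITEM_TYPE_END },
    };

    /* after this patch: a leading VOID, private items no longer first */
    struct rte_flow_item items_after[] = {
        { .type = RTE_FLOW_ITEM_TYPE_VOID },
        /* pft->pmd_items copied here */
        { .type = RTE_FLOW_ITEM_TYPE_ETH },
        { .type = RTE_FLOW_ITEM_TYPE_END },
    };

If so, is the intention to verify that PMDs do not rely on the private
elements being at the start of the rule?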

> Cc: stable at dpdk.org
> Fixes: 1b9f274623b8 ("app/testpmd: add commands for tunnel offload")
> 
> Signed-off-by: Gregory Etelson <getelson at nvidia.com>
> Acked-by: Viacheslav Ovsiienko <viacheslavo at nvidia.com>
> ---
>  app/test-pmd/config.c | 14 ++++++++------
>  1 file changed, 8 insertions(+), 6 deletions(-)
> 
> diff --git a/app/test-pmd/config.c b/app/test-pmd/config.c
> index 40b2b29725..1520b8193f 100644
> --- a/app/test-pmd/config.c
> +++ b/app/test-pmd/config.c
> @@ -1664,7 +1664,7 @@ port_flow_tunnel_offload_cmd_prep(portid_t port_id,
>  		     aptr->type != RTE_FLOW_ACTION_TYPE_END;
>  		     aptr++, num_actions++);
>  		pft->actions = malloc(
> -				(num_actions +  pft->num_pmd_actions) *
> +				(num_actions +  pft->num_pmd_actions + 1) *
>  				sizeof(actions[0]));
>  		if (!pft->actions) {
>  			rte_flow_tunnel_action_decap_release(
> @@ -1672,9 +1672,10 @@ port_flow_tunnel_offload_cmd_prep(portid_t port_id,
>  					pft->num_pmd_actions, &error);
>  			return NULL;
>  		}
> -		rte_memcpy(pft->actions, pft->pmd_actions,
> +		pft->actions[0].type = RTE_FLOW_ACTION_TYPE_VOID;
> +		rte_memcpy(pft->actions + 1, pft->pmd_actions,
>  			   pft->num_pmd_actions * sizeof(actions[0]));
> -		rte_memcpy(pft->actions + pft->num_pmd_actions, actions,
> +		rte_memcpy(pft->actions + pft->num_pmd_actions + 1, actions,
>  			   num_actions * sizeof(actions[0]));
>  	}
>  	if (tunnel_ops->items) {
> @@ -1692,7 +1693,7 @@ port_flow_tunnel_offload_cmd_prep(portid_t port_id,
>  		for (iptr = pattern, num_items = 1;
>  		     iptr->type != RTE_FLOW_ITEM_TYPE_END;
>  		     iptr++, num_items++);
> -		pft->items = malloc((num_items + pft->num_pmd_items) *
> +		pft->items = malloc((num_items + pft->num_pmd_items + 1) *
>  				    sizeof(pattern[0]));
>  		if (!pft->items) {
>  			rte_flow_tunnel_item_release(
> @@ -1700,9 +1701,10 @@ port_flow_tunnel_offload_cmd_prep(portid_t port_id,
>  					pft->num_pmd_items, &error);
>  			return NULL;
>  		}
> -		rte_memcpy(pft->items, pft->pmd_items,
> +		pft->items[0].type = RTE_FLOW_ITEM_TYPE_VOID;
> +		rte_memcpy(pft->items + 1, pft->pmd_items,
>  			   pft->num_pmd_items * sizeof(pattern[0]));
> -		rte_memcpy(pft->items + pft->num_pmd_items, pattern,
> +		rte_memcpy(pft->items + pft->num_pmd_items + 1, pattern,
>  			   num_items * sizeof(pattern[0]));
>  	}
>  
> 
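
Btw, if I am reading it right, the "+ 1" in the malloc() sizes accounts for
the new leading VOID element; num_actions / num_items already include the
trailing END since the counting loops start from 1:

    /* total elements = 1 (VOID) + PMD private ones + application ones,
     * where the application count already includes the END element */
    pft->actions = malloc((num_actions + pft->num_pmd_actions + 1) *
                          sizeof(actions[0]));

It can be good to state this reasoning in the commit log too.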


