Failed to install QUEUE action using async API on ConnectX-6 NIC

Dariusz Sosnowski dsosnowski at nvidia.com
Wed Jun 12 18:08:14 CEST 2024


Hi,

> From: Tao Li <byteocean at hotmail.com> 
> Sent: Wednesday, June 12, 2024 16:45
> To: users at dpdk.org
> Cc: tao.li06 at sap.com
> Subject: Failed to install QUEUE action using async API on ConnectX-6 NIC
> 
> Hi all,
> 
> I am using the async API to install flow rules that perform the QUEUE action, in order to capture packets matching a certain pattern for processing by a DPDK application. The ConnectX-6 NIC is configured in multiport e-switch mode, as outlined in the documentation (https://doc.dpdk.org/guides/nics/mlx5.html#multiport-e-switch). Currently, I am facing an issue where I cannot create the corresponding templates for this purpose. The commands to start test-pmd and to create the pattern and action templates are as follows:
> 
> <Command to start test-pmd>
> sudo ./dpdk-testpmd -a 3b:00.0,dv_flow_en=2,representor=pf0-1vf0 -- -i --rxq=1 --txq=1 --flow-isolate-all
> </Command to start test-pmd>
> 
> <Not working test-pmd commands>
> port stop all
> flow configure 0 queues_number 1 queues_size 10 counters_number 0 aging_counters_number 0 meters_number 0 flags 0
> flow configure 1 queues_number 1 queues_size 10 counters_number 0 aging_counters_number 0 meters_number 0 flags 0
> flow configure 2 queues_number 1 queues_size 10 counters_number 0 aging_counters_number 0 meters_number 0 flags 0
> flow configure 3 queues_number 1 queues_size 10 counters_number 0 aging_counters_number 0 meters_number 0 flags 0
> port start all
> 
> flow pattern_template 0 create transfer relaxed no pattern_template_id 10  template represented_port ethdev_port_id is 0 / eth type is 0x86dd  / end
> flow actions_template 0 create ingress  actions_template_id 10  template queue / end mask queue index 0xffff / end
> flow template_table 0 create  group 0 priority 0  transfer wire_orig table_id 5 rules_number 8 pattern_template 10 actions_template 10
> flow queue 0 create 0 template_table 5 pattern_template 0 actions_template 0 postpone no pattern represented_port ethdev_port_id is 0 / eth type is 0x86dd  / end actions queue index 0 / end
> flow push 0 queue 0
> </Not working test-pmd commands>
> 
> The error encountered during the execution of the above test-pmd commands is:
> 
> <Encountered error>
> mlx5_net: [mlx5dr_action_print_combo]: Invalid action_type sequence
> mlx5_net: [mlx5dr_action_print_combo]: TIR
> mlx5_net: [mlx5dr_matcher_check_and_process_at]: Invalid combination in action template
> mlx5_net: [mlx5dr_matcher_bind_at]: Invalid at 0
> </Encountered error>
> 
> Upon closer inspection of the driver code in DPDK 23.11 (and also the latest DPDK main branch), it appears that the error occurs because MLX5DR_ACTION_TYP_TIR is not listed as a valid action for the MLX5DR_TABLE_TYPE_FDB table type. If the following patch is applied, the error is resolved, and the DPDK application is able to capture matching packets:
> 
> <patch to apply>
> diff --git a/drivers/net/mlx5/hws/mlx5dr_action.c b/drivers/net/mlx5/hws/mlx5dr_action.c
> index 862ee3e332..c444ec761e 100644
> --- a/drivers/net/mlx5/hws/mlx5dr_action.c
> +++ b/drivers/net/mlx5/hws/mlx5dr_action.c
> @@ -85,6 +85,7 @@ static const uint32_t action_order_arr[MLX5DR_TABLE_TYPE_MAX][MLX5DR_ACTION_TYP_
>                 BIT(MLX5DR_ACTION_TYP_VPORT) |
>                 BIT(MLX5DR_ACTION_TYP_DROP) |
>                 BIT(MLX5DR_ACTION_TYP_DEST_ROOT) |
> +               BIT(MLX5DR_ACTION_TYP_TIR) |
>                 BIT(MLX5DR_ACTION_TYP_DEST_ARRAY),
>                 BIT(MLX5DR_ACTION_TYP_LAST),
>         }, 
> </patch to apply>
> I would greatly appreciate it if anyone could provide insight into whether this behavior is intentional or if it is a bug in the driver. Many thanks in advance.

The fact that it works with this code change is not intended behavior; we do not support using QUEUE and RSS actions on transfer flow tables.

Also, there is another issue with the table and actions template attributes:

- the table uses transfer,
- the actions template uses ingress.

Using them together is incorrect.
In the upcoming DPDK release, we are adding additional validations which will guard against such combinations.

With your configuration, it is enough to create an ingress flow table on port 0
containing a flow rule that matches IPv6 traffic and forwards it to a queue on port 0.
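
For example, a minimal testpmd sketch of that approach, reusing the IDs from your commands, could look like the following (untested; adjust the queue sizes, IDs and group to your setup):

port stop 0
flow configure 0 queues_number 1 queues_size 10 counters_number 0 aging_counters_number 0 meters_number 0 flags 0
port start 0

flow pattern_template 0 create ingress relaxed no pattern_template_id 10 template eth type is 0x86dd / end
flow actions_template 0 create ingress actions_template_id 10 template queue / end mask queue index 0xffff / end
flow template_table 0 create group 0 priority 0 ingress table_id 5 rules_number 8 pattern_template 10 actions_template 10
flow queue 0 create 0 template_table 5 pattern_template 0 actions_template 0 postpone no pattern eth type is 0x86dd / end actions queue index 0 / end
flow push 0 queue 0

The only changes compared to your original commands are that the pattern template, actions template and table all consistently use the ingress attribute, and the represented_port item is dropped, since it applies only to transfer rules.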

By default, any traffic that is not explicitly dropped or forwarded in the E-Switch is handled by the ingress flow rules of the port on which the packet was received.
Since you are running with flow isolation enabled, this means that such traffic will go to the kernel interface unless you explicitly match it on ingress.

> 
> Best regards,
> Tao

Best regards,
Dariusz Sosnowski

