Finer matching granularity with async template API
Asaf Penso
asafp at nvidia.com
Thu Mar 21 20:18:54 CET 2024
BTW,
In the non-working example I see ipv6 / ipv4 / icmp. Was this your intention, or did you mean ipv6 / icmp?
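To illustrate the two candidate layouts as rte_flow item sequences (a sketch only, not from the original mail; it assumes that ICMP carried directly over IPv6 would be matched as ICMPv6):

#include <rte_flow.h>

/* What the non-working command matches: ICMP over IPv4, tunnelled in IPv6. */
const struct rte_flow_item ipv4_in_ipv6[] = {
	{ .type = RTE_FLOW_ITEM_TYPE_ETH },
	{ .type = RTE_FLOW_ITEM_TYPE_IPV6 },
	{ .type = RTE_FLOW_ITEM_TYPE_IPV4 },
	{ .type = RTE_FLOW_ITEM_TYPE_ICMP },
	{ .type = RTE_FLOW_ITEM_TYPE_END },
};

/* The alternative: ICMP directly over IPv6, i.e. ICMPv6. */
const struct rte_flow_item plain_ipv6[] = {
	{ .type = RTE_FLOW_ITEM_TYPE_ETH },
	{ .type = RTE_FLOW_ITEM_TYPE_IPV6 },
	{ .type = RTE_FLOW_ITEM_TYPE_ICMP6 },
	{ .type = RTE_FLOW_ITEM_TYPE_END },
};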
Regards,
Asaf Penso
________________________________
From: Asaf Penso <asafp at nvidia.com>
Sent: Thursday, March 21, 2024 9:17:04 PM
To: Tao Li <byteocean at hotmail.com>; users at dpdk.org <users at dpdk.org>
Subject: Re: Finer matching granularity with async template API
Hello Tao,
What is the output / error message you get?
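Note that with the async API the per-operation status only comes back once the completion queue is polled with rte_flow_pull() ("flow pull 0 queue 0" in testpmd); a silent failure may simply mean the result was never pulled. A minimal polling sketch (port id and queue depth are placeholders):

#include <stdio.h>
#include <rte_flow.h>

/* Drain up to 64 completions from one flow queue and report failures. */
static void
drain_flow_completions(uint16_t port_id, uint32_t queue_id)
{
	struct rte_flow_op_result results[64];
	struct rte_flow_error err;
	int n = rte_flow_pull(port_id, queue_id, results, 64, &err);

	for (int i = 0; i < n; i++)
		if (results[i].status == RTE_FLOW_OP_ERROR)
			printf("flow op %d on queue %u failed\n", i, queue_id);
}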
Regards,
Asaf Penso
________________________________
From: Tao Li <byteocean at hotmail.com>
Sent: Thursday, March 21, 2024 5:44:00 PM
To: users at dpdk.org <users at dpdk.org>
Subject: Finer matching granularity with async template API
Hi all,
I am using the async template API to install flow rules that act on packets to achieve IP(v4)-in-IP(v6) tunnelling. Currently I am facing an issue where I cannot match incoming traffic at a finer granularity. The test-pmd commands in use are as follows:
<Non-working test-pmd commands>
port stop all
flow configure 0 queues_number 4 queues_size 64 counters_number 0 aging_counters_number 0 meters_number 0 flags 0 # PF0
flow configure 1 queues_number 4 queues_size 64 counters_number 0 aging_counters_number 0 meters_number 0 flags 0
flow configure 2 queues_number 4 queues_size 64 counters_number 0 aging_counters_number 0 meters_number 0 flags 0
flow configure 3 queues_number 4 queues_size 64 counters_number 0 aging_counters_number 0 meters_number 0 flags 0 # PF1V0
port start all
set verbose 1
flow pattern_template 0 create transfer relaxed no pattern_template_id 10 template represented_port ethdev_port_id is 0 / eth / ipv6 / ipv4 / icmp / end
set raw_decap 0 eth / ipv6 / end_set
set raw_encap 0 eth src is 11:22:33:44:55:66 dst is 66:9d:a7:fd:fb:43 type is 0x0800 / end_set
flow actions_template 0 create transfer actions_template_id 10 template raw_decap index 0 / raw_encap index 0 / represented_port / end mask raw_decap index 0 / raw_encap index 0 / represented_port / end
flow template_table 0 create group 0 priority 0 transfer wire_orig table_id 5 rules_number 8 pattern_template 10 actions_template 10
flow queue 0 create 0 template_table 5 pattern_template 0 actions_template 0 postpone no pattern represented_port ethdev_port_id is 0 / eth / ipv6 / ipv4 / icmp / end actions raw_decap index 0 / raw_encap index 0 / represented_port ethdev_port_id 3 / end
flow push 0 queue 0
</Non-working test-pmd commands>
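For reference, these commands correspond roughly to the following C sequence against the async template API (a sketch only, assuming DPDK 22.11 or newer; port ids, the encap/decap buffer contents, and error handling are all placeholders):

#include <rte_ethdev.h>
#include <rte_flow.h>

static struct rte_flow *
install_tunnel_rule(uint16_t proxy_port, uint16_t src_port_id, uint16_t dst_port_id)
{
	struct rte_flow_error err;

	/* "flow configure": pre-allocate 4 flow queues of 64 entries each;
	 * the port must be stopped at this point, as in the script above. */
	const struct rte_flow_port_attr port_attr = { 0 };
	const struct rte_flow_queue_attr queue_attr = { .size = 64 };
	const struct rte_flow_queue_attr *queue_attrs[4] = {
		&queue_attr, &queue_attr, &queue_attr, &queue_attr
	};
	rte_flow_configure(proxy_port, &port_attr, 4, queue_attrs, &err);

	/* "flow pattern_template ... transfer relaxed no": the five-item match.
	 * Template creation only reads the masks, so the same array (with spec
	 * filled in) is reused below when the rule is enqueued. */
	const struct rte_flow_pattern_template_attr pt_attr = {
		.relaxed_matching = 0,
		.transfer = 1,
	};
	const struct rte_flow_item_ethdev port_mask = { .port_id = 0xffff };
	const struct rte_flow_item_ethdev match_port = { .port_id = src_port_id };
	const struct rte_flow_item rule_items[] = {
		{ .type = RTE_FLOW_ITEM_TYPE_REPRESENTED_PORT,
		  .spec = &match_port, .mask = &port_mask },
		{ .type = RTE_FLOW_ITEM_TYPE_ETH },
		{ .type = RTE_FLOW_ITEM_TYPE_IPV6 },
		{ .type = RTE_FLOW_ITEM_TYPE_IPV4 },
		{ .type = RTE_FLOW_ITEM_TYPE_ICMP },
		{ .type = RTE_FLOW_ITEM_TYPE_END },
	};
	struct rte_flow_pattern_template *pt =
		rte_flow_pattern_template_create(proxy_port, &pt_attr, rule_items, &err);

	/* "set raw_decap/raw_encap" + "flow actions_template": strip the outer
	 * eth+ipv6 headers and prepend a fresh Ethernet header (placeholder
	 * buffer contents here). */
	static uint8_t decap_buf[sizeof(struct rte_ether_hdr) + sizeof(struct rte_ipv6_hdr)];
	static uint8_t encap_buf[sizeof(struct rte_ether_hdr)];
	const struct rte_flow_action_raw_decap decap = { .data = decap_buf, .size = sizeof(decap_buf) };
	const struct rte_flow_action_raw_encap encap = { .data = encap_buf, .size = sizeof(encap_buf) };
	const struct rte_flow_action_ethdev fwd_port = { .port_id = dst_port_id };
	const struct rte_flow_action rule_acts[] = {
		{ .type = RTE_FLOW_ACTION_TYPE_RAW_DECAP, .conf = &decap },
		{ .type = RTE_FLOW_ACTION_TYPE_RAW_ENCAP, .conf = &encap },
		{ .type = RTE_FLOW_ACTION_TYPE_REPRESENTED_PORT, .conf = &fwd_port },
		{ .type = RTE_FLOW_ACTION_TYPE_END },
	};
	const struct rte_flow_actions_template_attr at_attr = { .transfer = 1 };
	/* Passing the same list as its own mask marks all confs as fixed. */
	struct rte_flow_actions_template *at =
		rte_flow_actions_template_create(proxy_port, &at_attr,
						 rule_acts, rule_acts, &err);

	/* "flow template_table": group 0, room for 8 rules (testpmd's
	 * wire_orig maps to the table attribute's specialization, omitted). */
	const struct rte_flow_template_table_attr tbl_attr = {
		.flow_attr = { .group = 0, .priority = 0, .transfer = 1 },
		.nb_flows = 8,
	};
	struct rte_flow_template_table *tbl =
		rte_flow_template_table_create(proxy_port, &tbl_attr,
					       &pt, 1, &at, 1, &err);

	/* "flow queue ... create" + "flow push": enqueue one rule on queue 0
	 * and push it to the hardware. */
	const struct rte_flow_op_attr op_attr = { .postpone = 0 };
	struct rte_flow *flow = rte_flow_async_create(proxy_port, 0, &op_attr, tbl,
						      rule_items, 0, rule_acts, 0,
						      NULL, &err);
	rte_flow_push(proxy_port, 0, &err);
	return flow;
}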
Once I remove the matching patterns for the inner packet headers (ipv4 / icmp), as follows, I can see the processed packets inside the VMs using tcpdump.
<Working test-pmd commands>
…
flow pattern_template 0 create transfer relaxed no pattern_template_id 10 template represented_port ethdev_port_id is 0 / eth / ipv6 / end
…
flow queue 0 create 0 template_table 5 pattern_template 0 actions_template 0 postpone no pattern represented_port ethdev_port_id is 0 / eth / ipv6 / end actions raw_decap index 0 / raw_encap index 0 / represented_port ethdev_port_id 3 / end
…
</Working test-pmd commands>
A similar combination works when using the synchronous rte_flow API; a rough sketch of the equivalent call follows below. Any comment or suggestion on this issue is much appreciated. Many thanks in advance.
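For comparison, a minimal sketch of what the working synchronous call presumably looks like, reusing the rule_items/rule_acts arrays from the async sketch above (illustrative only). Note that here any failure is reported immediately through rte_flow_error, whereas in the async path it only surfaces when the queue is pulled:

	/* Synchronous install: no flow queues, templates, or push needed. */
	const struct rte_flow_attr attr = { .group = 0, .priority = 0, .transfer = 1 };
	struct rte_flow_error err;
	struct rte_flow *flow = rte_flow_create(proxy_port, &attr, rule_items, rule_acts, &err);
	if (flow == NULL)
		printf("rte_flow_create: %s\n", err.message ? err.message : "(no message)");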
Best regards,
Tao