From tao.li06 at sap.com  Wed Nov 5 12:22:54 2025
From: tao.li06 at sap.com (Li, Tao)
Date: Wed, 5 Nov 2025 11:22:54 +0000
Subject: Failed to install synchronous rte flow rules to match IP in IP packets since DPDK 24.11.2

Hello,

Issue recap from the last mail: Since DPDK 24.11.2, installing synchronous rte flow rules to match IP in IP packets (packets with the pattern eth / ipv6 / ipv4 or eth / ipv4 / ipv4) no longer works.

Recently I took a closer look at this issue and the related commit [1], and found that the attached change enables IPinIP header matching in synchronous mode regardless of whether this pattern appears in the outer or the inner header. The reasoning is as follows. Before the change proposed in [1], the condition `l3_tunnel_detection == l3_tunnel_inner` adds one extra flag (MLX5_FLOW_LAYER_IPIP or MLX5_FLOW_LAYER_IPV6_ENCAP) to `item_flags` if this pattern exists as part of the inner header. This extra flag is later used by `mlx5_flow_validate_item_ipv6` or `mlx5_flow_dv_validate_item_ipv4` to reject IPinIP encapsulation inside another tunnel. To explicitly allow both inner and outer IPinIP matching, the more direct way is simply to remove the statement that adds this "trick" flag for the inner-header case, so that the pattern passes the validation check. It also seems more reasonable to set `tunnel` to 1 if an inner L3 tunnel is detected.

With this change, it is possible to run the following commands without the error mentioned in the previous email:

```
sudo ./dpdk-testpmd -a 0000:3b:00.0,class=rxq_cqe_comp_en=0,rx_vec_en=1,representor=pf[0]vf[0-3] -a 0000:3b:00.1,class=rxq_cqe_comp_en=0,rx_vec_en=1 -- -i --rxq=1 --txq=1 --flow-isolate-all

flow create 0 ingress pattern eth / ipv6 proto is 0x0004 / end actions queue index 0 / end
or
flow create 0 ingress pattern eth / ipv4 / udp / vxlan / eth / ipv6 proto is 0x0004 / end actions queue index 0 / end
```

I would appreciate more expert feedback on this finding. Thanks.

[1] https://github.com/DPDK/dpdk-stable/commit/116949c7a7b780f147613068cbbd6257e6053654

diff --git a/drivers/net/mlx5/mlx5_flow_dv.c b/drivers/net/mlx5/mlx5_flow_dv.c
index 7b9e501..29c919a 100644
--- a/drivers/net/mlx5/mlx5_flow_dv.c
+++ b/drivers/net/mlx5/mlx5_flow_dv.c
@@ -7930,8 +7930,7 @@ flow_dv_validate(struct rte_eth_dev *dev, const struct rte_flow_attr *attr,
 		/*
 		 * explicitly allow inner IPIP match
 		 */
-		if (l3_tunnel_detection == l3_tunnel_outer) {
-			item_flags |= l3_tunnel_flag;
+		if (l3_tunnel_detection == l3_tunnel_inner) {
 			tunnel = 1;
 		}
 		ret = mlx5_flow_dv_validate_item_ipv4(dev, items,
@@ -7957,8 +7956,7 @@ flow_dv_validate(struct rte_eth_dev *dev, const struct rte_flow_attr *attr,
 		/*
 		 * explicitly allow inner IPIP match
 		 */
-		if (l3_tunnel_detection == l3_tunnel_outer) {
-			item_flags |= l3_tunnel_flag;
+		if (l3_tunnel_detection == l3_tunnel_inner) {
 			tunnel = 1;
 		}
 		ret = mlx5_flow_validate_item_ipv6(dev, items,

Best regards,
Tao Li
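For anyone who wants to try the proposed change locally, a minimal sketch of applying it on top of a dpdk-stable 24.11 checkout and rebuilding testpmd might look as follows. The patch file name is a placeholder and the branch name assumes the usual dpdk-stable naming; adjust both to your environment.

```
# clone the stable tree (GitHub mirror referenced in [1]) and check out the 24.11 branch
git clone https://github.com/DPDK/dpdk-stable
cd dpdk-stable
git checkout 24.11

# apply the change from the mail above (the file name is hypothetical)
git apply mlx5-allow-outer-ipip-match.patch

# standard meson/ninja build; testpmd ends up under build/app/
meson setup build
ninja -C build
sudo ./build/app/dpdk-testpmd ...
```

The reproduction commands above can then be re-run to confirm that the "multiple tunnel not supported" error is gone.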

From: Li, Tao
Date: Wednesday, 28. May 2025 at 10:04
To: users
Subject: Failed to install synchronous rte flow rules to match IP in IP packets since DPDK 24.11.2

Hello All,

We are running software components that use synchronous rte flow rules to match IP in IP packets. That means we are matching packets with the pattern eth / ipv6 / ipv4 using rte flow rules. This approach worked until DPDK 24.11.2. After investigation, we discovered that this patch commit [1] breaks matching of the above-described header pattern, as it seems to consider IP in IP tunneling only when it coexists with VXLAN encapsulation.

To reproduce the error, the following testpmd commands can be used:

```
sudo ./dpdk-testpmd -a 0000:3b:00.0,class=rxq_cqe_comp_en=0,rx_vec_en=1,representor=pf[0]vf[0-3] -a 0000:3b:00.1,class=rxq_cqe_comp_en=0,rx_vec_en=1 -- -i --rxq=1 --txq=1 --flow-isolate-all

flow create 0 ingress pattern eth / ipv6 proto is 0x0004 / end actions queue index 0 / end
or
flow create 0 ingress pattern eth / ipv4 proto is 0x0004 / end actions queue index 0 / end
```

and the following error will be emitted:

```
port_flow_complain(): Caught PMD error type 13 (specific pattern item): cause: 0x7ffc9943af78, multiple tunnel not supported: Invalid argument
```

It would be appreciated to know whether this is intended behavior or a negative side effect of the mentioned DPDK patch commit. Would it be possible to support IP in IP encapsulation in the outer headers again?

[1] https://github.com/DPDK/dpdk-stable/commit/116949c7a7b780f147613068cbbd6257e6053654

Best regards,
Tao Li

From getelson at nvidia.com  Wed Nov 5 12:50:02 2025
From: getelson at nvidia.com (Gregory Etelson)
Date: Wed, 5 Nov 2025 11:50:02 +0000
Subject: Failed to install synchronous rte flow rules to match IP in IP packets since DPDK 24.11.2

Hello,

That is a known issue in the MLX5 PMD. We are working on a fix.

Regards,
Gregory

________________________________
From: Li, Tao
Sent: Wednesday, November 5, 2025 13:22
To: users
Cc: Gregory Etelson; Dariusz Sosnowski
Subject: Re: Failed to install synchronous rte flow rules to match IP in IP packets since DPDK 24.11.2

[quoted text of the previous message trimmed]
From thenveer.poolakkanni at iwave-global.com  Thu Nov 13 14:22:38 2025
From: thenveer.poolakkanni at iwave-global.com (Thenveer Poolakkanni)
Date: Thu, 13 Nov 2025 13:22:38 +0000
Subject: Testing DPDK with Mellanox (MLX) Driver on Windows 10

Hi team,

I've set up DPDK on Windows 10 following the official guide:
https://doc.dpdk.org/guides-25.07/windows_gsg/index.html

The setup works fine, but I need to test DPDK with a Mellanox (MLX) driver. Could you please confirm:

1. Is the Mellanox (MLX5) PMD supported on Windows 10?
2. If yes, what are the steps to bind the NIC and run DPDK applications?
3. Are there any specific WinOF-2 or driver version requirements?

Kindly provide your guidance on these points. Thank you for your time and support.

Regards,
Thenveer

From thenveer.poolakkanni at iwave-global.com  Fri Nov 14 11:12:52 2025
From: thenveer.poolakkanni at iwave-global.com (Thenveer Poolakkanni)
Date: Fri, 14 Nov 2025 10:12:52 +0000
Subject: Assistance with DPDK on Windows
In-Reply-To: <2455856.MHSsGVy7CF@thomas>
References: <85f9def5-7f88-47d1-9dc8-524b060c54e5@gmail.com> <2455856.MHSsGVy7CF@thomas>

Hi Team,

I have successfully compiled and built DPDK on a Windows system and am using a custom driver in conjunction with DPDK. I am able to write to the registers, and the driver can also read from them. Additionally, I have successfully loaded the netuio driver onto my network interfaces (01:00.0 and 01:00.1).

However, when running the DPDK application, I encounter the following error: "Invalid memory, No probed Ethernet devices." The issue is detailed further in the attached screenshot.

[Screenshot attachment: image.png, not available in the archive]

Could you please assist in identifying the cause of this error and advise on potential steps to resolve it? Thank you in advance for your support.

Best regards,
Thenveer

________________________________
From: Thomas Monjalon
Sent: Wednesday, November 5, 2025 3:55 PM
To: andremue at linux.microsoft.com; Dmitry Kozlyuk
Cc: Thenveer Poolakkanni; dev at dpdk.org; Chaturbhuja Nath Prabhu; Ayshathul Thuhara
Subject: Re: Assistance with DPDK on Windows

05/11/2025 08:36, Dmitry Kozlyuk:
> Hi Thenveer,
>
> On 11/5/25 10:18, Thenveer Poolakkanni wrote:
> > 1. Is it possible to execute or use dpdk-devbind.py on Windows to
> > verify the device binding status?
>
> No. Please use Device Manager for now.
>
> > 2. On Windows, the build generates .dll files (e.g., rte_net_driver.dll)
> > instead of .so files (e.g., librte_net_driver.so in Linux). How can we
> > use or execute these .dll files in the same way we use .so files when
> > running DPDK applications?
>
> The application must link the needed libraries and PMDs when building.
> These files must be in PATH or in the working directory when running the
> application. Loading additional PMDs with the "-d" EAL option is not yet
> implemented. If you have a custom PMD, you have to build it as a part of
> DPDK and link it to your application.
>
> P.S. Questions like these belong to users at dpdk.org. Keeping in dev@ to
> avoid breaking threads.

This question probably shows a lack of documentation.
Please, can we have this gap solved with a patch?
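A small standalone check can help separate the two failure modes mentioned in the reply above (required PMD not linked into the application versus device not probed by any PMD). The sketch below uses only the public EAL/ethdev API and assumes it is built and linked against the same DPDK build as the application; it is illustrative and not part of the original thread.

```c
/* probe_check.c - print how many Ethernet devices EAL probed.
 * Illustrative sketch: build it as part of the DPDK tree (or link it
 * against the same DPDK libraries/DLLs), since PMDs must be linked in
 * at build time on Windows (the -d EAL option is not implemented there).
 */
#include <stdio.h>

#include <rte_eal.h>
#include <rte_errno.h>
#include <rte_ethdev.h>

int
main(int argc, char **argv)
{
	if (rte_eal_init(argc, argv) < 0) {
		fprintf(stderr, "EAL init failed: %s\n", rte_strerror(rte_errno));
		return 1;
	}

	uint16_t nb_ports = rte_eth_dev_count_avail();
	printf("Probed Ethernet devices: %u\n", nb_ports);
	if (nb_ports == 0)
		printf("No PMD claimed the netuio-bound devices; "
		       "check that the matching PMD is linked into this binary.\n");

	rte_eal_cleanup();
	return 0;
}
```

If this prints 0 even though netuio is bound to 01:00.0 and 01:00.1, the most likely cause, per the reply above, is that the PMD for that NIC was not linked into the executable at build time.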
From wantmac.bingbing91 at gmail.com  Fri Nov 21 15:17:44 2025
From: wantmac.bingbing91 at gmail.com (Wan Bingbing)
Date: Fri, 21 Nov 2025 22:17:44 +0800
Subject: Fwd: Question: PMD behaviour without hugepages in DPDK 23.07 (XDP PMD in Kubernetes)

---------- Forwarded message ---------
From: Wan Bingbing
Date: Fri, 21 Nov 2025 at 22:15
Subject: Question: PMD behaviour without hugepages in DPDK 23.07 (XDP PMD in Kubernetes)
Cc: chenxiemin at gmail.com

Dear DPDK team,

We are currently deploying an application that uses Seastar and DPDK (version 23.07) with the XDP PMD inside a Kubernetes environment. Due to infrastructure restrictions, we cannot allocate hugepages on the Kubernetes nodes.

In this environment, DPDK fails to initialize the XDP PMD because hugepage-backed memory cannot be created. Although the EAL is not started with the --no-huge parameter, no hugepages are available to DPDK, which prevents the PMD from sending or receiving packets.

We are aware of the known issue documented in the DPDK release notes ("PMD does not work with --no-huge EAL command line parameter"), which explains that PMDs rely on hugepage-backed memory because DPDK does not store the necessary physical/IOVA information for memory allocated via malloc/mmap.

Given this dependency, we have the following questions:

1. Is there a currently supported method to run a hardware PMD (specifically the XDP PMD) without hugepages, or is hugepage-backed memory strictly required for all hardware PMDs?
2. Are there any ongoing efforts or future plans to support PMD operation in a non-hugepage mode?
3. If this limitation is architectural and not planned to be addressed, could you please confirm this? This confirmation would allow us to evaluate alternative approaches, such as using AF_XDP without DPDK or relying on the Seastar native stack.

Any guidance or recommendations from the maintainers on how to proceed would be greatly appreciated. Thank you for your time.

Best regards,
Wan Bingbing

From madhukar.mythri at gmail.com  Wed Nov 26 11:13:53 2025
From: madhukar.mythri at gmail.com (madhukar mythri)
Date: Wed, 26 Nov 2025 15:43:53 +0530
Subject: Hot-plug issue with Netvsc PMD on Azure

Hi,

On Azure cloud, a DPDK application with the Netvsc PMD works well on a VM with an SR-IOV VF device (Accelerated Networking enabled).

However, as part of hot-plug testing, when we removed the SR-IOV VF device (by disabling the Accelerated Networking option), all traffic continued to flow correctly on the synthetic (netvsc) device. But when we added the SR-IOV VF device back (by enabling the Accelerated Networking option), the traffic still flows only on the synthetic device and not on the SR-IOV VF.

We can see in the DPDK logs that the SR-IOV VF was added and detected by the Netvsc PMD. But Rx/Tx traffic does not flow on the VF device, and thus performance is low. Currently, we need to reboot the VM to get the traffic onto the SR-IOV VF device.

Has anyone faced a similar issue? Is this the expected behaviour, or a known limitation of the Netvsc PMD?

Thanks,
Madhuker.
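If the netvsc hot-plug scenario above can be reproduced with testpmd, one way to see whether the re-added VF actually carries traffic is to compare the port counters before and after re-enabling Accelerated Networking. A hedged sketch using standard testpmd commands follows; port 0 is an assumption, and the exact set of extended counters exposed for the VF path depends on the PMD and DPDK version.

```
testpmd> show port summary all
testpmd> show port info 0
testpmd> clear port xstats 0
(generate traffic, then)
testpmd> show port xstats 0
```

If the extended counters show all Rx/Tx still accounted to the synthetic path after the VF reappears, that matches the behaviour described above and would be useful detail to attach to a bug report.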