mlx5: is GTP encapsulation possible using the rte_flow api?
Maayan Kashani
mkashani at nvidia.com
Mon Jul 22 07:43:47 CEST 2024
++ Gregory Etelson
Regards,
Maayan Kashani
> -----Original Message-----
> From: László Molnár <laszlo.molnar at ericsson.com>
> Sent: Wednesday, 8 May 2024 18:24
> To: users at dpdk.org
> Subject: mlx5: is GTP encapsulation possible using the rte_flow api?
>
> Hi All,
>
> I wonder whether it is possible to implement HW-accelerated GTP
> encapsulation (as a first step) using a BlueField-2 NIC and the
> rte_flow API?
>
> The encapsulation would need to work between different ports using hairpin
> queues.
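>
> For context, a two-port hairpin setup in testpmd can be sketched roughly
> as follows (the PCI addresses are placeholders, and the exact
> --hairpin-mode bit values should be verified against the testpmd user
> guide for the DPDK version in use):

```
dpdk-testpmd -a 0000:03:00.0 -a 0000:03:00.1 -- -i --rxq=2 --txq=2 --hairpinq=2 --hairpin-mode=0x12
```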
>
> Let's say I already have the rules in dpdk-testpmd that remove the original
> ETH header using raw_decap, and add the new ETH/IP/UDP/GTP using
> raw_encap.
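>
> As a point of reference, such rules look roughly like this in testpmd
> syntax (MAC/IP addresses, ports, TEID and queue index are placeholder
> values; item and field names should be checked against the testpmd flow
> command documentation):

```
set raw_decap 0 eth / end_set
set raw_encap 0 eth src is 02:00:00:00:00:01 dst is 02:00:00:00:00:02 / ipv4 src is 10.0.0.1 dst is 10.0.0.2 / udp src is 2152 dst is 2152 / gtp teid is 1234 / end_set
flow create 0 ingress pattern eth / ipv4 / end actions raw_decap index 0 / raw_encap index 0 / queue index 1 / end
```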
>
> Now I need to update some header fields of the new encapsulation (total
> length for IPv4, length for UDP and GTP). I would use "modify_field op
> add", but I found no way to access the length fields of the UDP and GTP
> headers.
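>
> Concretely, the attempted workaround looks something like this in
> testpmd syntax (src_value and width are placeholder values; the idea is
> to reach the UDP length field by offsetting 32 bits past the start of
> the source-port field):

```
flow create 0 ingress group 1 pattern eth / ipv4 / udp / end actions modify_field op add dst_type udp_port_src dst_level 0 dst_offset 32 src_type value src_value 002e width 16 / end
```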
>
> For example, when I try to access the UDP payload length field by using
> "dst_type udp_port_src dst_offset 32" in the "modify_field" action, I get a
> "destination offset is too big: Invalid argument" error.
>
> This seems to be caused by a check in the mlx5 driver, which is a bit surprising
> as the documentation in rte_flow.rst (DPDK version 24.03) says that:
>
> ``offset`` allows going past the specified packet field boundary to
> copy a field to an arbitrary place in a packet,
>
> Is this just a driver limitation or an HW limitation? Or could a flex item solve
> this?
>
> Thanks, Laszlo