[dpdk-dev] [PATCH v3 0/3] ethdev: add generic L2/L3 tunnel encapsulation actions

Adrien Mazarguil adrien.mazarguil at 6wind.com
Wed Oct 10 14:02:07 CEST 2018


Sorry if I'm a bit late to the discussion, please see below.

On Wed, Oct 10, 2018 at 09:00:52AM +0000, Ori Kam wrote:
<snip>
> > On 10/7/2018 1:57 PM, Ori Kam wrote:
> > This series implement the generic L2/L3 tunnel encapsulation actions
> > and is based on rfc [1] "add generic L2/L3 tunnel encapsulation actions"
> > 
> > Currently the encap/decap actions only support encapsulation
> > of VXLAN and NVGRE L2 packets (L2 encapsulation is where
> > the inner packet has a valid Ethernet header, while L3 encapsulation
> > is where the inner packet doesn't have an Ethernet header).
> > In addition, the parameter to the encap action is a list of rte_flow
> > items; this results in two extra translations, from the application
> > to the action and from the action to the NIC, which negatively
> > impacts insertion performance.

Not sure it's a valid concern since in this proposal, the PMD is still
expected to interpret the opaque buffer contents anyway, both for
validation and to convert them to its internal format.

Worse, it will require a packet parser to iterate over enclosed headers
instead of a list of convenient rte_flow_whatever objects. It won't be
faster without the convenience of pointers to properly aligned structures
that only contain relevant data fields.
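
For reference, this is roughly what the existing 18.08 VXLAN encap
interface hands to the PMD (field values elided for brevity):

 /* Outer headers as convenient, properly aligned structures. */
 struct rte_flow_item_eth outer_eth = { 0 };
 struct rte_flow_item_ipv4 outer_ipv4 = { 0 };
 struct rte_flow_item_udp outer_udp = { 0 };
 struct rte_flow_item_vxlan vxlan = { .vni = { 0x12, 0x34, 0x56 } };

 struct rte_flow_item encap_def[] = {
     { .type = RTE_FLOW_ITEM_TYPE_ETH, .spec = &outer_eth },
     { .type = RTE_FLOW_ITEM_TYPE_IPV4, .spec = &outer_ipv4 },
     { .type = RTE_FLOW_ITEM_TYPE_UDP, .spec = &outer_udp },
     { .type = RTE_FLOW_ITEM_TYPE_VXLAN, .spec = &vxlan },
     { .type = RTE_FLOW_ITEM_TYPE_END },
 };

 struct rte_flow_action_vxlan_encap encap_conf = {
     .definition = encap_def,
 };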

> > Looking forward, there is going to be a need to support many more
> > tunnel encapsulations, for example MPLSoGRE and MPLSoUDP.
> > Adding each new encapsulation will result in duplicated code.
> > For example, the code for handling NVGRE and VXLAN is exactly the
> > same, and each new tunnel will have the same exact structure.
> > 
> > This series introduces a generic encapsulation for L2 tunnel types
> > and a generic encapsulation for L3 tunnel types. In addition, the new
> > encapsulation commands use a raw buffer in order to save the
> > conversion time, both for the application and the PMD.

From a usability standpoint I'm not a fan of the current interface to
perform NVGRE/VXLAN encap, however this proposal adds another layer of
opaqueness in the name of making things more generic than rte_flow already
is.

Assuming they are not to be interpreted by PMDs, maybe there's a case for
prepending arbitrary buffers to outgoing traffic and removing them from
incoming traffic. However this feature should not be named "generic tunnel
encap/decap" as it's misleading.

Something like RTE_FLOW_ACTION_TYPE_HEADER_(PUSH|POP) would be more
appropriate. I think on the "pop" side, only the size would matter.
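
For instance, a minimal sketch of their conf structures (names and layout
are illustrative only, nothing like this exists in rte_flow today):

 /* Prepend an uninterpreted buffer to outgoing traffic. */
 struct rte_flow_action_header_push {
     const void *data; /* raw bytes, not parsed by the PMD */
     size_t size; /* number of bytes to prepend */
 };

 /* Strip bytes from the start of matched traffic. */
 struct rte_flow_action_header_pop {
     size_t size; /* number of bytes to remove */
 };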

Another problem is that you must not require actions to rely on specific
pattern content:

 [...]
 *
 * Decapsulate outer most tunnel from matched flow.
 *
 * The flow pattern must have a valid tunnel header
 */
 RTE_FLOW_ACTION_TYPE_TUNNEL_DECAP,

For maximum flexibility, all actions should be usable on their own on empty
pattern. On the other hand, you can document undefined behavior when
performing some action on traffic that doesn't contain something.

The reason is that invalid traffic may have already been removed by other
flow rules (or by whatever else happens) before such a rule is reached;
it's the user's responsibility to provide that guarantee.

When parsing an action, a PMD is not supposed to look at the pattern. The
action list should contain all the needed info; otherwise it means the API
is badly defined.
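
To illustrate with the hypothetical pop action above: a rule can be fully
defined by its action list, even with an empty pattern, and the PMD never
has to peek at pattern items.

 /* 50 bytes = Ethernet + IPv4 + UDP + VXLAN overhead, for example. */
 struct rte_flow_action_header_pop pop = { .size = 50 };

 struct rte_flow_item pattern[] = {
     { .type = RTE_FLOW_ITEM_TYPE_END }, /* empty pattern, matches all */
 };
 struct rte_flow_action actions[] = {
     /* Hypothetical action type, see the sketch above. */
     { .type = RTE_FLOW_ACTION_TYPE_HEADER_POP, .conf = &pop },
     { .type = RTE_FLOW_ACTION_TYPE_END },
 };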

I'm aware the above makes it tough to implement something like
RTE_FLOW_ACTION_TYPE_TUNNEL_DECAP as defined in this series, but that's
unfortunately why I think it must not be defined like that.

My opinion is that the best generic approach to perform encap/decap with
rte_flow would use one dedicated action per protocol header to
add/remove/modify. This is the suggestion I originally made for
VXLAN/NVGRE [2] and this is one of the reasons the order of actions now
matters [3].

Remember that whatever is provided, be it an opaque buffer (like you did), a
separate list of items (like VXLAN/NVGRE) or the rte_flow action list itself
(what I'm suggesting to do), PMDs have to process it. There will be a CPU
cost. Keep in mind odd use cases that involve QinQinQinQinQ.

> > I like the idea to generalize encap/decap actions. It makes it a bit
> > harder for the reader to find which encap/decap actions are in fact
> > supported, but it changes nothing for automated usage in the code -
> > just try it (as a generic way used in rte_flow).
> > 
> 
> Even now the user doesn't know which encapsulations are supported since
> it is PMD- and sometimes kernel-related. On the other hand, it simplifies
> adding encapsulations for specific customers, sometimes with just a FW update.

Except for raw push/pop of uninterpreted headers, tunnel encapsulations not
explicitly supported by rte_flow shouldn't be possible. Who will expect
something that isn't defined by the API to work and rely on it in their
application? I don't see it happening.

Come on, adding new encap/decap actions to DPDK shouldn't be such a pain
that the only alternative is a generic API to work around me :)

> > Arguments about the way encap/decap headers are specified (flow items
> > vs raw) sound sensible, but I'm not sure about it.
> > It would be simpler if the tunnel header were appended or removed
> > as is, but as I understand it that is not true. For example, the IPv4
> > ID will be different in incoming packets to be decapsulated, and
> > different values should be used on encapsulation. Checksums will be
> > different (but offloaded in any case).
> > 
> 
> I'm not sure I understand your comment.
> Decapsulation is independent of encapsulation; for example, if we decap
> an L2 tunnel type then there is no parameter at all, the NIC just removes
> the outer layers.

According to the pattern? As described above, you can't rely on that. The
pattern does not necessarily match the full stack of outer layers.

Decap action must be able to determine what to do on its own, possibly in
conjunction with other actions in the list but that's all.

> > The current way allows specifying which fields do not matter and which
> > ones must match. It allows saying that, for example, a VNI match is
> > sufficient to decapsulate.
> > 
> 
> The encapsulation, according to the definition, is a list of headers that
> should encapsulate the packet, so I don't understand your comment about
> matching fields. The matching is based on the flow, and the encapsulation
> is just data that should be added on top of the packet.
> 
> > Also, the arguments assume that the action input is accepted as is by
> > the HW. It could be true, but it could obviously be false, and the HW
> > interface may require parsed input (i.e. the driver must parse the
> > input buffer and extract the required fields of packet headers).
> > 
> 
> You are correct, some PMDs, even Mellanox (for the E-Switch), require parsed
> input. There is no driver that knows the rte_flow structure, so in any case
> there must be some translation between the encapsulation data and the NIC
> data. I agree that writing the translation code can be harder in this
> approach, but the code is only written once and the insertion speed is much
> higher this way.

Avoiding code duplication is enough of a reason to do something. Yes, NVGRE
and VXLAN encap/decap should be redefined because of that. But IMO, they
should prepend a single VXLAN or NVGRE header and be followed by other
actions that in turn prepend a UDP header, an IPv4/IPv6 one, any number of
VLAN headers and finally an Ethernet header, as in the sketch below.
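
A sketch of the resulting action list (these per-header push action types
are purely hypothetical and do not exist in rte_flow; their conf structures
are assumed to mirror the existing rte_flow_item_* definitions):

 struct rte_flow_action actions[] = {
     /* Each action prepends one header, hence this order [3]. */
     { .type = RTE_FLOW_ACTION_TYPE_VXLAN_PUSH, .conf = &vxlan },
     { .type = RTE_FLOW_ACTION_TYPE_UDP_PUSH, .conf = &udp },
     { .type = RTE_FLOW_ACTION_TYPE_IPV4_PUSH, .conf = &ipv4 },
     { .type = RTE_FLOW_ACTION_TYPE_VLAN_PUSH, .conf = &vlan },
     { .type = RTE_FLOW_ACTION_TYPE_ETH_PUSH, .conf = &eth },
     { .type = RTE_FLOW_ACTION_TYPE_END },
 };

Supporting MPLSoUDP later would then only require a new MPLS push action
instead of a whole new tunnel-specific encap action.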

> Also, like I said, some virtual switches already store this data in a raw
> buffer (they update only specific fields), so this will also save time for
> the application when creating a rule.
> 
> > So, I'd say no. It should be better motivated if we change the existing
> > approach (even though it is advertised as experimental).
> 
> I think the reasons I gave are very good motivation to change the approach.
> Please also consider that there is no implementation yet that supports the
> old approach.

Well, although the existing API made this painful, I did submit one [4] and
there's an updated version from Slava [5] for mlx5.

> while we do have code that uses the new approach.

If you need the ability to prepend a raw buffer, please consider a different
name for the related actions, redefine them without reliance on specific
pattern items and leave NVGRE/VXLAN encap/decap as is for the time
being. They can be deprecated anytime without ABI impact.

On the other hand if that raw buffer is to be interpreted by the PMD for
more intelligent tunnel encap/decap handling, I do not agree with the
proposed approach for usability reasons.

[2] [PATCH v3 2/4] ethdev: Add tunnel encap/decap actions
    https://mails.dpdk.org/archives/dev/2018-April/096418.html

[3] ethdev: alter behavior of flow API actions
    https://git.dpdk.org/dpdk/commit/?id=cc17feb90413

[4] net/mlx5: add VXLAN encap support to switch flow rules
    https://mails.dpdk.org/archives/dev/2018-August/110598.html

[5] net/mlx5: e-switch VXLAN flow validation routine
    https://mails.dpdk.org/archives/dev/2018-October/113782.html

-- 
Adrien Mazarguil
6WIND

