[dpdk-dev] [PATCH v3 0/3] ethdev: add generic L2/L3 tunnel encapsulation actions

Ori Kam orika at mellanox.com
Wed Oct 10 15:17:01 CEST 2018


Hi
PSB.

> -----Original Message-----
> From: Adrien Mazarguil <adrien.mazarguil at 6wind.com>
> Sent: Wednesday, October 10, 2018 3:02 PM
> To: Ori Kam <orika at mellanox.com>
> Cc: Andrew Rybchenko <arybchenko at solarflare.com>; Ferruh Yigit
> <ferruh.yigit at intel.com>; stephen at networkplumber.org; Declan Doherty
> <declan.doherty at intel.com>; dev at dpdk.org; Dekel Peled
> <dekelp at mellanox.com>; Thomas Monjalon <thomas at monjalon.net>; Nélio
> Laranjeiro <nelio.laranjeiro at 6wind.com>; Yongseok Koh
> <yskoh at mellanox.com>; Shahaf Shuler <shahafs at mellanox.com>
> Subject: Re: [PATCH v3 0/3] ethdev: add generic L2/L3 tunnel encapsulation
> actions
> 
> Sorry if I'm a bit late to the discussion, please see below.
> 
> On Wed, Oct 10, 2018 at 09:00:52AM +0000, Ori Kam wrote:
> <snip>
> > > On 10/7/2018 1:57 PM, Ori Kam wrote:
> > > This series implement the generic L2/L3 tunnel encapsulation actions
> > > and is based on rfc [1] "add generic L2/L3 tunnel encapsulation actions"
> > >
> > > Currently the encap/decap actions only support encapsulation
> > > of VXLAN and NVGRE L2 packets (L2 encapsulation is where
> > > the inner packet has a valid Ethernet header, while L3 encapsulation
> > > is where the inner packet doesn't have an Ethernet header).
> > > In addition, the parameter to the encap action is a list of rte_flow items;
> > > this results in two extra translations, from the application to the action
> > > and from the action to the NIC. This has a negative impact on the
> > > insertion performance.
> 
> Not sure it's a valid concern since in this proposal, PMD is still expected
> to interpret the opaque buffer contents regardless for validation and to
> convert it to its internal format.
> 
This is the point of this action: we should assume
that the pattern is valid and not parse it at all.
Another issue: we get a lot of complaints about the time validation takes.
I know that currently we must validate the rule when creating it,
but this can change. Why revalidate a rule that was already validated, when the
only change is the destination IP of the encap data?
Virtual switches, after creating the first flow, just modify it, so why force
them into revalidating it? (But this issue is a different topic.)

> Worse, it will require a packet parser to iterate over enclosed headers
> instead of a list of convenient rte_flow_whatever objects. It won't be
> faster without the convenience of pointers to properly aligned structures
> that only contain relevant data fields.
>
Also, the rte_flow items are not aligned either, so there is no performance
difference between the two approaches. The rte_flow items actually contain
unused pointers, which are just a waste.
We also need to consider how applications use this. They already have the
headers in a raw buffer, so this saves the conversion time for the application.
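To make this concrete, here is a rough sketch of the difference from the
application side. The item-list part uses the existing rte_flow API; the
raw-buffer part assumes a conf struct with buf/size fields, which is only my
illustration and may differ from the patch:

#include <rte_flow.h>

/* Existing approach: RTE_FLOW_ACTION_TYPE_VXLAN_ENCAP takes a list of
 * rte_flow items that the PMD must walk and translate internally. */
struct rte_flow_item_eth eth;     /* outer Ethernet, filled elsewhere */
struct rte_flow_item_ipv4 ipv4;   /* outer IPv4 */
struct rte_flow_item_udp udp;     /* outer UDP, dst port 4789 */
struct rte_flow_item_vxlan vxlan; /* VNI */
struct rte_flow_item items[] = {
    { .type = RTE_FLOW_ITEM_TYPE_ETH, .spec = &eth },
    { .type = RTE_FLOW_ITEM_TYPE_IPV4, .spec = &ipv4 },
    { .type = RTE_FLOW_ITEM_TYPE_UDP, .spec = &udp },
    { .type = RTE_FLOW_ITEM_TYPE_VXLAN, .spec = &vxlan },
    { .type = RTE_FLOW_ITEM_TYPE_END },
};
struct rte_flow_action_vxlan_encap vxlan_encap = { .definition = items };

/* Proposed approach: the application hands over the outer headers it
 * already keeps as one contiguous raw buffer (illustrative conf struct). */
uint8_t outer_hdrs[50]; /* Ethernet(14) + IPv4(20) + UDP(8) + VXLAN(8) */
struct rte_flow_action_tunnel_encap encap = {
    .buf = outer_hdrs,
    .size = sizeof(outer_hdrs),
};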
 
> > > Looking forward, there is going to be a need to support many more tunnel
> > > encapsulations, for example MPLSoGRE and MPLSoUDP.
> > > Adding each new encapsulation will result in duplication of code.
> > > For example, the code for handling NVGRE and VXLAN is exactly the same,
> > > and each new tunnel will have the same exact structure.
> > >
> > > This series introduces a generic encapsulation for L2 tunnel types and a
> > > generic encapsulation for L3 tunnel types. In addition, the new
> > > encapsulation commands use a raw buffer in order to save the
> > > conversion time, both for the application and the PMD.
> 
> From a usability standpoint I'm not a fan of the current interface to
> perform NVGRE/VXLAN encap, however this proposal adds another layer of
> opaqueness in the name of making things more generic than rte_flow already
> is.
> 
I'm sorry, but I don't understand why it is more opaque. As I see it, it is very
simple: just give the encapsulation data and that's it. For example, on systems
that support a number of encapsulations, the application doesn't need to call a
different function just to change the buffer.

> Assuming they are not to be interpreted by PMDs, maybe there's a case for
> prepending arbitrary buffers to outgoing traffic and removing them from
> incoming traffic. However this feature should not be named "generic tunnel
> encap/decap" as it's misleading.
> 
> Something like RTE_FLOW_ACTION_TYPE_HEADER_(PUSH|POP) would be
> more
> appropriate. I think on the "pop" side, only the size would matter.
> 
Maybe the name can be changed, but again, the application is performing
encapsulation, so the current name is more intuitive for it.

> Another problem is that you must not require actions to rely on specific
> pattern content:
> 
I don't think this can be true anymore. For example, what do you expect
to happen when you apply a modify-IP action to a packet with no IP header?
This may raise issues in the NIC.
The same goes for decap: after the flow is in the NIC, it must assume that it can
remove the outer headers, otherwise really unexpected behavior can occur.

>  [...]
>  *
>  * Decapsulate outer most tunnel from matched flow.
>  *
>  * The flow pattern must have a valid tunnel header
>  */
>  RTE_FLOW_ACTION_TYPE_TUNNEL_DECAP,
> 
> For maximum flexibility, all actions should be usable on their own on empty
> pattern. On the other hand, you can document undefined behavior when
> performing some action on traffic that doesn't contain something.
> 

Like I said, and as is already defined for VXLAN_ENCAP, we must know
the pattern; otherwise the rule can be declined by the kernel, or crash when
trying to decap a packet without an outer tunnel.
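For example, a decap rule could look like this (a sketch: TUNNEL_DECAP is the
action proposed in this series, everything else is standard rte_flow):

/* The pattern carries a full outer VXLAN tunnel, so the PMD knows
 * exactly which headers the decap action will strip. */
static struct rte_flow *
vxlan_decap_rule(uint16_t port_id)
{
    struct rte_flow_attr attr = { .ingress = 1 };
    struct rte_flow_item pattern[] = {
        { .type = RTE_FLOW_ITEM_TYPE_ETH },
        { .type = RTE_FLOW_ITEM_TYPE_IPV4 },
        { .type = RTE_FLOW_ITEM_TYPE_UDP },
        { .type = RTE_FLOW_ITEM_TYPE_VXLAN }, /* valid outer tunnel header */
        { .type = RTE_FLOW_ITEM_TYPE_END },
    };
    struct rte_flow_action_queue queue = { .index = 0 };
    struct rte_flow_action actions[] = {
        { .type = RTE_FLOW_ACTION_TYPE_TUNNEL_DECAP }, /* L2 decap: no conf */
        { .type = RTE_FLOW_ACTION_TYPE_QUEUE, .conf = &queue },
        { .type = RTE_FLOW_ACTION_TYPE_END },
    };
    struct rte_flow_error error;

    return rte_flow_create(port_id, &attr, pattern, actions, &error);
}

Without the VXLAN item in the pattern, the PMD (or the kernel behind it)
cannot tell whether the packet has an outer tunnel to remove at all.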

> Reason is that invalid traffic may have already been removed by other flow
> rules (or whatever happens) before such a rule is reached; it's a user's
> responsibility to provide such guarantee.
> 
> When parsing an action, a PMD is not supposed to look at the pattern. Action
> list should contain all the needed info, otherwise it means the API is badly
> defined.
> 
> I'm aware the above makes it tough to implement something like
> RTE_FLOW_ACTION_TYPE_TUNNEL_DECAP as defined in this series, but that's
> unfortunately why I think it must not be defined like that.
> 
> My opinion is that the best generic approach to perform encap/decap with
> rte_flow would use one dedicated action per protocol header to
> add/remove/modify. This is the suggestion I originally made for
> VXLAN/NVGRE [2] and this is one of the reasons the order of actions now
> matters [3].
I agree that your approach makes a lot of sense, but there are a number of
issues with it (see the sketch after this list):
* It is harder and takes more time from the application's point of view.
* It is slower compared to the raw buffer.
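To illustrate the first point, a per-header encap might look roughly like this
from the application side (every *_HDR_PUSH name below is hypothetical, none
of them exists in rte_flow; each action prepends one header in front of the
previous one, per [2] and [3]):

/* Hypothetical per-header actions: four actions and four conf structs
 * to build, order-sensitive, and each one parsed by the PMD. */
struct rte_flow_action actions[] = {
    { .type = RTE_FLOW_ACTION_TYPE_VXLAN_HDR_PUSH, .conf = &vxlan_hdr },
    { .type = RTE_FLOW_ACTION_TYPE_UDP_HDR_PUSH,   .conf = &udp_hdr },
    { .type = RTE_FLOW_ACTION_TYPE_IPV4_HDR_PUSH,  .conf = &ipv4_hdr },
    { .type = RTE_FLOW_ACTION_TYPE_ETH_HDR_PUSH,   .conf = &eth_hdr },
    { .type = RTE_FLOW_ACTION_TYPE_END },
};

With the raw buffer, this collapses into a single TUNNEL_ENCAP action.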

> 
> Remember that whatever is provided, be it an opaque buffer (like you did), a
> separate list of items (like VXLAN/NVGRE) or the rte_flow action list itself
> (what I'm suggesting to do), PMDs have to process it. There will be a CPU
> cost. Keep in mind odd use cases that involve QinQinQinQinQ.
> 
> > > I like the idea to generalize encap/decap actions. It makes it a bit harder
> > > for the reader to find which encap/decap actions are in fact supported,
> > > but it changes nothing for automated usage in the code - just try it
> > > (as a generic way used in rte_flow).
> > >
> >
> > Even now the user doesn't know which encapsulations are supported, since
> > it is PMD and sometimes kernel related. On the other hand, it simplifies adding
> > encapsulations for specific customers, sometimes with just a FW update.
> 
> Except for raw push/pop of uninterpreted headers, tunnel encapsulations not
> explicitly supported by rte_flow shouldn't be possible. Who will expect
> something that isn't defined by the API to work and rely on it in their
> application? I don't see it happening.
> 
Some of our customers are working with private tunnel types, which they can
configure using the kernel or just a new FW. This is a real use case.

> Come on, adding new encap/decap actions to DPDK shouldn't be such a pain
> that the only alternative is a generic API to work around me :)
> 

Yes, but like I said, when a customer asks for an encap and I can give it to them, why wait for the next DPDK release?

> > > Arguments about the way of specifying encap/decap headers (flow items
> > > vs raw) sound sensible, but I'm not sure about it.
> > > It would be simpler if the tunnel header were appended or removed
> > > as is, but as I understand it that is not true. For example, the IPv4 ID will
> > > be different in incoming packets to be decapsulated, and different values
> > > should be used on encapsulation. Checksums will be different (but
> > > offloaded in any case).
> > >
> >
> > I'm not sure I understand your comment.
> > Decapsulation is independent of encapsulation; for example, if we decap
> > an L2 tunnel type then there is no parameter at all, the NIC just removes
> > the outer layers.
> 
> According to the pattern? As described above, you can't rely on that.
> Pattern does not necessarily match the full stack of outer layers.
> 
> Decap action must be able to determine what to do on its own, possibly in
> conjunction with other actions in the list but that's all.
> 
Decap removes the outer headers.
Some tunnels don't have an inner L2 header, so one must be added after the decap;
this is what L3 decap means, and the user must supply a valid L2 header.
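As a sketch of what the application supplies for L3 decap (the conf struct
and the TUNNEL_DECAP_L3 spelling below are my illustration of the cover
letter, not a settled API):

/* L3 decap: the NIC strips all outer layers including the tunnel header;
 * since the inner packet has no Ethernet header, the application supplies
 * the L2 header to prepend. */
uint8_t inner_l2[14] = {
    0xaa, 0xbb, 0xcc, 0xdd, 0xee, 0xff, /* inner dst MAC */
    0x11, 0x22, 0x33, 0x44, 0x55, 0x66, /* inner src MAC */
    0x08, 0x00,                         /* EtherType: IPv4 */
};
struct rte_flow_action_tunnel_decap_l3 decap_l3 = {
    .buf = inner_l2,
    .size = sizeof(inner_l2),
};
struct rte_flow_action actions[] = {
    { .type = RTE_FLOW_ACTION_TYPE_TUNNEL_DECAP_L3, .conf = &decap_l3 },
    { .type = RTE_FLOW_ACTION_TYPE_END },
};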

> > > Current way allows to specify which fields do not matter and which one
> > > must match. It allows to say that, for example, VNI match is sufficient
> > > to decapsulate.
> > >
> >
> > The encapsulation, according to the definition, is a list of headers that should
> > encapsulate the packet. So I don't understand your comment about matching
> > fields. The matching is based on the flow, and the encapsulation is just data
> > that should be added on top of the packet.
> >
> > > Also, the arguments assume that the action input is accepted as is by the HW.
> > > It could be true, but could be obviously false, and the HW interface may
> > > require parsed input (i.e. the driver must parse the input buffer and extract
> > > the required fields of the packet headers).
> > >
> >
> > You are correct, some PMDs, even Mellanox (for the E-Switch), require parsed
> > input.
> > There is no driver that knows the rte_flow structures, so in any case there
> > should be some translation between the encapsulation data and the NIC data.
> > I agree that writing the translation code can be harder in this approach,
> > but the code is only written once and the insertion speed is much higher this
> > way.
> 
> Avoiding code duplication is enough of a reason to do something. Yes, NVGRE and
> VXLAN encap/decap should be redefined because of that. But IMO, they should
> prepend a single VXLAN or NVGRE header and be followed by other actions that
> in turn prepend a UDP header, an IPv4/IPv6 one, any number of VLAN headers
> and finally an Ethernet header.
> 
> > Also, like I said, some virtual switches already store this data in a raw buffer
> > (they update only specific fields), so this will also save time for the
> > application when creating a rule.
> >
> > > So, I'd say no. It should be better motivated if we change existing
> > > approach (even advertised as experimental).
> >
> > I think the reasons I gave are very good motivation to change the approach
> > please also consider that there is no implementation yet that supports the
> > old approach.
> 
> Well, although the existing API made this painful, I did submit one [4] and
> there's an updated version from Slava [5] for mlx5.
> 
> > while we do have code that uses the new approach.
> 
> If you need the ability to prepend a raw buffer, please consider a different
> name for the related actions, redefine them without reliance on specific
> pattern items and leave NVGRE/VXLAN encap/decap as is for the time
> being. They can be deprecated anytime without ABI impact.
> 
> On the other hand if that raw buffer is to be interpreted by the PMD for
> more intelligent tunnel encap/decap handling, I do not agree with the
> proposed approach for usability reasons.
> 
> [2] [PATCH v3 2/4] ethdev: Add tunnel encap/decap actions
> 
> https://mails.dpdk.org/archives/dev/2018-April/096418.html
> 
> [3] ethdev: alter behavior of flow API actions
> 
> https://git.dpdk.org/dpdk/commit/?id=cc17feb90413
> 
> [4] net/mlx5: add VXLAN encap support to switch flow rules
> 
> https://mails.dpdk.org/archives/dev/2018-August/110598.html
> 
> [5] net/mlx5: e-switch VXLAN flow validation routine
> 
> https://mails.dpdk.org/archives/dev/2018-October/113782.html
> 
> --
> Adrien Mazarguil
> 6WIND

