[dpdk-dev] [PATCH v3 0/3] ethdev: add generic L2/L3 tunnel encapsulation actions

Ori Kam orika at mellanox.com
Thu Oct 11 10:48:05 CEST 2018


Hi Adrien,

Thanks for your comments; please see my answers below and inline.

Due to a very tight schedule, and the fact that more than 4 patch series
are based on this one, we need to close this discussion quickly.

As I see it, there are a number of options:
* The old approach, which neither of us likes, and which means creating
   a new command for every tunnel.
* My proposed suggestion as is, which is easier for at least a number of
   applications to implement and faster in most cases.
* My suggestion with a different name, but then we also need to find a name
   for the decap and another for decap_l3. This approach is also problematic
   since we would have 2 APIs doing the same thing. For example, when encapsulating
   VXLAN in testpmd, which API should we use?
* Combining my suggestion with the current one by replacing the raw
   buffer with a list of items. Less code duplication and easier validation (though I
   don't think we need to validate the encap data), but we lose insertion rate.
* Your suggestion of a list of actions where each action is one item. The main
   problems are speed, complexity on the application side, and implementation
   time (a concrete sketch of both models follows this list).
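To ground the comparison, here is a rough sketch of the two programming
models from the application side. The item-list half uses the existing
rte_flow VXLAN encap definitions; the raw-buffer structure is only
illustrative, its name and fields are placeholders rather than the exact
definitions from this patch set.

#include <stdint.h>
#include <rte_flow.h>

/* (a) Per-protocol item lists, as in the existing VXLAN encap action. */
static struct rte_flow_item_eth outer_eth;     /* outer MACs */
static struct rte_flow_item_ipv4 outer_ipv4;   /* outer addresses */
static struct rte_flow_item_udp outer_udp;     /* VXLAN port */
static struct rte_flow_item_vxlan outer_vxlan; /* VNI */
static struct rte_flow_item encap_items[] = {
    { .type = RTE_FLOW_ITEM_TYPE_ETH, .spec = &outer_eth },
    { .type = RTE_FLOW_ITEM_TYPE_IPV4, .spec = &outer_ipv4 },
    { .type = RTE_FLOW_ITEM_TYPE_UDP, .spec = &outer_udp },
    { .type = RTE_FLOW_ITEM_TYPE_VXLAN, .spec = &outer_vxlan },
    { .type = RTE_FLOW_ITEM_TYPE_END },
};
static struct rte_flow_action_vxlan_encap vxlan_encap = {
    .definition = encap_items,
};

/* (b) One generic action carrying the pre-built outer headers as raw
 * bytes (placeholder struct, not the exact definition from the patch). */
struct generic_tunnel_encap {
    uint8_t *buf;   /* Ethernet + IPv4 + UDP + VXLAN, ready to prepend */
    uint16_t size;  /* buffer length in bytes */
};
static uint8_t outer_headers[50]; /* filled once by the application */
static struct generic_tunnel_encap encap = {
    .buf = outer_headers,
    .size = sizeof(outer_headers),
};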

> -----Original Message-----
> From: Adrien Mazarguil <adrien.mazarguil at 6wind.com>
> Sent: Wednesday, October 10, 2018 7:10 PM
> To: Ori Kam <orika at mellanox.com>
> Cc: Andrew Rybchenko <arybchenko at solarflare.com>; Ferruh Yigit
> <ferruh.yigit at intel.com>; stephen at networkplumber.org; Declan Doherty
> <declan.doherty at intel.com>; dev at dpdk.org; Dekel Peled
> <dekelp at mellanox.com>; Thomas Monjalon <thomas at monjalon.net>; Nélio
> Laranjeiro <nelio.laranjeiro at 6wind.com>; Yongseok Koh
> <yskoh at mellanox.com>; Shahaf Shuler <shahafs at mellanox.com>
> Subject: Re: [PATCH v3 0/3] ethdev: add generic L2/L3 tunnel encapsulation
> actions
> 
> On Wed, Oct 10, 2018 at 01:17:01PM +0000, Ori Kam wrote:
> <snip>
> > > -----Original Message-----
> > > From: Adrien Mazarguil <adrien.mazarguil at 6wind.com>
> <snip>
> > > On Wed, Oct 10, 2018 at 09:00:52AM +0000, Ori Kam wrote:
> > > <snip>
> > > > > On 10/7/2018 1:57 PM, Ori Kam wrote:
> <snip>
> > > > > In addition, the parameter to the encap action is a list of rte_flow
> > > > > items; this results in two extra translations, from the application to
> > > > > the action and from the action to the NIC, which negatively impacts
> > > > > insertion performance.
> > >
> > > Not sure it's a valid concern since in this proposal, PMD is still expected
> > > to interpret the opaque buffer contents regardless for validation and to
> > > convert it to its internal format.
> > >
> > This is the action to take; we should assume
> > that the pattern is valid and not parse it at all.
> > Another issue: we have a lot of complaints about the time validation takes.
> > I know that currently we must validate the rule when creating it,
> > but this can change. Why revalidate a rule that was already validated
> > when the only change is the destination IP of the encap data?
> > Virtual switches, after creating the first flow, just keep modifying it,
> > so why force them into revalidating it? (But this issue is a different topic.)
> 
> Did you measure what proportion of time is spent on validation when creating
> a flow rule?
> 
> Based on past experience with mlx4/mlx5, creation used to involve a number
> of expensive system calls while validation was basically a single logic loop
> checking individual items/actions while performing conversion to HW
> format (mandatory for creation). Context switches related to kernel
> involvement are the true performance killers.
> 

I'm sorry to say I don't have the numbers, but I can tell you
that with the new API, in most cases there will be just one system call.
In addition, any extra time is wasted time; again, this is a request we got
from a number of customers.

> I'm not sure this is a valid argument in favor of this approach since flow
> rule validation still needs to happen regardless.
> 
> By the way, applications are not supposed to call rte_flow_validate() before
> rte_flow_create(). The former can be helpful in some cases (e.g. to get a
> rough idea of PMD capabilities during initialization) but they should in
> practice only rely on rte_flow_create(), then fall back to software
> processing if that fails.
> 
First, I don't think we need to validate the encapsulation data: if the data
is wrong, the packet will simply be dropped. Just like you are saying with the
restriction of the flow items, it is the responsibility of the application.
Also, as I said, there is demand from customers and there is no reason not to
do it, but in any case this is not relevant to the current patch.

> > > Worse, it will require a packet parser to iterate over enclosed headers
> > > instead of a list of convenient rte_flow_whatever objects. It won't be
> > > faster without the convenience of pointers to properly aligned structures
> > > that only contain relevant data fields.
> > >
> > Also, the rte_flow_item approach is not aligned either, so there is no
> > difference in performance between the two approaches. In rte_flow_item we
> > actually have unused pointers which are just a waste.
> 
> Regarding unused pointers: right, VXLAN/NVGRE encap actions shouldn't have
> relied on _pattern item_ structures, the room for their "last" pointer is
> arguably wasted. On the other hand, the "mask" pointer allows masking
> relevant fields that matter to the application (e.g. source/destination
> addresses as opposed to IPv4 length, version and other irrelevant fields for
> encap).
> 
At least according to my testing, the NIC can't use masks; it works based on
the offloads configured for any packet (like checksum).

> Not sure why you think it's not aligned. We're comparing an array of
> rte_flow_item objects with raw packet data. The latter requires
> interpretation of each protocol header to jump to the next offset. This is
> more complex on both sides: to build such a buffer for the application, then
> to have it processed by the PMD.
> 
Maybe I'm missing something, but in the buffer approach all the data will
likely be in the cache and, depending on how it is allocated, also aligned.
On the other hand, the rte_flow_item structures are not guaranteed to be in
the same cache line, so each access to an item may result in a cache miss.
Also, accessing individual members is no different from accessing them in a
raw buffer. (A sketch of the two layouts follows.)
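For reference, this is the item layout being compared, as defined in
rte_flow.h (comments paraphrased); each item carries three pointers to
separately allocated structures, while the alternative is a single
contiguous run of header bytes:

/* From rte_flow.h: every item in the list is a type plus three
 * pointers, and each pointed-to spec/last/mask structure may live in a
 * different cache line. */
struct rte_flow_item {
    enum rte_flow_item_type type; /* protocol this item describes */
    const void *spec;             /* values to use */
    const void *last;             /* range upper bound; unused for encap */
    const void *mask;             /* which spec fields are relevant */
};

/* The raw-buffer alternative: one contiguous allocation holding the
 * outer headers back to back. */
uint8_t encap_buf[50]; /* Ethernet + IPv4 + UDP + VXLAN */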

> > We also need to consider how applications are using it. They already have
> > it in a raw buffer, so it saves the conversion time for the application.
> 
> I don't think so. Applications typically know where some traffic is supposed
> to go and what VNI it should use. They don't have a prefabricated packet
> handy to prepend to outgoing traffic. If that was the case they'd most
> likely do so themselves through an extra packet segment and not bother with
> PMD offloads.
> 
Contrail vRouter has such a buffer, and it just changes the specific fields.
This is one of the things we want to offload; from my last check, OVS also
uses such a buffer (see the sketch below).
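As an illustration of that pattern, here is a minimal sketch of an
application keeping a prebuilt encap template and patching only the
per-flow fields. The offsets assume Ethernet/IPv4/UDP/VXLAN with no VLAN
tag; none of this is taken from the vRouter or OVS code.

#include <stdint.h>
#include <string.h>
#include <arpa/inet.h>

#define ENCAP_LEN 50                    /* 14 + 20 + 8 + 8 bytes */
#define IPV4_DST_OFF (14 + 16)          /* outer IPv4 destination */
#define VXLAN_VNI_OFF (14 + 20 + 8 + 4) /* 24-bit VNI field */

static uint8_t encap_template[ENCAP_LEN]; /* built once at startup */

/* Copy the template and patch the per-flow fields; 'out' can then be
 * handed to the raw-buffer encap action as-is. */
static void
encap_for_flow(uint8_t out[ENCAP_LEN], uint32_t dst_ip, uint32_t vni)
{
    memcpy(out, encap_template, ENCAP_LEN);
    dst_ip = htonl(dst_ip);
    memcpy(out + IPV4_DST_OFF, &dst_ip, sizeof(dst_ip));
    out[VXLAN_VNI_OFF + 0] = vni >> 16;
    out[VXLAN_VNI_OFF + 1] = vni >> 8;
    out[VXLAN_VNI_OFF + 2] = vni;
}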

> <snip>
> > > From a usability standpoint I'm not a fan of the current interface to
> > > perform NVGRE/VXLAN encap, however this proposal adds another layer of
> > > opaqueness in the name of making things more generic than rte_flow
> > > already is.
> > >
> > I'm sorry, but I don't understand why it is more opaque; as I see it, it is
> > very simple: just give the encapsulation data and that's it. For example,
> > on systems that support a number of encapsulations, applications don't need
> > to call a different function, just change the buffer.
> 
> I'm saying it's opaque from an API standpoint if you expect the PMD to
> interpret that buffer's contents in order to prepend it in a smart way.
> 
> Since this generic encap does not support masks, there is no way for an
> application to at least tell a PMD what data matters and what doesn't in the
> provided buffer. This means invalid checksums, lengths and so on must be
> sent as is to the wire. What's the use case for such a behavior?
> 
The NIC treats the packet as a normal packet that goes through all the normal
offloads.


> > > Assuming they are not to be interpreted by PMDs, maybe there's a case for
> > > prepending arbitrary buffers to outgoing traffic and removing them from
> > > incoming traffic. However this feature should not be named "generic tunnel
> > > encap/decap" as it's misleading.
> > >
> > > Something like RTE_FLOW_ACTION_TYPE_HEADER_(PUSH|POP) would be more
> > > appropriate. I think on the "pop" side, only the size would matter.
> > >
> > Maybe the name can be changed, but again, the application performs
> > encapsulation, so this name will be more intuitive for it.
> >
> > > Another problem is that you must not require actions to rely on specific
> > > pattern content:
> > >
> > I don't think this can be true anymore: for example, what do you expect to
> > happen when you apply an action that modifies the IP to a packet with no IP?
> >
> > This may raise issues in the NIC.
> > The same goes for decap: once the flow is in the NIC, it must assume that it
> > can remove the headers; otherwise really unexpected behavior can occur.
> 
> Right, that's why it must be documented as undefined behavior. The API is
> not supposed to enforce the relationship. A PMD may require the presence of
> some pattern item in order to perform some action, but this is a PMD
> limitation, not a limitation of the API itself.
> 

Agreed

> <snip>
> > > For maximum flexibility, all actions should be usable on their own on an
> > > empty pattern. On the other hand, you can document undefined behavior when
> > > performing some action on traffic that doesn't contain something.
> > >
> >
> > Like I said, and like it is already defined for VXLAN_ENCAP, we must know
> > the pattern; otherwise the rule can be declined in the kernel, or crash when
> > trying to decap a packet without an outer tunnel.
> 
> Right, PMD limitation then. You are free to document it in the PMD.
> 

Agreed

> <snip>
> > > My opinion is that the best generic approach to perform encap/decap with
> > > rte_flow would use one dedicated action per protocol header to
> > > add/remove/modify. This is the suggestion I originally made for
> > > VXLAN/NVGRE [2] and this is one of the reasons the order of actions now
> > > matters [3].
> >
> > I agree that your approach makes a lot of sense, but there are a number of
> > issues with it:
> > * It is harder and takes more time from the application's point of view.
> > * It is slower when compared to the raw buffer.
> 
> I'm convinced of the opposite :) We could try to implement your raw buffer
> approach as well as mine in testpmd (one action per layer, not the current
> VXLAN/NVGRE encap mess mind you) in order to determine which is the most
> convenient on the application side.
> 

There are 2 different implementations: one for testpmd and one for a normal
application. Writing the testpmd code with a raw buffer is simpler but less
flexible; writing the code in a real application, I think, is simpler with the
buffer approach, since applications already have such a buffer.

> <snip>
> > > Except for raw push/pop of uninterpreted headers, tunnel encapsulations
> > > not explicitly supported by rte_flow shouldn't be possible. Who will
> > > expect something that isn't defined by the API to work and rely on it in
> > > their application? I don't see it happening.
> > >
> > Some of our customers are working with private tunnel types, which they can
> > configure using the kernel or just a new FW; this is a real use case.
> 
> You can already use negative types to quickly address HW and
> customer-specific needs by the way. Could this [6] perhaps address the
> issue?
> 
> PMDs can expose public APIs. You could devise one that spits new negative
> item/action types based on some data, to be subsequently used by flow
> rules with that PMD only.
> 
> > > Come on, adding new encap/decap actions to DPDK shouldn't be such a pain
> > > that the only alternative is a generic API to work around me :)
> > >
> >
> > Yes, but like I said, when a customer asks for an encap and I can give it
> > to them, why wait for the next DPDK release?
> 
> I don't know, is rte_flow held to a special standard compared to other DPDK
> features in this regard? Engineering patches can always be provided,
> backported and whatnot.
> 
> Customer applications will have to be modified and recompiled to benefit
> from any new FW capabilities regardless, it's extremely unlikely to be just
> a matter of installing a new FW image.
> 

In some cases this is what happens 😊

> <snip>
> > > Pattern does not necessarily match the full stack of outer layers.
> > >
> > > Decap action must be able to determine what to do on its own, possibly in
> > > conjunction with other actions in the list but that's all.
> > >
> > Decap removes the outer headers.
> > Some tunnels don't have an inner L2 header, and it must be added after the
> > decap; this is what L3 decap means, and the user must supply a valid L2
> > header.
> 
> My point is that any data required to perform decap must be provided by the
> decap action itself, not through a pattern item, whose only purpose is to
> filter traffic and may not be present. Precisely what you did for L3 decap.
> 
Agreed, we will remove the limitation and just say that unpredictable results
may occur. (A sketch of the L3 decap idea follows.)
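For clarity, a small sketch of the L3 decap idea described above; the
action structure and its name are placeholders, not the exact API from
this patch set.

#include <stdint.h>

/* Hypothetical L3 decap action: the PMD strips the outer headers up to
 * and including the tunnel header, then prepends this
 * application-supplied L2 header, since the inner packet has none. */
struct tunnel_decap_l3 {    /* placeholder name */
    uint8_t *buf;           /* valid L2 header to prepend */
    uint16_t size;          /* its length in bytes */
};

/* Destination MAC, source MAC and EtherType, filled in by the app. */
static uint8_t inner_l2[14];

static struct tunnel_decap_l3 decap_l3 = {
    .buf = inner_l2,
    .size = sizeof(inner_l2),
};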

> <snip>
> > > > I think the reasons I gave are very good motivation to change the
> > > > approach. Please also consider that there is no implementation yet that
> > > > supports the old approach.
> > >
> > > Well, although the existing API made this painful, I did submit one [4] and
> > > there's an updated version from Slava [5] for mlx5.
> > >
> > > > while we do have code that uses the new approach.
> > >
> > > If you need the ability to prepend a raw buffer, please consider a different
> > > name for the related actions, redefine them without reliance on specific
> > > pattern items and leave NVGRE/VXLAN encap/decap as is for the time
> > > being. They can be deprecated anytime without ABI impact.
> > >
> > > On the other hand if that raw buffer is to be interpreted by the PMD for
> > > more intelligent tunnel encap/decap handling, I do not agree with the
> > > proposed approach for usability reasons.
> 
> I'm still not convinced by your approach. If these new actions *must* be
> included unmodified right now to prevent some customer cataclysm, then fine
> as an experiment but please leave VXLAN/NVGRE encaps alone for the time
> being.
> 
> > > [2] [PATCH v3 2/4] ethdev: Add tunnel encap/decap actions
> > >     https://mails.dpdk.org/archives/dev/2018-April/096418.html
> > >
> > > [3] ethdev: alter behavior of flow API actions
> > >     https://git.dpdk.org/dpdk/commit/?id=cc17feb90413
> > >
> > > [4] net/mlx5: add VXLAN encap support to switch flow rules
> > >     https://mails.dpdk.org/archives/dev/2018-August/110598.html
> > >
> > > [5] net/mlx5: e-switch VXLAN flow validation routine
> > >     https://mails.dpdk.org/archives/dev/2018-October/113782.html
> 
> [6] "9.2.9. Negative types"
> 
> https://emea01.safelinks.protection.outlook.com/?url=http%3A%2F%2Fdoc.dpdk
> .org%2Fguides-18.08%2Fprog_guide%2Frte_flow.html%23negative-
> types&data=02%7C01%7Corika%40mellanox.com%7C52a7b66d888f47a02
> fa308d62ecae971%7Ca652971c7d2e4d9ba6a4d149256f461b%7C0%7C0%7C63
> 6747846398519627&sdata=Rn1s5FgQB8pSgjLvs3K2M4rX%2BVbK5Txi59iy
> Q%2FbsUqQ%3D&reserved=0
> 
> On an unrelated note, is there a way to prevent Outlook from mangling URLs
> on your side? (those emea01.safelinks things)
> 
I will try to find a solution; I haven't found one so far.

> --
> Adrien Mazarguil
> 6WIND

