[dpdk-dev] [PATCH] add flow shared action API

Jerin Jacob jerinjacobk at gmail.com
Wed Jul 8 13:00:44 CEST 2020


On Wed, Jul 8, 2020 at 3:17 PM Ori Kam <orika at mellanox.com> wrote:
>
> Hi Jerin
>
> > -----Original Message-----
> > From: Jerin Jacob <jerinjacobk at gmail.com>
> >
> > On Wed, Jul 8, 2020 at 2:33 AM Ori Kam <orika at mellanox.com> wrote:
> > >
> > > Hi Jerin,
> >
> > Hi Ori,
> >
> > >
> > > > -----Original Message-----
> [..snip..]
>
> > > > > > I think a simple API change would be to accommodate both the
> > > > > > "rte_shared_ctx *ctx" and "rte_flow_action *action" modes
> > > > > > without introducing emulation for one mode or the other:
> > > > > >
> > > > > > enum rte_flow_action_update_type {
> > > > > >         RTE_FLOW_ACTION_UPDATE_TYPE_SHARED_ACTION,
> > > > > >         RTE_FLOW_ACTION_UPDATE_TYPE_ACTION,
> > > > > > };
> > > > > >
> > > > > > struct rte_flow_action_update_type_param {
> > > > > >         enum rte_flow_action_update_type type;
> > > > > >         union {
> > > > > >                 struct rte_flow_action_update_type_shared_action_param {
> > > > > >                         rte_shared_ctx *ctx;
> > > > > >                 } shared_action;
> > > > > >                 struct rte_flow_action_update_type_action_param {
> > > > > >                         rte_flow *flow;
> > > > > >                         rte_flow_action *action;
> > > > > >                 } action;
> > > > > >         };
> > > > > > };
> > > > > >
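(To make the either-or part concrete: an application-side sketch of the
proposal above. Untested; hw_has_shared_object, ctx, flow, new_action and
port_id are placeholders, and the type names are only what is proposed in
this thread, nothing merged yet.)

        struct rte_flow_error err;
        struct rte_flow_action_update_type_param param;

        if (hw_has_shared_object) {
                /* HW with a shared object: update through the shared ctx. */
                param.type = RTE_FLOW_ACTION_UPDATE_TYPE_SHARED_ACTION;
                param.shared_action.ctx = ctx;
        } else {
                /* HW without one: update one action of one existing flow. */
                param.type = RTE_FLOW_ACTION_UPDATE_TYPE_ACTION;
                param.action.flow = flow;
                param.action.action = new_action;
        }
        rte_flow_action_update(port_id, &param, &err);
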
> > > > > Thank you for the idea, but I fail to see how your suggested API is
> > > > > simpler than the one suggested by me.
> > > >
> > > > My thought process with the below-proposed API [1] is that it
> > > > dictates "_shared_action_" in the API name as well as in the
> > > > arguments. I thought of expressing it as an either-or case, hence
> > > > [2] seemed better to me: a PMD that does not support shared_action
> > > > does not even need to create one in order to use
> > > > rte_flow_action_update(), which avoids the confusion. Thoughts?
> > > >
> > > > [1]
> > > > rte_flow_shared_action_update(uint16_t port, rte_shared_ctx *ctx,
> > > >                               rte_flow_action *action, error)
> > > >
> > > > [2]
> > > > rte_flow_action_update(uint16_t port,
> > > >                        struct rte_flow_action_update_type_param *param,
> > > >                        error)
> > > >
> > > Let me see if I understand you correctly: your suggestion is to allow
> > > the application to change one action in one flow, but instead of creating
> > > the context, the application will just supply the rte_flow and the new
> > > actions. Do I understand correctly?
> >
> > Yes.
> >
> > >
> > > If so, it is a nice idea, but there are some issues with it:
> > > 1. The PMD must save the flow which will result in memory consumption.
> >
> > The struct rte_flow * driver handle stores that information anyway, as
> > it is needed for other rte_flow related APIs.
> >
> The driver, for example in the Mellanox case, saves only the pointer to the flow so it can
> destroy it on request. It can't remove it and add it again since it is missing critical
> info, and saving that info would cost memory. I assume this applies to other drivers too.
> There is no real need to save anything but the flow pointer.

If a driver _needs_ this feature, then the driver needs to store some
info in the flow object; in our case, it will be the index of the MCAM
action, etc. Yes, it is up to the driver how to support it.
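
For illustration only, a driver-side sketch; all names here are invented
for this mail, not taken from any existing PMD:

        /* Hypothetical per-flow driver state: alongside the usual handle,
         * the PMD keeps the HW indices needed to patch the action in place. */
        struct pmd_flow {
                uint32_t mcam_entry;   /* HW match (pattern) entry */
                uint32_t action_slot;  /* HW action entry, rewritable */
        };

        /* Updating only the action rewrites the action slot; the match entry
         * is untouched, so no destroy + create cycle is needed.
         * pmd_hw_write_action() is a made-up HW accessor for this sketch. */
        static int
        pmd_flow_action_update(struct pmd_flow *flow,
                               const struct rte_flow_action *action)
        {
                return pmd_hw_write_action(flow->action_slot, action);
        }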


>
> >
> > > 2. Assume that two flows are using the same RSS action, for example,
> > > so the PMD reuses the RSS object it created for the first flow also
> > > for the second. Now changing this RSS action may also change the
> > > second flow. (This can be solved by always creating a new action.)
> >
> > It is not reusing the action, it is more of updating the action, so the
> > above issue won't happen. It removes the need to call `rte_flow_destroy()`
> > and then `rte_flow_create()` when only the action needs to be updated
> > for THE given flow. That's it.
> >
> Again, this means that the driver must save all flows, so it will waste memory.
> Also, this doesn't save any time: since the application can just do it the same
> way as the PMD, there is no value in doing it inside the PMD.

Why would the driver need to save all the flows? We need to save all the
flows ONLY when we need to emulate the shared context. Otherwise we need
to save only additional info, such as the MCAM index where the action was
allocated, so that it can be replaced.
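
Roughly, the storage cost difference looks like this (hypothetical
structures, purely to illustrate the point):

        /* Emulating a shared context in SW: the ctx would have to track
         * every flow referencing it, so all of them can be re-programmed
         * on update. */
        struct pmd_shared_ctx_emulated {
                struct pmd_flow **flows;  /* grows with the number of flows */
                uint32_t nb_flows;
        };

        /* Plain per-flow action update: one HW index per flow is the only
         * extra state needed. */
        struct pmd_flow_extra {
                uint32_t action_slot;     /* e.g. where the MCAM action lives */
        };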


>
> >
> > > 3. It doesn't handle the main use case where the application wants to
> > > change a number of flows at the same time, which is the idea of this
> > > feature.
> >
> > We discussed this in detail and tried an approach where the common code
> > makes everything a shared action. Andrey quickly realized that it is
> > difficult without HW support.
> Like everything in rte_flow, this is all about HW support.
> If the HW doesn't support it, don't do it.
>
> >
> > >
> > > I also think that all PMDs that support option [2] can support option [1],
> > > since they can save in the ctx a list of flows and simply apply them again.
> > > So by definition, if a PMD supports [2] it also supports [1], while the
> > > other way around is not correct, since it forces the PMD to save flows,
> > > which, like I said, wastes memory.
> >
> > If we use "rte_flow_shared_action_update(uint16_t port,
> > rte_shared_ctx *ctx, rte_flow_action *action, error)",
> > what would the ctx value be for HW that does not support a shared
> > context? That is the only reason for my proposal. I understand your
> > concern about supporting two modes in a PMD; I don't think a PMD
> > needs to support RTE_FLOW_ACTION_UPDATE_TYPE_ACTION if
> > RTE_FLOW_ACTION_UPDATE_TYPE_SHARED_ACTION is supported.
> >
> This is the beauty of it: ctx is opaque, so the PMD can have whatever it wants to have,
> just like an rte_flow, where each PMD has different fields and usage.
>
> > >
> > > I suggest that we go with option [1], and if needed in the future we
> > > will update the code. Using option [2] will result in dead code, since
> > > at least for the current time no PMD will implement this API.
> >
> > We are planning to update our PMD to support this once the API is finalized.
> >
> Great, very happy to hear that.
> This means that we should push this feature even faster.
>
> > >
> > > I can suggest one more thing: maybe change the name from shared_ctx to
> > > just ctx, which implicitly means it can be shared but is not a must.
> > > What do you think? (But again, I think by definition if a PMD can
> > > implement [2], it can also apply it to a number of flows using API [2].)
> >
> > Just void * is fine too, but we need a type argument so that the
> > application and/or driver can cast it.
> >
> As said above, this is opaque, so the PMD knows what to expect.

On the driver side it is OKAY, but how can the application update it? The
PMD does not need, or does not have, a shared object.

Another way to ask: could you share the API call sequence, using
"rte_flow_shared_action_update(uint16_t port, rte_shared_ctx *ctx,
rte_flow_action *action, error)",

to enable support for the following category of HW, as I mentioned earlier:
- The HW has "pattern" and "action" mapped to different HW objects, and the
action can be updated at any time without destroy and create (a.k.a. it
does not have a shared HW object).
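
For such HW, the sequence we want to avoid vs. the proposed one would be
roughly as below (sketch only; attr/pattern/new_actions stand for whatever
the flow was created with):

        struct rte_flow_error err;

        /* Today, without an update API: a full destroy + create round trip,
         * even though this HW can patch the action object alone. */
        rte_flow_destroy(port_id, flow, &err);
        flow = rte_flow_create(port_id, &attr, pattern, new_actions, &err);

        /* With the proposed rte_flow_action_update() in ACTION mode: one
         * call, and no shared ctx has to exist for HW that has none. */
        struct rte_flow_action_update_type_param param = {
                .type = RTE_FLOW_ACTION_UPDATE_TYPE_ACTION,
                .action = { .flow = flow, .action = new_actions },
        };
        rte_flow_action_update(port_id, &param, &err);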


>
> >  enum rte_flow_action_update_type {
> >               RTE_FLOW_ACTION_UPDATE_TYPE_SHARED_ACTION,
> >               RTE_FLOW_ACTION_UPDATE_TYPE_ACTION,
> >  };
> >
>
> > >
> > > > > In my suggestion, the PMD simply needs to check the new action and
> > > > > change the context to that action, or just change parameters in the
> > > > > action if it is the same action.
> > > > >
> > > > > Let's go with the original patch API, modified to also support
> > > > > changing the action as you requested, based on my comments.
> > > > >
> > > > > > rte_flow_action_update(uint16_t port, struct
> > > > > > rte_flow_action_update_type_param *param, error)
> > > > > >
> > > > > > >
> > > [..snip..]
> > >
> > > Best,
> > > Ori

