[dpdk-dev] [PATCH v5 1/2] ethdev: add flow shared action API

Ori Kam orika at nvidia.com
Wed Oct 7 15:01:11 CEST 2020


Hi Andrey,

> -----Original Message-----
> From: dev <dev-bounces at dpdk.org> On Behalf Of Andrey Vesnovaty
> Sent: Wednesday, October 7, 2020 3:56 PM
> Subject: [dpdk-dev] [PATCH v5 1/2] ethdev: add flow shared action API
> 
> This commit introduces an extension of the DPDK flow action API that
> enables sharing of a single rte_flow_action among multiple flows. The
> API is intended for PMDs where multiple HW-offloaded flows can reuse the
> same HW essence/object representing a flow action, so that modifying
> that essence/object affects all the rules using it.
> 
> Motivation and example
> ===
> Adding or removing one or more queues to the RSS used by multiple flow
> rules imposes a per-rule toll with the current DPDK flow API; the
> scenario requires, for each flow sharing the cloned RSS action:
> - a call to `rte_flow_destroy()`
> - a call to `rte_flow_create()` with the modified RSS action
> 
> Benefits of an API for sharing an action and updating it in place:
> - reduce the overhead of multiple RSS flow rules reconfiguration
> - optimize resource utilization by sharing action across multiple
>   flows
> 
> Change description
> ===
> 
> Shared action
> ===
> In order to represent a flow action shared by multiple flows, a new
> action type RTE_FLOW_ACTION_TYPE_SHARED is introduced (see `enum
> rte_flow_action_type`). The introduced API decouples an action from any
> specific flow and enables sharing of a single action, by its handle,
> across multiple flows.
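> 
> As a minimal sketch of the new action type (the enumerator's exact
> position and documentation are whatever the patch defines):
> 
> enum rte_flow_action_type {
> 	/* ... existing action types ... */
> 	/**
> 	 * Use a pre-created shared action as the flow action; the conf
> 	 * field of struct rte_flow_action carries the handle returned
> 	 * by rte_flow_shared_action_create().
> 	 */
> 	RTE_FLOW_ACTION_TYPE_SHARED,
> };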
> 
> Shared action create/use/destroy
> ===
> A shared action may be in use by any number of flow rules, or by none,
> at any given moment; i.e. a shared action resides outside the context of
> any single flow. A shared action represents the HW resources/objects
> used to implement action offloading.
> The API for shared action create (see `rte_flow_shared_action_create()`)
> should:
> - allocate HW resources and perform the related initializations required
>   to implement the shared action;
> - make the preparations necessary to maintain shared access to the
>   action's resources, configuration and state.
> The API for shared action destroy (see `rte_flow_shared_action_destroy()`)
> should release the HW resources and perform the related cleanups.
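> 
> For orientation, a sketch of the create/destroy prototypes as implied by
> the usage in the "example" section below; the authoritative signatures
> are those in the patch:
> 
> struct rte_flow_shared_action *
> rte_flow_shared_action_create(uint16_t port_id,
> 			      const struct rte_flow_action *action,
> 			      struct rte_flow_error *error);
> int
> rte_flow_shared_action_destroy(uint16_t port_id,
> 			       struct rte_flow_shared_action *shared_action,
> 			       struct rte_flow_error *error);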
> 
> In order to share a flow action, reuse the handle of type
> `struct rte_flow_shared_action` returned by
> rte_flow_shared_action_create() as the `conf` field of
> `struct rte_flow_action` (see the "example" section).
> 
> Once a shared action is not used by any flow rule, all resources
> allocated for it can be released via rte_flow_shared_action_destroy()
> (see the "example" section). The shared action handle passed to the
> destroy API must not be used afterwards; the result of any further usage
> is undefined.
> 
> Shared action re-configuration
> ===
> Shared action behavior, as defined by its configuration, can be updated
> via rte_flow_shared_action_update() (see the "example" section). The
> update operation modifies the HW-related resources/objects allocated at
> action creation. The number of operations performed by an update should
> not depend on the number of flows sharing the action. Once the update
> API returns, the action must behave according to the updated
> configuration for all flows sharing it.
> 
> Shared action query
> ===
> Provide a separate API to query shared action state (see
> rte_flow_shared_action_query()). Taking a counter as an example: the
> query returns a value aggregating all counter increments across all flow
> rules sharing the counter. This API does not query the shared action
> configuration, since that is controlled by
> rte_flow_shared_action_create() and rte_flow_shared_action_update() and
> is not supposed to change by other means.
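> 
> The "example" section below does not exercise query, so here is a
> minimal sketch for a shared counter action. It assumes the query
> signature mirrors the other shared action calls and that counter data is
> reported via struct rte_flow_query_count; port_id, handle and error are
> as in the "example" section:
> 
> /* aggregate hits across all flow rules sharing the counter */
> struct rte_flow_query_count count = { .reset = 0 };
> if (rte_flow_shared_action_query(port_id, handle, &count, &error) == 0)
> 	printf("shared counter hits: %" PRIu64 "\n", count.hits);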
> 
> PMD support
> ===
> Support for the introduced API is a purely PMD-specific design and
> responsibility, per action type (see struct rte_flow_ops).
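> 
> As a sketch only, the rte_flow_ops extension could look as follows; the
> exact callback names and prototypes are those added by the patch, and a
> PMD leaving a callback NULL rejects the corresponding call:
> 
> struct rte_flow_ops {
> 	/* ... existing flow ops ... */
> 	struct rte_flow_shared_action *(*shared_action_create)
> 		(struct rte_eth_dev *dev,
> 		 const struct rte_flow_action *action,
> 		 struct rte_flow_error *error);
> 	int (*shared_action_destroy)
> 		(struct rte_eth_dev *dev,
> 		 struct rte_flow_shared_action *shared_action,
> 		 struct rte_flow_error *error);
> 	int (*shared_action_update)
> 		(struct rte_eth_dev *dev,
> 		 struct rte_flow_shared_action *shared_action,
> 		 const struct rte_flow_action *update,
> 		 struct rte_flow_error *error);
> 	int (*shared_action_query)
> 		(struct rte_eth_dev *dev,
> 		 const struct rte_flow_shared_action *shared_action,
> 		 void *data,
> 		 struct rte_flow_error *error);
> };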
> 
> testpmd
> ===
> In order to utilize the introduced API, the testpmd CLI may implement
> the following extensions to create/update/destroy/query shared actions:
> 
> flow shared_action (port) create {action_id (id)} (action) / end
> flow shared_action (port) update (id) (action) / end
> flow shared_action (port) destroy action_id (id) {action_id (id) [...]}
> flow shared_action (port) query (id)
> 
> testpmd example
> ===
> 
> configure RSS to queues 1 & 2
> 
> > flow shared_action 0 create action_id 100 rss queues 1 2 end / end
> 
> create a flow rule utilizing the shared action
> 
> > flow create 0 ingress \
>     pattern eth dst is 0c:42:a1:15:fd:ac / ipv6 / tcp / end \
>     actions shared 100 / end
> 
> add 2 more queues
> 
> > flow shared_action 0 update 100 rss queues 1 2 3 4 end / end
> 
> example
> ===
> 
> struct rte_flow_action actions[2];
> struct rte_flow_action action;
> struct rte_flow_error error;
> /* skipped: initialize action */
> struct rte_flow_shared_action *handle = rte_flow_shared_action_create(
> 					port_id, &action, &error);
> actions[0].type = RTE_FLOW_ACTION_TYPE_SHARED;
> actions[0].conf = handle;
> actions[1].type = RTE_FLOW_ACTION_TYPE_END;
> /* skipped: init attr0 & pattern0 args */
> struct rte_flow *flow0 = rte_flow_create(port_id, &attr0, pattern0,
> 					actions, &error);
> /* create more rules reusing shared action */
> struct rte_flow *flow1 = rte_flow_create(port_id, &attr1, pattern1,
> 					actions, &error);
> /* skipped: for flows 2 till N */
> struct rte_flow *flowN = rte_flow_create(port_id, &attrN, patternN,
> 					actions, &error);
> /* update shared action */
> struct rte_flow_action updated_action;
> /*
>  * skipped: initialize updated_action according to desired action
>  * configuration change
>  */
> rte_flow_shared_action_update(port_id, handle, &updated_action, &error);
> /*
>  * from now on, all flows 0 till N will act according to the
>  * configuration of updated_action
>  */
> /* skipped: destroy all flows 0 till N */
> rte_flow_shared_action_destroy(port_id, handle, &error);
> 
> Signed-off-by: Andrey Vesnovaty <andreyv at nvidia.com>
> ---
> 2.26.2


Acked-by: Ori Kam <orika at nvidia.com>
Thanks,
Ori


