[dpdk-dev] [PATCH v2 1/4] ethdev: allow negative values in flow rule types

Andrew Rybchenko arybchenko at solarflare.com
Thu Sep 17 17:18:01 CEST 2020


On 9/17/20 10:56 AM, Gregory Etelson wrote:
>> On 9/16/20 8:21 PM, Gregory Etelson wrote:
>>> From: Gregory Etelson
>>> Sent: Tuesday, September 15, 2020 13:27
>>> To: Andrew Rybchenko <arybchenko at solarflare.com>; Ajit Khaparde
>>> <ajit.khaparde at broadcom.com>
>>> Cc: dpdk-dev <dev at dpdk.org>; Matan Azrad <matan at nvidia.com>; Raslan
>>> Darawsheh <rasland at nvidia.com>; Ori Kam <orika at nvidia.com>; Gregory
>>> Etelson <getelson at mellanox.com>; Ori Kam <orika at mellanox.com>;
>>> NBU-Contact-Thomas Monjalon <thomas at monjalon.net>; Ferruh Yigit
>>> <ferruh.yigit at intel.com>
>>> Subject: RE: [dpdk-dev] [PATCH v2 1/4] ethdev: allow negative values
>>> in flow rule types
>>>
>>> On 9/15/20 7:36 AM, Ajit Khaparde wrote:
>>> On Tue, Sep 8, 2020 at 1:16 PM Gregory Etelson <getelson at nvidia.com> wrote:
>>> From: Gregory Etelson <getelson at mellanox.com>
>>>
>>> RTE flow items & actions use positive values in item & action type.
>>> Negative values are reserved for PMD private types. PMD items &
>>> actions usually are not exposed to application and are not used to
>>> create RTE flows.
>>>
>>> The patch gives applications that have access to PMD flow items & actions
>>> the ability to integrate them with RTE items & actions and use the
>>> combination to create flow rules.
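>>>
>>> For example (illustrative only - the negative type value, its conf and the
>>> helper below are hypothetical; only the RTE_FLOW_ACTION_TYPE_* names and
>>> rte_flow_create() are existing rte_flow API):
>>>
>>> #include <rte_flow.h>
>>>
>>> /* hypothetical PMD private action type: negative values are reserved
>>>  * for PMD private use and must be passed through by the RTE layer */
>>> #define PMD_PRIVATE_ACTION_EXAMPLE ((enum rte_flow_action_type)-1)
>>>
>>> static struct rte_flow *
>>> create_rule_with_private_action(uint16_t port_id,
>>>                                 const struct rte_flow_attr *attr,
>>>                                 const struct rte_flow_item pattern[],
>>>                                 const void *pmd_private_conf,
>>>                                 struct rte_flow_error *error)
>>> {
>>>         const struct rte_flow_action_queue queue = { .index = 0 };
>>>         const struct rte_flow_action actions[] = {
>>>                 /* PMD private element, obtained from the PMD itself */
>>>                 { .type = PMD_PRIVATE_ACTION_EXAMPLE,
>>>                   .conf = pmd_private_conf },
>>>                 /* regular RTE action */
>>>                 { .type = RTE_FLOW_ACTION_TYPE_QUEUE, .conf = &queue },
>>>                 { .type = RTE_FLOW_ACTION_TYPE_END },
>>>         };
>>>
>>>         /* with this change the generic ethdev layer forwards the negative
>>>          * type to the PMD instead of treating it as invalid */
>>>         return rte_flow_create(port_id, attr, pattern, actions, error);
>>> }
>>>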
>>> While we are reviewing this, some quick comments/questions..
>>>
>>> Doesn't this go against the above "PMD items & actions usually are not
>>> exposed to application and are not used to create RTE flows."?
>>> Why would an application try to use PMD specific private types?
>>> Isn't this contrary to having a standard API?
>>>
>>> +1
>>>
>>> I would like to clarify the purpose and use of the private elements patch.
>>> That patch is a prerequisite for the [PATCH v2 2/4] ethdev: tunnel offload
>>> model patch.
>>> The tunnel offload API provides a unified, hardware-independent model to
>>> offload tunneled packets, match on packet headers in hardware, and restore
>>> the outer headers of partially offloaded packets.
>>> The model implementation depends on hardware capabilities. For example,
>>> if hardware supports inner NAT, it can do NAT first and postpone decap to
>>> the end, while hardware that cannot do inner NAT must decap first and run
>>> the NAT actions afterwards. Such hardware has to save the outer header in
>>> some hardware context, register or memory, so the application can restore
>>> the packet later, if needed. Also, in this case the exact solution depends
>>> on the PMD because of the limited number of hardware contexts.
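>>>
>>> For illustration only - a purely hypothetical pair of action sequences
>>> showing the two orderings (the array names are made up, and the question
>>> of which header SET_IPV4_DST targets in strict rte_flow semantics is
>>> glossed over here):
>>>
>>> #include <rte_flow.h>
>>> #include <rte_ip.h>
>>>
>>> /* example value for the NAT-style rewrite */
>>> static const struct rte_flow_action_set_ipv4 new_inner_dst = {
>>>         .ipv4_addr = RTE_BE32(RTE_IPV4(10, 0, 0, 1)),
>>> };
>>>
>>> /* hardware with inner NAT: do the rewrite first, decap last */
>>> static const struct rte_flow_action nat_capable_hw[] = {
>>>         { .type = RTE_FLOW_ACTION_TYPE_SET_IPV4_DST, .conf = &new_inner_dst },
>>>         { .type = RTE_FLOW_ACTION_TYPE_VXLAN_DECAP },
>>>         { .type = RTE_FLOW_ACTION_TYPE_END },
>>> };
>>>
>>> /* hardware without inner NAT: decap first, then run the NAT action;
>>>  * the PMD must also save the outer header in some hardware context so
>>>  * the packet can be restored later if needed */
>>> static const struct rte_flow_action decap_first_hw[] = {
>>>         { .type = RTE_FLOW_ACTION_TYPE_VXLAN_DECAP },
>>>         { .type = RTE_FLOW_ACTION_TYPE_SET_IPV4_DST, .conf = &new_inner_dst },
>>>         { .type = RTE_FLOW_ACTION_TYPE_END },
>>> };
>>>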
>>> Although an application working with DPDK can implement all these
>>> requirements with the existing flow rules API, it would have to address
>>> each hardware's specifics separately.
>>> To solve this limitation we selected a design where the application queries
>>> the PMD for the actions, or items, that are optimal for the hardware that
>>> the PMD represents. The result can be a mixture of RTE and PMD private
>>> elements - that is up to the PMD implementation. The application passes
>>> these elements back to the PMD as a flow rule recipe that is already
>>> optimal for the underlying hardware. If the PMD put private elements into
>>> such rule items or actions, these private elements must not be rejected
>>> by the RTE layer.
>>>
>>> I hope this helps to show what the model is trying to achieve.
>>> Did that clarify your concerns?
>>
>> There is a very simple question which I can't answer after reading it:
>> why do these PMD-specific actions and items not bind the application to a
>> specific vendor? If they do, it should be clearly stated in the description.
>> If not, I'd like to understand why, since opaque actions/items are not really
>> well defined and are hardly portable across vendors.
> 
> The tunnel offload API does not bind the application to a vendor.
> One of the main goals of the model is to provide the application with a
> vendor/hardware-independent solution.
> The PMD transfers an array of items or actions to the application. The
> application passes that array back to the PMD as opaque data, in
> rte_flow_create(), without reviewing the array content. Therefore, if there
> are internal PMD actions in the array, they have no effect on the application.
> Consider the following application code example:
> 
> /* get PMD actions that implement tunnel offload */
> rte_tunnel_decap_set(&tunnel, &pmd_actions, pmd_actions_num, error);
> 
> /* compile an array of actions to create flow rule */
> memcpy(actions, pmd_actions, pmd_actions_num * sizeof(actions[0]));
> memcpy(actions + pmd_actions_num, app_actions, app_actions_num * sizeof(actions[0]));
> 
> /* create flow rule */
> rte_flow_create(port_id, attr, pattern, actions, error);
> 
> Vendor A provides pmd_actions_A = {va1, va2 … vaN}
> Vendor B provides pmd_actions_B = {vb1}
> Regardless of the pmd_actions content, the application code does not change.
> However, each PMD receives the exact, hardware-specific actions for tunnel offload.
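> 
> In more complete (but still schematic) form - the variable declarations, the
> struct rte_flow_tunnel name and the exact signature of rte_tunnel_decap_set()
> below are illustrative only, following the snippet above, not a final API
> definition:
> 
> #include <string.h>
> #include <rte_common.h>
> #include <rte_flow.h>
> 
> uint16_t port_id = 0;
> struct rte_flow_error error;
> struct rte_flow_attr attr = { .ingress = 1, .transfer = 1 };
> struct rte_flow_item pattern[] = {
>         /* ETH / IPV4 / UDP / VXLAN ... as usual */
>         { .type = RTE_FLOW_ITEM_TYPE_END },
> };
> struct rte_flow_tunnel tunnel = { 0 };  /* tunnel description, e.g. VXLAN */
> struct rte_flow_action *pmd_actions;
> uint32_t pmd_actions_num;
> 
> /* 1. get PMD actions that implement tunnel offload on this hardware;
>  *    the array may contain PMD private (negative) types */
> rte_tunnel_decap_set(&tunnel, &pmd_actions, &pmd_actions_num, &error);
> 
> /* 2. the application's own actions, e.g. jump to the next group */
> struct rte_flow_action_jump jump = { .group = 1 };
> struct rte_flow_action app_actions[] = {
>         { .type = RTE_FLOW_ACTION_TYPE_JUMP, .conf = &jump },
>         { .type = RTE_FLOW_ACTION_TYPE_END },
> };
> uint32_t app_actions_num = RTE_DIM(app_actions);
> 
> /* 3. PMD actions first, application actions after */
> struct rte_flow_action actions[pmd_actions_num + app_actions_num];
> memcpy(actions, pmd_actions, pmd_actions_num * sizeof(actions[0]));
> memcpy(actions + pmd_actions_num, app_actions,
>        app_actions_num * sizeof(actions[0]));
> 
> /* 4. create the flow rule; the pmd_actions content is never inspected
>  *    by the application, it is passed back to the PMD verbatim */
> struct rte_flow *flow =
>         rte_flow_create(port_id, &attr, pattern, actions, &error);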
> 

Many thanks for the explanations. I got it. I'll wait for the next version
to take a look at the code once again.

