[dpdk-dev] [PATCH v2 2/2] net/mlx5: add flow sync API

Ferruh Yigit ferruh.yigit at intel.com
Fri Oct 30 09:59:12 CET 2020


On 10/30/2020 5:37 AM, Bing Zhao wrote:
> Hi Ferruh,
> Thanks for your comments.
> PSB
> 
>> -----Original Message-----
>> From: Ferruh Yigit <ferruh.yigit at intel.com>
>> Sent: Friday, October 30, 2020 6:43 AM
>> To: Slava Ovsiienko <viacheslavo at nvidia.com>; Bing Zhao
>> <bingz at nvidia.com>; Matan Azrad <matan at nvidia.com>; Ori Kam
>> <orika at nvidia.com>
>> Cc: dev at dpdk.org; Raslan Darawsheh <rasland at nvidia.com>
>> Subject: Re: [dpdk-dev] [PATCH v2 2/2] net/mlx5: add flow sync API
>>
>> External email: Use caution opening links or attachments
>>
>>
>> On 10/27/2020 3:42 PM, Slava Ovsiienko wrote:
>>> Hi, Bing
>>>
>>> Release notes / mlx5 features documentation update?
>>> Beside this:
>>> Acked-by: Viacheslav Ovsiienko <viacheslavo at nvidia.com>
>>>
>>>> -----Original Message-----
>>>> From: Bing Zhao <bingz at nvidia.com>
>>>> Sent: Tuesday, October 27, 2020 16:47
>>>> To: Slava Ovsiienko <viacheslavo at nvidia.com>; Matan Azrad
>>>> <matan at nvidia.com>; Ori Kam <orika at nvidia.com>
>>>> Cc: dev at dpdk.org; Raslan Darawsheh <rasland at nvidia.com>
>>>> Subject: [PATCH v2 2/2] net/mlx5: add flow sync API
>>>>
>>>> When creating a flow, the rule itself might not take effect
>>>> immediately once the function call returns with success. It may
>>>> take some time for the steering logic to synchronize with the
>>>> hardware.
>>>>
>>>> If the application wants a packet sent afterwards to hit the flow
>>>> just created, this flow sync API can be used to flush the steering
>>>> HW cache and enforce that the next packet hits the latest rules.
>>>>
>>>> For TX, usually the NIC TX domain and/or the FDB domain should be
>>>> synchronized, depending on the domain in which the flow is created.
>>>>
>>>> The application could also synchronize the NIC RX and/or the FDB
>>>> domain for ingress packets.
>>>>
>>>> Signed-off-by: Bing Zhao <bingz at nvidia.com>
>>>> Acked-by: Ori Kam <orika at nvidia.com>
>>
>> <...>
>>
>>>> @@ -8169,3 +8179,17 @@ int mlx5_alloc_tunnel_hub(struct
>>>> mlx5_dev_ctx_shared *sh)
>>>>               mlx5_free(thub);
>>>>       return err;
>>>>    }
>>>> +
>>>> +int
>>>> +rte_pmd_mlx5_sync_flow(uint16_t port_id, uint32_t domains)
>>>> +{
>>>> +    struct rte_eth_dev *dev = &rte_eth_devices[port_id];
>>>> +    const struct mlx5_flow_driver_ops *fops;
>>>> +    int ret;
>>>> +    struct rte_flow_attr attr = { .transfer = 0 };
>>>> +
>>>> +    fops = flow_get_drv_ops(flow_get_drv_type(dev, &attr));
>>>> +    ret = fops->sync_domain(dev, domains,
>>>> +                            MLX5DV_DR_DOMAIN_SYNC_FLAGS_HW);
>>>> +    if (ret > 0)
>>>> +            ret = -ret;
>>>> +    return ret;
>>>> +}
>>
>> This is causing a build error in Travis [1]. I guess it is related
>> to the rdma-core version; does 'MLX5DV_DR_DOMAIN_SYNC_FLAGS_HW' need
>> a version check in the header, as in other usages?
>>
>> Also, the 'MLX5DV_' macros seem to be used in 'mlx5_flow_dv.c'; is
>> it expected to use one in this file? (Just a high-level observation,
>> no idea on the details.)
> 
> I sent a fix for this yesterday, and it should solve the issue.
> That fix could be squashed.
> http://patches.dpdk.org/patch/82652/
> 

Got it, let me squash it in next-net, and I will test again.

>>
>> [1] https://travis-ci.org/github/ferruhy/dpdk/jobs/740008969
>>
>>
>>
>> By the way, I just noticed that the patch was acked with an
>> exception; has the documentation requested above (along with the
>> ack) been provided?
> 
> This is quite a simple internal API; its usage and details are described in the API doxygen comments.
> Do I need to list it in the documentation as well?
> 

That is the request from Slava above (release notes and mlx5 feature documentation update).
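For context, a minimal sketch of how an application would pair flow creation with the new sync API discussed in this thread. This is illustrative only: it assumes DPDK headers with this series (1/2 and 2/2) applied, and the `RTE_PMD_MLX5_DOMAIN_BIT_*` names are taken from patch 1/2; error handling is abbreviated.

```c
/* Sketch only: requires DPDK with the mlx5 PMD and this patch series;
 * not tied to a specific DPDK revision. */
#include <rte_errno.h>
#include <rte_flow.h>
#include <rte_pmd_mlx5.h>

static int
install_and_sync(uint16_t port_id,
		 const struct rte_flow_attr *attr,
		 const struct rte_flow_item pattern[],
		 const struct rte_flow_action actions[])
{
	struct rte_flow_error error;
	struct rte_flow *flow;

	flow = rte_flow_create(port_id, attr, pattern, actions, &error);
	if (flow == NULL)
		return -rte_errno;
	/*
	 * The rule may not yet be effective in HW; flush the steering
	 * cache for the egress domains so the next transmitted packet
	 * is guaranteed to hit the new rule.
	 */
	return rte_pmd_mlx5_sync_flow(port_id,
				      RTE_PMD_MLX5_DOMAIN_BIT_NIC_TX |
				      RTE_PMD_MLX5_DOMAIN_BIT_FDB);
}
```

For ingress rules the application would instead pass `RTE_PMD_MLX5_DOMAIN_BIT_NIC_RX` (and/or the FDB bit), matching the domain in which the flow was created.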
