[PATCH v7 05/12] net/nfp: add flower PF setup and mempool init logic
Ferruh Yigit
ferruh.yigit at xilinx.com
Tue Sep 6 12:18:39 CEST 2022
On 9/6/2022 9:45 AM, Chaoyong He wrote:
>
>> -----Original Message-----
>> From: Ferruh Yigit <ferruh.yigit at xilinx.com>
>> Sent: Monday, September 5, 2022 11:42 PM
>> To: Chaoyong He <chaoyong.he at corigine.com>; dev at dpdk.org
>> Cc: oss-drivers <oss-drivers at corigine.com>; Niklas Soderlund
>> <niklas.soderlund at corigine.com>
>> Subject: Re: [PATCH v7 05/12] net/nfp: add flower PF setup and mempool
>> init logic
>>
>> On 8/12/2022 11:22 AM, Chaoyong He wrote:
>>> Adds the vNIC initialization logic for the flower PF vNIC. The flower
>>> firmware exposes this vNIC for the purposes of fallback traffic in the
>>> switchdev use-case. The logic of setting up this vNIC is similar to
>>> the logic seen in nfp_net_init() and nfp_net_start().
>>>
>>> Adds minimal dev_ops for this PF device. Because the device is
>>> exposed externally to DPDK it should also be configured using DPDK
>>> helpers like rte_eth_dev_configure(). For these helpers to work the
>>> flower logic needs to implement a minimal set of dev_ops. The Rx and
>>> Tx logic for this vNIC will be added in a subsequent commit.
>>>
>>> OVS expects packets entering the OVS datapath to be allocated from a
>>> mempool that contains objects of type "struct dp_packet". For the PF
>>> handling the slowpath into OVS, a mempool compatible with OVS should
>>> be used. This commit adds the logic to create the OVS-compatible
>>> mempool. It adds certain OVS-specific structs to be able to
>>> instantiate the mempool.
>>>
>>
>> Can you please elaborate what is OVS compatible mempool?
>>
>> <...>
>>
>>> +static inline struct nfp_app_flower *
>>> +nfp_app_flower_priv_get(struct nfp_pf_dev *pf_dev)
>>> +{
>>> +	if (pf_dev == NULL)
>>> +		return NULL;
>>> +	else if (pf_dev->app_id != NFP_APP_FLOWER_NIC)
>>> +		return NULL;
>>> +	else
>>> +		return (struct nfp_app_flower *)pf_dev->app_priv;
>>> +}
>>> +
>>
>> What do you think about unifying the functions that get private data?
>> Instead of having one function per FW, would a single one be possible?
>>
>
> At first we used two macros for this, and Andrew advised changing them to functions.
> ```
> #define NFP_APP_PRIV_TO_APP_NIC(app_priv) \
>         ((struct nfp_app_nic *)app_priv)
>
> #define NFP_APP_PRIV_TO_APP_FLOWER(app_priv) \
>         ((struct nfp_app_flower *)app_priv)
> ```
> So your advice is that we unify the functions into something like:
> ```
> static inline struct nfp_app_nic *
> nfp_app_priv_get(struct nfp_pf_dev *pf_dev)
> {
>         if (pf_dev == NULL)
>                 return NULL;
>         else if (pf_dev->app_id == NFP_APP_CORE_NIC ||
>                         pf_dev->app_id == NFP_APP_FLOWER_NIC)
>                 return pf_dev->app_priv;
>         else
>                 return NULL;
> }
> ```
> and convert the pointer type wherever this function is called?
Since the returned pointer types differ, it should return "void *":
```
static inline void *
nfp_app_priv_get(struct nfp_pf_dev *pf_dev)
{
        if (pf_dev == NULL)
                return NULL;
        else if (pf_dev->app_id == NFP_APP_CORE_NIC ||
                        pf_dev->app_id == NFP_APP_FLOWER_NIC)
                return pf_dev->app_priv;
        else
                return NULL;
}
```
And when assigning a pointer from "void *", no explicit cast is required.
```
struct nfp_app_flower *app_flower;
app_flower = nfp_app_priv_get(pf_dev);
```
I think it is better to have a single function instead of a different
helper function for each FW, but I would like to get @Andrew's comment too.
Btw, since 'nfp_app_nic*_priv_get' now returns 'NULL', should callers
check for NULL? That may introduce too many checks, and if the checks are
not necessary, what is the benefit of the function over the macro?