[dpdk-dev] [RFC V1] examples/l3fwd-power: fix memory leak for rte_pci_device

lihuisong (C) lihuisong at huawei.com
Fri Oct 8 08:26:06 CEST 2021


On 2021/9/30 15:50, Thomas Monjalon wrote:
> 30/09/2021 08:28, Huisong Li:
>> Hi. Thomas
>>
>> I've summed up our previous discussion.
>>
>> Can you look at the final proposal again?
>>
>> Do you think we should deal with the problem better?
> I don't understand what the final proposal is.
Sorry.

The last idea we discussed was:

As you mentioned, we do not want the user to have to free the rte_pci_device,
yet we still want the rte_pci_device to be freed in a timely manner. So, can
we add logic to rte_eth_dev_close() that counts the ports under a PCI address
and, when the last one is closed, calls rte_dev_remove() to free the
rte_pci_device and delete it from rte_pci_bus?

If we do, we may need some extra work, otherwise some applications, such as
OVS-DPDK, will fail due to a second call to rte_dev_remove().
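
A very rough, untested sketch of that idea (the real rte_eth_dev_close() does
much more; this only shows where the port counting and the rte_dev_remove()
call could live, reusing the existing 'device' pointer of struct rte_eth_dev):

#include <stdbool.h>
#include <rte_dev.h>
#include <rte_ethdev.h>

/* Return true if another port still references the same underlying
 * rte_device (e.g. another port under the same PCI address). */
static bool
eth_dev_shares_device(const struct rte_eth_dev *dev)
{
	uint16_t pid;

	RTE_ETH_FOREACH_DEV(pid) {
		const struct rte_eth_dev *other = &rte_eth_devices[pid];

		if (other != dev && other->device == dev->device)
			return true;
	}
	return false;
}

/* Inside rte_eth_dev_close(), after the PMD close callback has returned
 * successfully, something like:
 *
 *	if (!eth_dev_shares_device(dev))
 *		rte_dev_remove(dev->device);
 *
 * would free the rte_pci_device and unlink it from rte_pci_bus as soon
 * as the last port under that PCI address is closed. */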


The method of releasing rte_pci_device in OVS-DPDK is as follows:

It calls dev_close() first, and then checks whether all ports under the
PCI address are closed; if so, it frees the rte_pci_device by calling
rte_dev_remove().
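
In code, the pattern looks roughly like this (a hypothetical helper for
illustration only, untested and paraphrased, not the exact OVS code; the real
implementation is netdev_dpdk_destruct() in lib/netdev-dpdk.c):

#include <stdbool.h>
#include <rte_dev.h>
#include <rte_ethdev.h>

/* Close one port and, if it was the last port attached to its rte_device
 * (i.e. its PCI address), remove the device so the rte_pci_device memory
 * is released. */
static void
app_close_port(uint16_t port_id)
{
	struct rte_eth_dev_info dev_info;
	struct rte_device *rte_dev = NULL;
	uint16_t sibling;
	bool last = true;

	/* Grab the parent device before the port disappears. */
	if (rte_eth_dev_info_get(port_id, &dev_info) == 0)
		rte_dev = dev_info.device;

	rte_eth_dev_close(port_id);

	if (rte_dev == NULL)
		return;

	/* Is any other port still attached to the same device? */
	RTE_ETH_FOREACH_DEV_OF(sibling, rte_dev) {
		if (sibling != port_id) {
			last = false;
			break;
		}
	}

	if (last)
		rte_dev_remove(rte_dev);
}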

If it's not clear enough, please take a look at the discussion in our
email thread. Thanks. 😁
>
>
>> On 2021/9/27 9:44, Huisong Li wrote:
>>> On 2021/9/27 3:16, Thomas Monjalon wrote:
>>>> 26/09/2021 14:20, Huisong Li:
>>>>> On 2021/9/18 16:46, Thomas Monjalon wrote:
>>>>>> 18/09/2021 05:24, Huisong Li:
>>>>>>> On 2021/9/17 20:50, Thomas Monjalon wrote:
>>>>>>>> 17/09/2021 04:13, Huisong Li:
>>>>>>>>> How should the PMD free it? What should we do? Any good suggestions?
>>>>>>>> Check that there is no other port sharing the same PCI device,
>>>>>>>> then call the PMD callback for rte_pci_remove_t.
>>>>>>> For primary and secondary processes, their rte_pci_device is
>>>>>>> independent.
>>>>>> Yes, it requires freeing on both primary and secondary.
>>>>>>
>>>>>>> Is this for a scenario where there are multiple representor ports
>>>>>>> under
>>>>>>> the same PCI address in the same process?
>>>>>> A PCI device can have multiple physical or representor ports.
>>>>> Got it.
>>>>>>>>> Would it be more appropriate to do this in rte_eal_cleanup() if it
>>>>>>>>> cannot be done in the API above?
>>>>>>>> rte_eal_cleanup is a last cleanup for what was not done earlier.
>>>>>>>> We could do that but first we should properly free devices when
>>>>>>>> closed.
>>>>>>>>
>>>>>>> Totally, it is appropriate that rte_eal_cleanup is responsible for
>>>>>>> releasing devices under the PCI bus.
>>>>>> Yes, but if a device is closed while the rest of the app keeps running,
>>>>>> we should not wait to free it.
>>>>> From this point of view, it seems to make sense. However, according
>>>>> to the OVS-DPDK usage, it calls dev_close() first, and then checks
>>>>> whether all ports under the PCI address are closed, in order to free
>>>>> the rte_pci_device by calling rte_dev_remove().
>>>>>
>>>>>
>>>>> If we do not want the user to be aware of this, and we want the
>>>>> rte_pci_device to be freed in a timely manner, can we add logic to
>>>>> rte_eth_dev_close() that counts the ports under a PCI address and
>>>>> calls rte_dev_remove() to free the rte_pci_device and delete it from
>>>>> rte_pci_bus?
>>>>>
>>>>> If we do, we may need some extra work, otherwise some applications,
>>>>> such as OVS-DPDK, will fail due to a second call to rte_dev_remove().
>>>> I don't understand the proposal.
>>>> Could you please explain the code path again?
>>> 1. This RFC patch intended to free the rte_pci_device in the DPDK app
>>> by calling rte_dev_remove() after calling dev_close().
>>>
>>> 2. For the above-mentioned usage in OVS-DPDK, please see the function
>>> netdev_dpdk_destruct() in lib/netdev-dpdk.c.
>>>
>>> 3. Later, you suggested that the release of the rte_pci_device should
>>> be done in the dev_close() API, not in rte_eal_cleanup(), which is not
>>> real-time.
>>>
>>> To sum up, that is how the above proposal came about.
>>>
>>>> It may deserve a separate mail thread.

