[dpdk-dev] [PATCH 0/2] Multiple Pktmbuf mempool support

Hemant Agrawal hemant.agrawal at nxp.com
Tue Oct 10 16:21:01 CEST 2017


On 10/10/2017 7:45 PM, Thomas Monjalon wrote:
> 25/09/2017 12:24, Olivier MATZ:
>> Hi Hemant,
>>
>> On Fri, Sep 22, 2017 at 12:43:36PM +0530, Hemant Agrawal wrote:
>>> On 7/4/2017 5:52 PM, Hemant Agrawal wrote:
>>>> This patch is in addition to the patch series [1] submitted by
>>>> Santosh to allow applications to set the mempool handle.
>>>>
>>>> The existing pktmbuf pool create API only supports the internal use
>>>> of "CONFIG_RTE_MBUF_DEFAULT_MEMPOOL_OPS", which assumes that the HW
>>>> can support only one type of mempool for packet mbufs.
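>>>>
>>>> For context, rte_pktmbuf_pool_create() currently does roughly:
>>>>
>>>>   /* ops name fixed at build time via
>>>>    * CONFIG_RTE_MBUF_DEFAULT_MEMPOOL_OPS */
>>>>   rte_mempool_set_ops_byname(mp, RTE_MBUF_DEFAULT_MEMPOOL_OPS, NULL);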
>>>>
>>>> There are several additional requirements:
>>>>
>>>> 1. A platform-independent image detects the underlying bus and,
>>>> based on the bus and resources detected, dynamically selects the
>>>> default mempool. This should not require application knowledge.
>>>> e.g. DPAA and DPAA2 are two different NXP platforms; depending on
>>>> the underlying platform, the default mbuf ops can be dpaa or dpaa2.
>>>> The application should work seamlessly whether it runs on dpaa or dpaa2.
>>>>
>>>> 2. The platform supports more than one type of mempool for pktmbuf;
>>>> depending on the availability of resources, the driver can pick one
>>>> of the mempools for the current packet mbuf request.
>>>>
>>>> 3. In the case where the application provides the mempool, as
>>>> proposed in [1], the preference-check logic is bypassed and the
>>>> application config takes priority.
>>>>
>>>> [1] Allow application to set mempool handle
>>>> http://dpdk.org/ml/archives/dev/2017-June/067022.html
>>>>
>>>> Hemant Agrawal (2):
>>>>   mempool: check the support for the given mempool
>>>>   mbuf: add support for preferred mempool list
>>>>
>>>>  config/common_base                   |  2 ++
>>>>  lib/librte_mbuf/rte_mbuf.c           | 28 +++++++++++++++++++++++-----
>>>>  lib/librte_mempool/rte_mempool.h     | 24 ++++++++++++++++++++++++
>>>>  lib/librte_mempool/rte_mempool_ops.c | 32 ++++++++++++++++++++++++++++++++
>>>>  4 files changed, 81 insertions(+), 5 deletions(-)
>>>>
>>>
>>> Hi Olivier,
>>> 	Any opinion on this patchset?
>>
>> Sorry for the lack of feedback; for some reason I missed the initial
>> mails.
>>
>> I don't quite like the idea of having a hardcoded config:
>>  CONFIG_RTE_MBUF_BACKUP_MEMPOOL_OPS_1=""
>>  CONFIG_RTE_MBUF_BACKUP_MEMPOOL_OPS_2=""
>>
>> Also, I have some reservations about rte_mempool_ops_check_support():
>> it can return "supported", but the creation of the pool can still fail
>> later due to the creation parameters (element count/size, mempool flags, ...).
>>
>> The preferred ordering of these mempool ops may also depend on the
>> configuration (enabled ports, for instance) or on user choices.
>>
>> Let me propose another approach to (I hope) solve your issue, based
>> on Santosh's patches [1].
>>
>> We can introduce a new helper that could be used by applications to
>> dynamically select the best mempool ops. It could be something similar
>> to the pseudo-code I've written in [3].
>>
>>   // return an array of pool ops names, ordered by preference
>>   pool_ops = get_ordered_pool_ops_list()
>>
>> Then try to create the first pool; if that fails, try the next, until
>> one succeeds.
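>>
>> For illustration only, here is a rough sketch of that loop, where
>> get_ordered_pool_ops_list() is the hypothetical helper above and
>> everything else is existing mempool/mbuf API:
>>
>>   #include <rte_mbuf.h>
>>   #include <rte_mempool.h>
>>
>>   /* hypothetical helper: NULL-terminated array of ops names,
>>    * ordered by preference */
>>   const char **get_ordered_pool_ops_list(void);
>>
>>   static struct rte_mempool *
>>   pktmbuf_pool_create_auto(const char *name, unsigned int n,
>>                            unsigned int cache_size, int socket_id)
>>   {
>>       const char **ops = get_ordered_pool_ops_list();
>>       unsigned int elt_size = sizeof(struct rte_mbuf) +
>>               RTE_MBUF_DEFAULT_BUF_SIZE;
>>       struct rte_mempool *mp;
>>       unsigned int i;
>>
>>       for (i = 0; ops[i] != NULL; i++) {
>>           mp = rte_mempool_create_empty(name, n, elt_size, cache_size,
>>                   sizeof(struct rte_pktmbuf_pool_private), socket_id, 0);
>>           if (mp == NULL)
>>               return NULL;
>>           /* bind this ops; on failure free the pool and try the next */
>>           if (rte_mempool_set_ops_byname(mp, ops[i], NULL) != 0) {
>>               rte_mempool_free(mp);
>>               continue;
>>           }
>>           rte_pktmbuf_pool_init(mp, NULL);
>>           /* populating is where an unsupported ops typically fails */
>>           if (rte_mempool_populate_default(mp) < 0) {
>>               rte_mempool_free(mp);
>>               continue;
>>           }
>>           rte_mempool_obj_iter(mp, rte_pktmbuf_init, NULL);
>>           return mp;
>>       }
>>       return NULL;
>>   }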
>>
>> That said, it is difficult to make the right decision inside
>> rte_pktmbuf_pool_create() because we don't have all the information
>> there. Sergio and Jerin suggested introducing a new argument ops_name
>> to rte_pktmbuf_pool_create() [2]. From what I remember, this is also
>> something you were in favor of, right?
>>
>> In that case, we move the difficult task of choosing the right
>> mempool into the application.
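>>
>> For example, such an extended rte_pktmbuf_pool_create() (hypothetical
>> signature; the ops name "dpaa2" is just for illustration) could be
>> called as:
>>
>>   mp = rte_pktmbuf_pool_create("mbuf_pool", 8192, 256, 0,
>>           RTE_MBUF_DEFAULT_BUF_SIZE, rte_socket_id(), "dpaa2");
>>
>> with a NULL ops_name keeping today's compile-time default.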
>>
>> Comments?
>
> I guess this discussion is obsolete since Santosh's commit?
> 	http://dpdk.org/commit/a103a97e
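>
> For reference, that commit adds an EAL option to override the default
> mempool ops at run time, e.g. (the ops name is just an example):
>
> 	testpmd --mbuf-pool-ops-name=dpaa2 -- -i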
>
>
Not yet. Based on our discussion with Olivier and on Santosh's commit, 
we are working out an approach that solves our issue.

We will close on it in the next few days.

