[dpdk-dev] [RFC PATCH 0/6] mempool: add bucket mempool driver

Andrew Rybchenko arybchenko at solarflare.com
Wed Jan 17 16:03:11 CET 2018


Hi Olivier,

first of all many thanks for the review. See my replies/comments below.
I'll also reply to the specific patch emails.

On 12/14/2017 04:36 PM, Olivier MATZ wrote:
> Hi Andrew,
>
> Please find some comments about this patchset below.
> I'll also send some comments as replies to the specific patch.
>
> On Fri, Nov 24, 2017 at 04:06:25PM +0000, Andrew Rybchenko wrote:
>> The patch series adds a bucket mempool driver which allows allocation
>> of (both physically and virtually) contiguous blocks of objects and
>> adds a mempool API to do it. It is still capable of providing separate
>> objects, but it is definitely more heavy-weight than the ring/stack
>> drivers.
>>
>> The target use case is dequeuing objects in blocks and enqueuing them
>> back individually (where they are collected into buckets to be
>> dequeued). So, a mempool with the bucket driver is created by an
>> application and provided to a networking PMD receive queue. The bucket
>> driver is chosen using rte_eth_dev_pool_ops_supported(). A PMD that
>> relies upon contiguous block allocation should report the bucket
>> driver as the only supported and preferred one.
> So, you are planning to use this driver for a future/existing PMD?

Yes, we're going to use it in the sfc PMD with a dedicated FW variant
which utilizes the bucketing.

> Do you have numbers about the performance gain, in which conditions,
> etc.? And are there conditions where there is a performance loss?

Our idea here is to use it together with HW/FW which understands the
bucketing. It adds some load on the CPU to track buckets, but
block/bucket dequeue allows us to compensate for it. We'll try to
prepare performance figures when we have a solution close to final.
Hopefully pretty soon.

>> The number of objects in the contiguous block is a function of bucket
>> memory size (.config option) and total element size.
> The size of the bucket memory is hardcoded to 32KB.
> Why this value?

It is just an example. In fact, we mainly test with 64K and 128K.

> Won't that be an issue if the user wants to use larger objects?

Ideally it should be start-time configurable, but that requires a way
to pass driver-specific parameters to the mempool on allocation.
For now we have decided to leave this task for the future since there
is no clear understanding of how it should look.
If you have ideas, please share them; we would be thankful.

>> As I understand it, this breaks the ABI, so it requires 3 acks in
>> accordance with policy, a deprecation notice and a mempool shared
>> library version bump.
>> If there is a way to avoid the ABI breakage, please let us know.
> If my understanding is correct, the ABI breakage is caused by the
> addition of the new block dequeue operation, right?

Yes, and we'll have more ops to make the population of objects customizable.

Thanks,
Andrew.
