[dpdk-dev] [PATCH v3 0/6] mempool: add bucket driver
Andrew Rybchenko
arybchenko at solarflare.com
Wed Apr 25 18:32:16 CEST 2018
The initial patch series [1] (RFCv1 is [2]) has been split in two to
simplify processing. This is the second part; it relies on the first
one [3], which has already been applied.
The patch series adds a bucket mempool driver which allows the
allocation of (both physically and virtually) contiguous blocks of
objects and adds a mempool API to do it. The driver can still provide
individual objects, but it is definitely more heavyweight than the
ring/stack drivers.
The driver will be used by future Solarflare driver enhancements which
allow physically contiguous blocks to be utilized in the NIC firmware.
The target use case is dequeuing in blocks and enqueuing individual
objects back (which are collected into buckets to be dequeued). So, a
memory pool with the bucket driver is created by an application and
provided to the networking PMD receive queue. The bucket driver is
chosen using rte_eth_dev_pool_ops_supported(). A PMD that relies upon
contiguous block allocation should report the bucket driver as the only
supported and preferred one.
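For illustration only (this snippet is not part of the series), an
application could select the bucket ops for its receive mempool roughly
as follows; the pool name, NB_RX_MBUFS and RX_MBUF_CACHE are made-up
values:

#include <stdint.h>
#include <rte_ethdev.h>
#include <rte_mbuf.h>
#include <rte_mempool.h>

#define NB_RX_MBUFS	8192
#define RX_MBUF_CACHE	256

static struct rte_mempool *
create_rx_pool(uint16_t port_id, int socket_id)
{
	const char *ops = "ring_mp_mc";	/* conventional default ops */

	/*
	 * rte_eth_dev_pool_ops_supported() returns 1 if "bucket" is the
	 * preferred ops for this port, 0 if it is merely supported and
	 * a negative value if it is not supported at all.
	 */
	if (rte_eth_dev_pool_ops_supported(port_id, "bucket") >= 0)
		ops = "bucket";

	return rte_pktmbuf_pool_create_by_ops("rx_pool", NB_RX_MBUFS,
					      RX_MBUF_CACHE, 0,
					      RTE_MBUF_DEFAULT_BUF_SIZE,
					      socket_id, ops);
}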
The benefit of introducing the contiguous block dequeue operation is
demonstrated by performance measurements using the mempool autotest with
minor enhancements:
- in the original test, bulk sizes are powers of two, which is
unsuitable for this use case, so they are changed to multiples of
contig_block_size;
- the test code is duplicated to support both plain dequeue and
dequeue_contig_blocks (a rough sketch of the latter follows the list);
- all the extra test variations (with/without cache etc.) are eliminated;
- a fake read from the dequeued buffer is added (in both cases) to
simulate mbuf access.
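Below is a rough sketch of one iteration of the dequeue_contig_blocks
variant. It is not the actual (hacked) test code; objs_per_block would
come from the new mempool info API (see the snippet further down), and
the per-object stride inside a block is an assumption made purely for
illustration. Both rte_mempool_get_contig_blocks() and the info API are
experimental, so ALLOW_EXPERIMENTAL_API is needed to build against them.

#include <stddef.h>
#include <stdint.h>
#include <rte_mempool.h>

static int
contig_dequeue_iteration(struct rte_mempool *mp, unsigned int n_blocks,
			 unsigned int objs_per_block)
{
	void *first_obj[n_blocks];
	/* Assumed spacing of objects inside a contiguous block. */
	size_t stride = mp->header_size + mp->elt_size + mp->trailer_size;
	volatile uint8_t sink = 0;
	unsigned int i, j;

	/* Dequeue n_blocks contiguous blocks of objs_per_block objects. */
	if (rte_mempool_get_contig_blocks(mp, first_obj, n_blocks) != 0)
		return -1;

	for (i = 0; i < n_blocks; i++) {
		for (j = 0; j < objs_per_block; j++) {
			uint8_t *obj = (uint8_t *)first_obj[i] + j * stride;

			sink += *obj;	/* fake read simulating mbuf access */
			/* Objects are enqueued back individually; the
			 * driver collects them into buckets again. */
			rte_mempool_put(mp, obj);
		}
	}

	(void)sink;
	return 0;
}

The measured rates follow.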
start performance test for bucket (without cache)
mempool_autotest cache= 0 cores= 1 n_get_bulk= 15 n_put_bulk= 1 n_keep= 30 Srate_persec= 111935488
mempool_autotest cache= 0 cores= 1 n_get_bulk= 15 n_put_bulk= 1 n_keep= 60 Srate_persec= 115290931
mempool_autotest cache= 0 cores= 1 n_get_bulk= 15 n_put_bulk= 15 n_keep= 30 Srate_persec= 353055539
mempool_autotest cache= 0 cores= 1 n_get_bulk= 15 n_put_bulk= 15 n_keep= 60 Srate_persec= 353330790
mempool_autotest cache= 0 cores= 2 n_get_bulk= 15 n_put_bulk= 1 n_keep= 30 Srate_persec= 224657407
mempool_autotest cache= 0 cores= 2 n_get_bulk= 15 n_put_bulk= 1 n_keep= 60 Srate_persec= 230411468
mempool_autotest cache= 0 cores= 2 n_get_bulk= 15 n_put_bulk= 15 n_keep= 30 Srate_persec= 706700902
mempool_autotest cache= 0 cores= 2 n_get_bulk= 15 n_put_bulk= 15 n_keep= 60 Srate_persec= 703673139
mempool_autotest cache= 0 cores= 4 n_get_bulk= 15 n_put_bulk= 1 n_keep= 30 Srate_persec= 425236887
mempool_autotest cache= 0 cores= 4 n_get_bulk= 15 n_put_bulk= 1 n_keep= 60 Srate_persec= 437295512
mempool_autotest cache= 0 cores= 4 n_get_bulk= 15 n_put_bulk= 15 n_keep= 30 Srate_persec= 1343409356
mempool_autotest cache= 0 cores= 4 n_get_bulk= 15 n_put_bulk= 15 n_keep= 60 Srate_persec= 1336567397
start performance test for bucket (without cache + contiguous dequeue)
mempool_autotest cache= 0 cores= 1 n_get_bulk= 15 n_put_bulk= 1 n_keep= 30 Crate_persec= 122945536
mempool_autotest cache= 0 cores= 1 n_get_bulk= 15 n_put_bulk= 1 n_keep= 60 Crate_persec= 126458265
mempool_autotest cache= 0 cores= 1 n_get_bulk= 15 n_put_bulk= 15 n_keep= 30 Crate_persec= 374262988
mempool_autotest cache= 0 cores= 1 n_get_bulk= 15 n_put_bulk= 15 n_keep= 60 Crate_persec= 377316966
mempool_autotest cache= 0 cores= 2 n_get_bulk= 15 n_put_bulk= 1 n_keep= 30 Crate_persec= 244842496
mempool_autotest cache= 0 cores= 2 n_get_bulk= 15 n_put_bulk= 1 n_keep= 60 Crate_persec= 251618917
mempool_autotest cache= 0 cores= 2 n_get_bulk= 15 n_put_bulk= 15 n_keep= 30 Crate_persec= 751226060
mempool_autotest cache= 0 cores= 2 n_get_bulk= 15 n_put_bulk= 15 n_keep= 60 Crate_persec= 756233010
mempool_autotest cache= 0 cores= 4 n_get_bulk= 15 n_put_bulk= 1 n_keep= 30 Crate_persec= 462068120
mempool_autotest cache= 0 cores= 4 n_get_bulk= 15 n_put_bulk= 1 n_keep= 60 Crate_persec= 476997221
mempool_autotest cache= 0 cores= 4 n_get_bulk= 15 n_put_bulk= 15 n_keep= 30 Crate_persec= 1432171313
mempool_autotest cache= 0 cores= 4 n_get_bulk= 15 n_put_bulk= 15 n_keep= 60 Crate_persec= 1438829771
The number of objects in a contiguous block is a function of the bucket
memory size (a .config option) and the total element size. In the
future, an additional API that allows parameters to be passed at mempool
allocation time may be added.
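At run time an application can obtain this value from the driver through
the new (experimental) info API; a minimal usage sketch:

#include <rte_mempool.h>

/* Return the number of objects per contiguous block, or 0 if the
 * driver does not provide this information. */
static unsigned int
get_contig_block_size(const struct rte_mempool *mp)
{
	struct rte_mempool_info info;

	if (rte_mempool_ops_get_info(mp, &info) != 0)
		return 0;

	return info.contig_block_size;
}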
It breaks the ABI since it changes rte_mempool_ops. The ABI version has
already been bumped in [4].
I've double-checked that mempool_autotest and mempool_perf_autotest
work fine if the EAL argument --mbuf-pool-ops-name=bucket is used.
As is, mempool_perf_autotest shows a lower rate for bucket than for
ring_mp_mc, since the test dequeue bulk sizes are not aligned to the
contiguous block size and the bucket driver is optimized for contiguous
block allocation (or at least for allocation in bulks that are a
multiple of the contiguous block size).
However, real usage of the bucket driver, even without contiguous block
dequeue (a transmit-only benchmark which simply generates traffic),
shows a better packet rate. It looks like this is because the driver is
stack-based (per lcore, without locks/barriers) and improves the cache
hit rate (the working memory is smaller, since it is a subset of the
mempool rather than the entire mempool when some objects do not fit
into the mempool cache).
Unfortunately, I have not yet finalized the patches which would allow
the above measurements to be repeated (they were done using hacks).
The driver is required for [5].
[1] https://dpdk.org/ml/archives/dev/2018-January/088698.html
[2] https://dpdk.org/ml/archives/dev/2017-November/082335.html
[3] https://dpdk.org/ml/archives/dev/2018-April/097354.html
[4] https://dpdk.org/ml/archives/dev/2018-April/097352.html
[5] https://dpdk.org/ml/archives/dev/2018-April/098089.html
v2 -> v3:
- rebase
- align the rte_mempool_info structure size to avoid ABI breakages in a
number of cases when something relatively small is added
- fix a bug in get_count caused by objects in the adaptation rings not
being counted
- squash __mempool_generic_get_contig_blocks() into
rte_mempool_get_contig_blocks()
- fix typo in documentation
v1 -> v2:
- just rebase
RFCv2 -> v1:
- rebased on top of [3]
- clean up the deprecation notice when it is done
- mark a new API experimental
- move contig blocks dequeue debug checks/processing to the library function
- add contig blocks get stats
- add release notes
RFCv1 -> RFCv2:
- change the info API to get from the driver the information required
for the API user to know the contiguous block size
- use SPDX tags
- avoid affinity of all objects to a single lcore
- fix bucket get_count
- fix NO_CACHE_ALIGN case in bucket mempool
Andrew Rybchenko (1):
doc: advertise bucket mempool driver
Artem V. Andreev (5):
mempool/bucket: implement bucket mempool manager
mempool: implement abstract mempool info API
mempool: support block dequeue operation
mempool/bucket: implement block dequeue operation
mempool/bucket: do not allow one lcore to grab all buckets
MAINTAINERS | 9 +
config/common_base | 2 +
doc/guides/rel_notes/deprecation.rst | 7 -
doc/guides/rel_notes/release_18_05.rst | 10 +-
drivers/mempool/Makefile | 1 +
drivers/mempool/bucket/Makefile | 27 +
drivers/mempool/bucket/meson.build | 9 +
drivers/mempool/bucket/rte_mempool_bucket.c | 628 +++++++++++++++++++++
.../mempool/bucket/rte_mempool_bucket_version.map | 4 +
lib/librte_mempool/Makefile | 1 +
lib/librte_mempool/meson.build | 2 +
lib/librte_mempool/rte_mempool.c | 39 ++
lib/librte_mempool/rte_mempool.h | 171 ++++++
lib/librte_mempool/rte_mempool_ops.c | 16 +
lib/librte_mempool/rte_mempool_version.map | 8 +
mk/rte.app.mk | 1 +
16 files changed, 927 insertions(+), 8 deletions(-)
create mode 100644 drivers/mempool/bucket/Makefile
create mode 100644 drivers/mempool/bucket/meson.build
create mode 100644 drivers/mempool/bucket/rte_mempool_bucket.c
create mode 100644 drivers/mempool/bucket/rte_mempool_bucket_version.map
--
2.14.1