[dpdk-dev] [PATCH v3 5/6] mempool: introduce block size align flag
Santosh Shukla
santosh.shukla at caviumnetworks.com
Thu Jul 20 15:47:58 CEST 2017
Some mempool hardware, such as the octeontx/fpa block, demands that the
object start address be aligned to the block size (total_elt_sz).
Introduce a MEMPOOL_F_POOL_BLK_SZ_ALIGNED flag.
If this flag is set:
- Align object start address to a multiple of total_elt_sz.
- Allocate one additional object. The additional object is needed to make
  sure that the requested 'n' objects get correctly populated. Example:
  - Say we get a memory chunk of size 'x' from the memzone, and the
    application has requested 'n' objects from the mempool.
  - Ideally, the n objects would occupy addresses 0 through (x - block_sz).
  - But the first object address (0) is not necessarily aligned to block_sz.
  - So we derive an offset 'off' that makes the first object's start
    address block_sz aligned.
  - Applying 'off' can sacrifice up to the first block_sz bytes of the
    memzone area x, so only n-1 objects fit in the pool area, which is
    incorrect behavior.
Therefore we request one additional object (block_sz area) from the memzone
when the F_BLK_SZ_ALIGNED flag is set.
Signed-off-by: Santosh Shukla <santosh.shukla at caviumnetworks.com>
Signed-off-by: Jerin Jacob <jerin.jacob at caviumnetworks.com>
---
v1 -- v2:
- patch description changed.
- Removed if/else-if bracket mix
- removed sanity check for alignment
- removed extra var delta
- Removed __rte_unused from xmem_usage/size and added _BLK_SZ_ALIGN check.
Refer v1 review comment [1].
[1] http://dpdk.org/dev/patchwork/patch/25605/
lib/librte_mempool/rte_mempool.c | 16 +++++++++++++---
lib/librte_mempool/rte_mempool.h | 1 +
2 files changed, 14 insertions(+), 3 deletions(-)
diff --git a/lib/librte_mempool/rte_mempool.c b/lib/librte_mempool/rte_mempool.c
index 19e5e6ddf..7610f0d1f 100644
--- a/lib/librte_mempool/rte_mempool.c
+++ b/lib/librte_mempool/rte_mempool.c
@@ -239,10 +239,14 @@ rte_mempool_calc_obj_size(uint32_t elt_size, uint32_t flags,
*/
size_t
rte_mempool_xmem_size(uint32_t elt_num, size_t total_elt_sz, uint32_t pg_shift,
- __rte_unused const struct rte_mempool *mp)
+ const struct rte_mempool *mp)
{
size_t obj_per_page, pg_num, pg_sz;
+ if (mp && mp->flags & MEMPOOL_F_POOL_BLK_SZ_ALIGNED)
+ /* alignment needs one additional object */
+ elt_num += 1;
+
if (total_elt_sz == 0)
return 0;
@@ -265,13 +269,16 @@ rte_mempool_xmem_size(uint32_t elt_num, size_t total_elt_sz, uint32_t pg_shift,
ssize_t
rte_mempool_xmem_usage(__rte_unused void *vaddr, uint32_t elt_num,
size_t total_elt_sz, const phys_addr_t paddr[], uint32_t pg_num,
- uint32_t pg_shift, __rte_unused const struct rte_mempool *mp)
+ uint32_t pg_shift, const struct rte_mempool *mp)
{
uint32_t elt_cnt = 0;
phys_addr_t start, end;
uint32_t paddr_idx;
size_t pg_sz = (size_t)1 << pg_shift;
+ if (mp && mp->flags & MEMPOOL_F_POOL_BLK_SZ_ALIGNED)
+ /* alignment needs one additional object */
+ elt_num += 1;
/* if paddr is NULL, assume contiguous memory */
if (paddr == NULL) {
@@ -389,7 +396,10 @@ rte_mempool_populate_phys(struct rte_mempool *mp, char *vaddr,
memhdr->free_cb = free_cb;
memhdr->opaque = opaque;
- if (mp->flags & MEMPOOL_F_NO_CACHE_ALIGN)
+ if (mp->flags & MEMPOOL_F_POOL_BLK_SZ_ALIGNED)
+ /* align object start address to a multiple of total_elt_sz */
+ off = total_elt_sz - ((uintptr_t)vaddr % total_elt_sz);
+ else if (mp->flags & MEMPOOL_F_NO_CACHE_ALIGN)
off = RTE_PTR_ALIGN_CEIL(vaddr, 8) - vaddr;
else
off = RTE_PTR_ALIGN_CEIL(vaddr, RTE_CACHE_LINE_SIZE) - vaddr;
diff --git a/lib/librte_mempool/rte_mempool.h b/lib/librte_mempool/rte_mempool.h
index a4bfdb56e..d7c2416f4 100644
--- a/lib/librte_mempool/rte_mempool.h
+++ b/lib/librte_mempool/rte_mempool.h
@@ -266,6 +266,7 @@ struct rte_mempool {
#define MEMPOOL_F_POOL_CREATED 0x0010 /**< Internal: pool is created. */
#define MEMPOOL_F_NO_PHYS_CONTIG 0x0020 /**< Don't need physically contiguous objs. */
#define MEMPOOL_F_CAPA_PHYS_CONTIG 0x0040 /**< Detect physcially contiguous objs */
+#define MEMPOOL_F_POOL_BLK_SZ_ALIGNED 0x0080 /**< Align obj start address to total elem size */
/**
* @internal When debug is enabled, store some statistics.
--
2.11.0