[PATCH v2] lib/hash: new feature adding existing key
Stephen Hemminger
stephen at networkplumber.org
Fri Oct 4 00:37:21 CEST 2024
On Fri, 16 Feb 2024 13:43:52 +0100
Thomas Monjalon <thomas at monjalon.net> wrote:
> Any review please?
> If maintainers agree with the idea, we should announce the ABI change.
>
>
> 23/10/2023 10:29, Abdullah Ömer Yamaç:
> > From: Abdullah Ömer Yamaç <omer.yamac at ceng.metu.edu.tr>
> >
> > In some use cases, data inserted with an existing key shouldn't be
> > overwritten. This patch adds a new flag that disables overwriting
> > the data for an existing key.
> >
> > Signed-off-by: Abdullah Ömer Yamaç <omer.yamac at ceng.metu.edu.tr>
> >
> > ---
> > Cc: Yipeng Wang <yipeng1.wang at intel.com>
> > Cc: Sameh Gobriel <sameh.gobriel at intel.com>
> > Cc: Bruce Richardson <bruce.richardson at intel.com>
> > Cc: Vladimir Medvedkin <vladimir.medvedkin at intel.com>
> > Cc: David Marchand <david.marchand at redhat.com>
> > ---
> > lib/hash/rte_cuckoo_hash.c | 10 +++++++++-
> > lib/hash/rte_cuckoo_hash.h | 2 ++
> > lib/hash/rte_hash.h | 4 ++++
> > 3 files changed, 15 insertions(+), 1 deletion(-)
> >
> > diff --git a/lib/hash/rte_cuckoo_hash.c b/lib/hash/rte_cuckoo_hash.c
> > index 19b23f2a97..fe8f21bee4 100644
> > --- a/lib/hash/rte_cuckoo_hash.c
> > +++ b/lib/hash/rte_cuckoo_hash.c
> > @@ -32,7 +32,8 @@
> > RTE_HASH_EXTRA_FLAGS_RW_CONCURRENCY | \
> > RTE_HASH_EXTRA_FLAGS_EXT_TABLE | \
> > RTE_HASH_EXTRA_FLAGS_NO_FREE_ON_DEL | \
> > - RTE_HASH_EXTRA_FLAGS_RW_CONCURRENCY_LF)
> > + RTE_HASH_EXTRA_FLAGS_RW_CONCURRENCY_LF | \
> > + RTE_HASH_EXTRA_FLAGS_DISABLE_UPDATE_EXISTING_KEY)
> >
> > #define FOR_EACH_BUCKET(CURRENT_BKT, START_BUCKET) \
> > for (CURRENT_BKT = START_BUCKET; \
> > @@ -148,6 +149,7 @@ rte_hash_create(const struct rte_hash_parameters *params)
> > unsigned int readwrite_concur_support = 0;
> > unsigned int writer_takes_lock = 0;
> > unsigned int no_free_on_del = 0;
> > + unsigned int no_update_data = 0;
> > uint32_t *ext_bkt_to_free = NULL;
> > uint32_t *tbl_chng_cnt = NULL;
> > struct lcore_cache *local_free_slots = NULL;
> > @@ -216,6 +218,9 @@ rte_hash_create(const struct rte_hash_parameters *params)
> > no_free_on_del = 1;
> > }
> >
> > + if (params->extra_flag & RTE_HASH_EXTRA_FLAGS_DISABLE_UPDATE_EXISTING_KEY)
> > + no_update_data = 1;
> > +
> > /* Store all keys and leave the first entry as a dummy entry for lookup_bulk */
> > if (use_local_cache)
> > /*
> > @@ -428,6 +433,7 @@ rte_hash_create(const struct rte_hash_parameters *params)
> > h->ext_table_support = ext_table_support;
> > h->writer_takes_lock = writer_takes_lock;
> > h->no_free_on_del = no_free_on_del;
> > + h->no_update_data = no_update_data;
> > h->readwrite_concur_lf_support = readwrite_concur_lf_support;
> >
> > #if defined(RTE_ARCH_X86)
> > @@ -707,6 +713,8 @@ search_and_update(const struct rte_hash *h, void *data, const void *key,
> > k = (struct rte_hash_key *) ((char *)keys +
> > bkt->key_idx[i] * h->key_entry_size);
> > if (rte_hash_cmp_eq(key, k->key, h) == 0) {
> > + if (h->no_update_data == 1)
> > + return -EINVAL;
This is buggy: the callers assume -1 on error in several places.
See:
	ret = search_and_update(h, data, key, prim_bkt, sig);
	if (ret != -1) {
		__hash_rw_writer_unlock(h);
		*ret_val = ret;
		return 1;
	}
These paths would be exercised when the table has to expand.
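
To illustrate the mismatch (a sketch only, not a tested fix; the identifiers
are taken from the quoted patch): once search_and_update() can return
-EINVAL in addition to -1 and a valid index, every call site like the one
above has to separate the error from the "key found" case before using the
value, e.g.:

	ret = search_and_update(h, data, key, prim_bkt, sig);
	if (ret != -1) {
		__hash_rw_writer_unlock(h);
		*ret_val = ret;
		/* ret may now be -EINVAL rather than a key index;
		 * returning 1 unconditionally would make the caller
		 * treat the rejected insert as a successful update,
		 * so the error has to be propagated explicitly. */
		return ret < 0 ? ret : 1;
	}

and the enclosing functions (including the cuckoo displacement paths) would
need the same audit, or the patch could keep the -1 convention inside
search_and_update() and signal "key exists, not updated" some other way.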
Also, any new functionality like this needs coverage in the functional tests.
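For example, a rough sketch of such a test against the public rte_hash API
(assuming the flag from this patch; not compiled or run):

#include <stdint.h>
#include <rte_hash.h>
#include <rte_jhash.h>

static int
test_hash_disable_update_existing_key(void)
{
	struct rte_hash_parameters params = {
		.name = "no_update_test",
		.entries = 64,
		.key_len = sizeof(uint32_t),
		.hash_func = rte_jhash,
		.socket_id = 0,
		.extra_flag = RTE_HASH_EXTRA_FLAGS_DISABLE_UPDATE_EXISTING_KEY,
	};
	struct rte_hash *h = rte_hash_create(&params);
	uint32_t key = 42;
	void *data = NULL;

	if (h == NULL)
		return -1;

	/* First insert must succeed. */
	if (rte_hash_add_key_data(h, &key, (void *)(uintptr_t)1) < 0)
		goto fail;
	/* Second insert with the same key must now be rejected... */
	if (rte_hash_add_key_data(h, &key, (void *)(uintptr_t)2) >= 0)
		goto fail;
	/* ...and the original data must be untouched. */
	if (rte_hash_lookup_data(h, &key, &data) < 0 ||
	    data != (void *)(uintptr_t)1)
		goto fail;

	rte_hash_free(h);
	return 0;
fail:
	rte_hash_free(h);
	return -1;
}

A thorough test would also fill the table enough to force cuckoo
displacement, since that is where the callers' -1 assumption is exercised.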