[RFC 3/3] hash: add support for common small key sizes

Mattias Rönnblom hofors at lysator.liu.se
Fri Aug 22 09:19:45 CEST 2025


On 2025-08-21 22:35, Stephen Hemminger wrote:
> Add new compare functions for common small key sizes.
> 
> Bugzilla ID: 1775
> Suggested-by: Morten Brørup <mb at smartsharesystems.com>
> Reported-by: Mattias Rönnblom <mattias.ronnblom at ericsson.com>
> 
> Signed-off-by: Stephen Hemminger <stephen at networkplumber.org>
> ---
>   lib/hash/rte_cuckoo_hash.c | 54 ++++++++++++++++++++++++++++++++++++++
>   1 file changed, 54 insertions(+)
> 
> diff --git a/lib/hash/rte_cuckoo_hash.c b/lib/hash/rte_cuckoo_hash.c
> index 3212695d92..825889c320 100644
> --- a/lib/hash/rte_cuckoo_hash.c
> +++ b/lib/hash/rte_cuckoo_hash.c
> @@ -49,6 +49,11 @@ RTE_LOG_REGISTER_DEFAULT(hash_logtype, INFO);
>    */
>   enum cmp_jump_table_case {
>   	KEY_CUSTOM = 0,
> +	KEY_2_BYTES,
> +	KEY_4_BYTES,
> +	KEY_6_BYTES,
> +	KEY_8_BYTES,
> +	KEY_12_BYTES,
>   	KEY_16_BYTES,
>   	KEY_32_BYTES,
>   	KEY_48_BYTES,
> @@ -86,6 +91,50 @@ rte_hash_k32_cmp_eq(const void *key1, const void *key2, size_t key_len)
>   }
>   #endif
>   
> +static inline int
> +rte_hash_k2_cmp_eq(const void *key1, const void *key2, size_t key_len __rte_unused)
> +{
> +	const uint16_t *k1 = key1;
> +	const uint16_t *k2 = key2;
> +

This change requires the keys to be 16-bit aligned, which wasn't the 
case before.

You could do

uint16_t k1;
memcpy(&k1, key1, sizeof(uint16_t));

instead. That would generate the same object code, but be safe from any 
future alignment issues.
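
For completeness, a full k2 variant along those lines (just a sketch, 
untested):

static inline int
rte_hash_k2_cmp_eq(const void *key1, const void *key2, size_t key_len __rte_unused)
{
	uint16_t k1, k2;

	/* fixed-size memcpy() of a small key compiles down to plain loads */
	memcpy(&k1, key1, sizeof(k1));
	memcpy(&k2, key2, sizeof(k2));

	return k1 ^ k2;
}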

Anyway, maybe it's safe to assume the keys are aligned, so this is not 
an issue.
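
If they can't be assumed aligned, another option would be the 
unaligned_uint16_t typedef from <rte_common.h>, used the same way the 
generic rte_hash_k16_cmp_eq implementation uses unaligned_uint64_t (if 
I read that code correctly):

const unaligned_uint16_t *k1 = key1;
const unaligned_uint16_t *k2 = key2;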

> +	return k1[0] ^ k2[0];
> +}

Haven't you implemented "neq" rather than "eq" here? If the keys are 
equal, the result is 0; for a function named "eq", I would expect equal 
keys to yield != 0.

Would it be worth adding a comment like "use XOR to make this 
branch-free"? It may not be obvious to all readers.

That said, I’m not sure this trick will actually change the generated 
object code - especially if the result of the eq function is still used 
in a conditional afterward. Anyway, keeping it seems like a good 
conservative approach.
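
For comparison, the straightforward variant (a sketch; I haven't 
actually checked whether the generated code differs):

	return k1[0] != k2[0];

Both expressions are zero exactly when the keys are equal, so the 
compiler is free to emit the same code for either.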

> +
> +static inline int
> +rte_hash_k4_cmp_eq(const void *key1, const void *key2, size_t key_len __rte_unused)
> +{
> +	const uint32_t *k1 = key1;
> +	const uint32_t *k2 = key2;
> +
> +	return k1[0] ^ k2[0];
> +}
> +
> +static inline int
> +rte_hash_k6_cmp_eq(const void *key1, const void *key2, size_t key_len __rte_unused)
> +{
> +	const uint16_t *k1 = key1;
> +	const uint16_t *k2 = key2;
> +
> +	return (k1[0] ^ k2[0]) | (k1[1] ^ k2[1]) | (k1[2] ^ k2[2]);
> +}
> +
> +static inline int
> +rte_hash_k8_cmp_eq(const void *key1, const void *key2, size_t key_len __rte_unused)
> +{
> +	const uint64_t *k1 = key1;
> +	const uint64_t *k2 = key2;
> +
> +	return k1[0] ^ k2[0];
> +}
> +
> +static inline int
> +rte_hash_k12_cmp_eq(const void *key1, const void *key2, size_t key_len __rte_unused)
> +{
> +	const uint32_t *k1 = key1;
> +	const uint32_t *k2 = key2;
> +
> +	return (k1[0] ^ k2[0]) | (k1[1] ^ k2[1]) | (k1[2] ^ k2[2]);
> +}
>   
>   static inline int
>   rte_hash_k48_cmp_eq(const void *key1, const void *key2, size_t key_len)
> @@ -228,6 +277,11 @@ void rte_hash_set_cmp_func(struct rte_hash *h, rte_hash_cmp_eq_t func)
>    */
>   static const rte_hash_cmp_eq_t cmp_jump_table[NUM_KEY_CMP_CASES] = {
>   	[KEY_CUSTOM] = NULL,
> +	[KEY_2_BYTES] = rte_hash_k2_cmp_eq,
> +	[KEY_4_BYTES] = rte_hash_k4_cmp_eq,
> +	[KEY_6_BYTES] = rte_hash_k6_cmp_eq,
> +	[KEY_8_BYTES] = rte_hash_k8_cmp_eq,
> +	[KEY_12_BYTES] = rte_hash_k12_cmp_eq,
>   	[KEY_16_BYTES] = rte_hash_k16_cmp_eq,
>   	[KEY_32_BYTES] = rte_hash_k32_cmp_eq,
>   	[KEY_48_BYTES] = rte_hash_k48_cmp_eq,


