|SUCCESS| [2/2] ip_frag: randomize hash seed
dpdklab at iol.unh.edu
Thu Apr 9 02:13:38 CEST 2026
Test-Label: ai-code-review
Test-Status: SUCCESS
https://dpdk.org/patch/162949
_AI Code Review_
=== Patch Review: series.patch (via Claude) ===
Review date: 2026-04-09
Reviewing the patches in this git commit series for DPDK submission.
git sha: first patch (1/2) not specified
Author: Stephen Hemminger <stephen at networkplumber.org>
Subject: [PATCH 1/2] ip_frag: fix unsafe TAILQ usage
This patch fixes unsafe list traversal in
rte_ip_frag_table_del_expired_entries(), where the next pointer was read
after TAILQ_REMOVE() had already unlinked the current entry. The fix
switches to RTE_TAILQ_FOREACH_SAFE(), which caches the next pointer before
the loop body runs, and also refactors the loop logic.
> diff --git a/lib/ip_frag/rte_ip_frag_common.c b/lib/ip_frag/rte_ip_frag_common.c
> index ee9aa93027..79ac45289b 100644
> --- a/lib/ip_frag/rte_ip_frag_common.c
> +++ b/lib/ip_frag/rte_ip_frag_common.c
> @@ -135,18 +135,18 @@ rte_ip_frag_table_del_expired_entries(struct rte_ip_frag_tbl *tbl,
> struct rte_ip_frag_death_row *dr, uint64_t tms)
> {
> uint64_t max_cycles;
> - struct ip_frag_pkt *fp;
> + struct ip_frag_pkt *fp, *tmp;
>
> max_cycles = tbl->max_cycles;
>
> - TAILQ_FOREACH(fp, &tbl->lru, lru)
> - if (max_cycles + fp->start < tms) {
> - /* check that death row has enough space */
> - if (RTE_IP_FRAG_DEATH_ROW_MBUF_LEN - dr->cnt >=
> - fp->last_idx)
> - ip_frag_tbl_del(tbl, dr, fp);
> - else
> - return;
> - } else
> + RTE_TAILQ_FOREACH_SAFE(fp, &tbl->lru, lru, tmp) {
> + if (max_cycles + fp->start >= tms)
> return;
> +
> + /* check that death row has enough space */
> + if (RTE_IP_FRAG_DEATH_ROW_MBUF_LEN - dr->cnt < fp->last_idx)
> + return;
> +
> + ip_frag_tbl_del(tbl, dr, fp);
> + }
> }
The refactored loop changes semantics beyond just fixing the unsafe TAILQ
usage. Previously, the function would return immediately when encountering
the first non-expired entry. Now with the inverted condition, does the code
still maintain the same early exit on non-expired entries?
Looking at the LRU ordering: entries in &tbl->lru should be ordered by age.
Does ip_frag_tbl_del() maintain this ordering? If newer entries can appear
before older ones in the list, could this cause premature loop termination?
The original logic attempted deletion only when death row had sufficient
space (>=); the new logic returns early when it does not (<). Since
(a < b) is the exact negation of (a >= b), the boundary case where the
remaining space exactly equals fp->last_idx behaves identically in both
versions (the entry is deleted), but it is worth confirming that flipping
both conditions and exits preserves behavior throughout the loop.
---
git sha: second patch (2/2) not specified
Author: Stephen Hemminger <stephen at networkplumber.org>
Subject: [PATCH 2/2] ip_frag: randomize hash seed
This patch adds hash seed randomization to prevent collision attacks in IP
fragmentation reassembly by using rte_rand() instead of a fixed prime.
> diff --git a/lib/ip_frag/ip_frag_internal.c b/lib/ip_frag/ip_frag_internal.c
> index 7cbef647df..382f42d0e1 100644
> --- a/lib/ip_frag/ip_frag_internal.c
> +++ b/lib/ip_frag/ip_frag_internal.c
> @@ -9,8 +9,6 @@
>
> #include "ip_frag_common.h"
>
> -#define PRIME_VALUE 0xeaad8405
[ ... ]
> @@ -38,7 +36,7 @@ ip_frag_tbl_reuse(struct rte_ip_frag_tbl *tbl, struct rte_ip_frag_death_row *dr,
>
>
> static inline void
> -ipv4_frag_hash(const struct ip_frag_key *key, uint32_t *v1, uint32_t *v2)
> +ipv4_frag_hash(const struct ip_frag_key *key, uint32_t *v1, uint32_t *v2, uint32_t seed)
> {
> uint32_t v;
> const uint32_t *p;
> @@ -46,12 +44,12 @@ ipv4_frag_hash(const struct ip_frag_key *key, uint32_t *v1, uint32_t *v2)
> p = (const uint32_t *)&key->src_dst;
>
> #if defined(RTE_ARCH_X86) || defined(RTE_ARCH_ARM64)
> - v = rte_hash_crc_4byte(p[0], PRIME_VALUE);
> + v = rte_hash_crc_4byte(p[0], seed);
> v = rte_hash_crc_4byte(p[1], v);
> v = rte_hash_crc_4byte(key->id, v);
> #else
>
> - v = rte_jhash_3words(p[0], p[1], key->id, PRIME_VALUE);
> + v = rte_jhash_3words(p[0], p[1], key->id, seed);
> #endif /* RTE_ARCH_X86 */
[ ... ]
> diff --git a/lib/ip_frag/ip_reassembly.h b/lib/ip_frag/ip_reassembly.h
> index 54afed5417..9e1666ef67 100644
> --- a/lib/ip_frag/ip_reassembly.h
> +++ b/lib/ip_frag/ip_reassembly.h
> @@ -79,6 +79,7 @@ struct rte_ip_frag_tbl {
> uint32_t bucket_entries; /* hash associativity. */
> uint32_t nb_entries; /* total size of the table. */
> uint32_t nb_buckets; /* num of associativity lines. */
> + uint32_t seed; /* hash function init value */
The comment uses a tab before the comment start. Does this match the
alignment style of the other fields in this structure? The existing fields
use spaces for alignment before their comments.
> struct ip_frag_pkt *last; /* last used entry. */
> struct ip_pkt_list lru; /* LRU list for table entries. */
> struct ip_frag_tbl_stat stat; /* statistics counters. */
> diff --git a/lib/ip_frag/rte_ip_frag_common.c b/lib/ip_frag/rte_ip_frag_common.c
> index 79ac45289b..00bf9476de 100644
> --- a/lib/ip_frag/rte_ip_frag_common.c
> +++ b/lib/ip_frag/rte_ip_frag_common.c
> @@ -79,6 +79,7 @@ rte_ip_frag_table_create(uint32_t bucket_num, uint32_t bucket_entries,
> tbl->nb_buckets = bucket_num;
> tbl->bucket_entries = bucket_entries;
> tbl->entry_mask = (tbl->nb_entries - 1) & ~(tbl->bucket_entries - 1);
> + tbl->seed = rte_rand();
Does rte_rand() guarantee sufficient entropy during early initialization?
In rte_ip_frag_table_create(), if this function is called during EAL
initialization before the random subsystem is fully seeded, could tbl->seed
have predictable values across runs or deployments?
Does struct rte_ip_frag_tbl get exposed through any ABI that applications
depend on? Adding the seed field changes the structure size and layout.
Should this addition consider ABI compatibility, or is this structure
opaque to applications?