[dpdk-dev] [PATCH 2/4] hash: add memory ordering to avoid race conditions

Ola Liljedahl Ola.Liljedahl at arm.com
Mon Oct 1 12:42:26 CEST 2018



On 28/09/2018, 02:43, "Wang, Yipeng1" <yipeng1.wang at intel.com> wrote:

    Some general comments for  the various __atomic_store/load added,
    
    1. Although it passes the compiler check, I just want to confirm: should we use the GCC/clang builtins
    directly, or are there higher-level APIs in DPDK for atomic operations?
[Ola] Adding "higher level" APIs on top of the basic language/compiler support is not a good idea.
There is an essentially unbounded set of base types for atomic operations; multiply that by all the different kinds of atomic operations (e.g. load, store, fetch_add, add, CAS, etc.) and the different memory orderings, and you end up with a very large API, of which likely only a small but irregular subset would ever be used. That is a lot of work for little gain, and every single item in the API would be difficult to test.

For a compiler that does not support the __atomic builtins, one could write an __atomic emulation layer. But I think the GCC __atomic interface is already the ideal source-code abstraction.

    
    2. We believe the compiler will translate the __atomic store/load to regular MOV instructions on
    Total Store Order architectures (e.g. x86_64). But we ran the perf test on x86 and here is the relative slowdown on
    lookup compared to master head. I am not sure if the performance drop comes from the atomic builtins.
[Ola] Any performance difference is most likely not from the use of atomic builtins, because, as you write, on x86 they should translate to normal loads and stores in most situations. But the code and data structures have changed, so there are differences in e.g. memory access patterns; couldn't that explain the performance difference?

    
    Keysize | single lookup | bulk lookup
    4 | 0.93 | 0.95
    8 | 0.95 | 0.96
    16 | 0.97 | 0.96
    32 | 0.97 | 1.00
    48 | 1.03 | 0.99
    64 | 1.04 | 0.98
    9 | 0.91 | 0.96
    13 | 0.97 | 0.98
    37 | 1.04 | 1.03
    40 | 1.02 | 0.98
    
    Other comments inlined.
    
    >-----Original Message-----
    >Only race condition that can occur is -  using the key store element
    >before the key write is completed. Hence, while inserting the element
    >the release memory order is used. Any other race condition is caught
    >by the key comparison. Memory orderings are added only where needed.
    >For ex: reads in the writer's context do not need memory ordering
    >as there is a single writer.
    >
    [Wang, Yipeng] I assume this commit works together with the next one to enable
    read-write concurrency on relaxed-memory-ordering machines. This commit by itself does not do that, right?
    If so, should we merge the two commits,
    or at least explain this a little more in the commit message?
    
    >key_idx in the bucket entry and pdata in the key store element are
    >used for synchronisation. key_idx is used to release an inserted
    >entry in the bucket to the reader. Use of pdata for synchronisation
    >is required because updating an existing entry changes only
    >pdata, without updating key_idx.
    >
    >
    >-/* Search a key from bucket and update its data */
    >+/* Search a key from bucket and update its data.
    >+ * Writer holds the lock before calling this.
    >+ */
    > static inline int32_t
    > search_and_update(const struct rte_hash *h, void *data, const void *key,
    > 	struct rte_hash_bucket *bkt, hash_sig_t sig, hash_sig_t alt_hash)
    >@@ -499,8 +501,13 @@ search_and_update(const struct rte_hash *h, void *data, const void *key,
    > 			k = (struct rte_hash_key *) ((char *)keys +
    > 					bkt->key_idx[i] * h->key_entry_size);
    > 			if (rte_hash_cmp_eq(key, k->key, h) == 0) {
    >-				/* Update data */
    >-				k->pdata = data;
    >+				/* 'pdata' acts as the synchronization point
    >+				 * when an existing hash entry is updated.
    >+				 * Key is not updated in this case.
    >+				 */
    [Wang, Yipeng] I did not quite understand why we need synchronization for the hash data update.
    Since the pdata write is already atomic, the lookup will read out either the stale data or the new data,
    which should be fine without synchronization.
    Is it to ensure the order of multiple reads in the lookup threads?
[Ola] If pdata is used as a reference to access other shared data, you need to ensure that the access of pdata and the accesses to that other data are ordered appropriately (e.g. with acquire/release). Reading a new pdata but stale associated data would be a bad thing.

    
    >@@ -1243,11 +1289,19 @@ __rte_hash_lookup_bulk(const struct rte_hash *h, const void **keys,
    > 		while (sec_hitmask[i]) {
    > 			uint32_t hit_index = __builtin_ctzl(sec_hitmask[i]);
    >
    >-			uint32_t key_idx = secondary_bkt[i]->key_idx[hit_index];
    >+			uint32_t key_idx =
    >+			__atomic_load_n(
    >+				&secondary_bkt[i]->key_idx[hit_index],
    >+				__ATOMIC_ACQUIRE);
    > 			const struct rte_hash_key *key_slot =
    > 				(const struct rte_hash_key *)(
    > 				(const char *)h->key_store +
    > 				key_idx * h->key_entry_size);
    >+
    >+			if (key_idx != EMPTY_SLOT)
    [Wang, Yipeng] I think even the current code needs this EMPTY_SLOT check.
    Could you split it out as a bug-fix commit?
    
    
    


