[PATCH v3] eal: add seqlock
Ola Liljedahl
ola.liljedahl at arm.com
Sat Apr 2 20:15:59 CEST 2022
On 4/1/22 17:07, Mattias Rönnblom wrote:
> +
> +/**
> + * End a read-side critical section.
> + *
> + * A call to this function marks the end of a read-side critical
> + * section, for @p seqlock. The application must supply the sequence
> + * number produced by the corresponding rte_seqlock_read_lock() (or,
> + * in case of a retry, the rte_seqlock_read_tryunlock()) call.
> + *
> + * After this function has been called, the caller should not access
> + * the protected data.
> + *
> + * In case this function returns true, the just-read data was
> + * consistent and the set of atomic and non-atomic load operations
> + * performed between rte_seqlock_read_lock() and
> + * rte_seqlock_read_tryunlock() were atomic, as a whole.
> + *
> + * In case rte_seqlock_read_tryunlock() returns false, the data was
> + * modified as it was being read and may be inconsistent, and thus
> + * should be discarded. The @p begin_sn is updated with the
> + * now-current sequence number.
> + *
> + * @param seqlock
> + * A pointer to the seqlock.
> + * @param begin_sn
> + * The seqlock sequence number returned by
> + * rte_seqlock_read_lock() (potentially updated in subsequent
> + * rte_seqlock_read_tryunlock() calls) for this critical section.
> + * @return
> + * true or false, if the just-read seqlock-protected data was consistent
> + * or inconsistent, respectively, at the time it was read.
> + *
> + * @see rte_seqlock_read_lock()
> + */
> +__rte_experimental
> +static inline bool
> +rte_seqlock_read_tryunlock(const rte_seqlock_t *seqlock, uint32_t *begin_sn)
> +{
> + uint32_t end_sn;
> +
> + /* make sure the data loads happen before the sn load */
> + rte_atomic_thread_fence(__ATOMIC_ACQUIRE);
> +
> + end_sn = __atomic_load_n(&seqlock->sn, __ATOMIC_RELAXED);
Since we read and potentially return the sequence number here (to be
used when the read of the protected data is retried), we need to use
load-acquire. I assume the user is not expected to call
rte_seqlock_read_lock() again.
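Something along these lines is what I have in mind (sketch only, keeping
the field and intrinsic names from the patch): keep the acquire fence
for the preceding data loads, but make the sn load itself an acquire as
well, so the re-read of the protected data in the retry case cannot be
reordered before it.

        /* make sure the data loads happen before the sn load */
        rte_atomic_thread_fence(__ATOMIC_ACQUIRE);

        /* load-acquire, so that retried data loads are ordered after
         * this read of the sequence number */
        end_sn = __atomic_load_n(&seqlock->sn, __ATOMIC_ACQUIRE);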
Seeing this implementation, I might actually prefer the original
implementation; I think it is cleaner. But I would like the begin
function to also wait for an even sequence number, so the end function
only has to check that the sequence number is the same. This might
improve performance a little, since readers won't perform one or more
broken reads while a write is in progress (rough sketch below). The
function names are a different matter though.
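Roughly what I mean, reusing the type and field names from the patch
(the names rte_seqlock_read_begin()/rte_seqlock_read_end() are only
placeholders for this sketch, not a naming proposal):

static inline uint32_t
rte_seqlock_read_begin(const rte_seqlock_t *seqlock)
{
        uint32_t sn;

        /* Wait until no write is in progress (odd sn means a writer
         * is active), so the reader starts from consistent data. */
        do {
                sn = __atomic_load_n(&seqlock->sn, __ATOMIC_ACQUIRE);
        } while (unlikely(sn & 1));

        return sn;
}

static inline bool
rte_seqlock_read_end(const rte_seqlock_t *seqlock, uint32_t begin_sn)
{
        /* make sure the data loads happen before the sn load */
        rte_atomic_thread_fence(__ATOMIC_ACQUIRE);

        /* The read was consistent iff no writer bumped sn meanwhile. */
        return __atomic_load_n(&seqlock->sn, __ATOMIC_RELAXED) == begin_sn;
}

On failure the reader simply calls the begin function again (which does
the load-acquire), which is why a relaxed sn load is enough here.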
The writer side behaves much more like a lock with mutual exclusion, so
write_lock/write_unlock makes sense.
> +
> + if (unlikely(end_sn & 1 || *begin_sn != end_sn)) {
> + *begin_sn = end_sn;
> + return false;
> + }
> +
> + return true;
> +}
> +
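For reference, this is how I read the intended reader-side usage with
the API as proposed (sketch only; the protected data and its seqlock
variable are made up for illustration):

        static rte_seqlock_t lock;    /* protects 'shared' */
        static struct data shared;    /* the seqlock-protected data */

        struct data copy;
        uint32_t sn;

        sn = rte_seqlock_read_lock(&lock);
        do {
                copy = shared; /* loads of the protected data */
        } while (!rte_seqlock_read_tryunlock(&lock, &sn));

That is, on a failed tryunlock the reader retries with the updated sn
rather than calling rte_seqlock_read_lock() again, which is why the
ordering of the sn load above matters.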