[PATCH v3] eal: add seqlock
Mattias Rönnblom
hofors at lysator.liu.se
Sun Apr 3 08:51:39 CEST 2022
On 2022-04-02 20:15, Ola Liljedahl wrote:
> On 4/1/22 17:07, Mattias Rönnblom wrote:
>> +
>> +/**
>> + * End a read-side critical section.
>> + *
>> + * A call to this function marks the end of a read-side critical
>> + * section, for @p seqlock. The application must supply the sequence
>> + * number produced by the corresponding rte_seqlock_read_lock() (or,
>> + * in case of a retry, the rte_seqlock_read_tryunlock()) call.
>> + *
>> + * After this function has been called, the caller should not access
>> + * the protected data.
>> + *
>> + * In case this function returns true, the just-read data was
>> + * consistent and the set of atomic and non-atomic load operations
>> + * performed between rte_seqlock_read_lock() and
>> + * rte_seqlock_read_tryunlock() was atomic, as a whole.
>> + *
>> + * In case rte_seqlock_read_tryunlock() returns false, the data was
>> + * modified as it was being read and may be inconsistent, and thus
>> + * should be discarded. The @p begin_sn is updated with the
>> + * now-current sequence number.
>> + *
>> + * @param seqlock
>> + * A pointer to the seqlock.
>> + * @param begin_sn
>> + * The seqlock sequence number returned by
>> + * rte_seqlock_read_lock() (potentially updated in subsequent
>> + * rte_seqlock_read_tryunlock() calls) for this critical section.
>> + * @return
>> + * true or false, if the just-read seqlock-protected data was
>> + * consistent or inconsistent, respectively, at the time it was read.
>> + *
>> + * @see rte_seqlock_read_lock()
>> + */
>> +__rte_experimental
>> +static inline bool
>> +rte_seqlock_read_tryunlock(const rte_seqlock_t *seqlock, uint32_t *begin_sn)
>> +{
>> +	uint32_t end_sn;
>> +
>> +	/* make sure the data loads happen before the sn load */
>> +	rte_atomic_thread_fence(__ATOMIC_ACQUIRE);
>> +
>> +	end_sn = __atomic_load_n(&seqlock->sn, __ATOMIC_RELAXED);
>
> Since we are reading and potentially returning the sequence number here
> (for use when the read of the protected data is repeated), we need to
> use load-acquire. I assume it is not expected that the user will call
> rte_seqlock_read_lock() again.
>
Good point.
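
Something along these lines should fix it - keep the fence, which
orders the preceding data loads before the sn load, and make the sn
load itself an acquire, so that a failed attempt's retried data loads
aren't reordered ahead of it (an untested sketch against the v3 names):

__rte_experimental
static inline bool
rte_seqlock_read_tryunlock(const rte_seqlock_t *seqlock, uint32_t *begin_sn)
{
	uint32_t end_sn;

	/* make sure the data loads happen before the sn load */
	rte_atomic_thread_fence(__ATOMIC_ACQUIRE);

	/* Acquire, since on failure end_sn is handed back to the
	 * caller as the new begin_sn, and the retried data loads
	 * must not move ahead of this load.
	 */
	end_sn = __atomic_load_n(&seqlock->sn, __ATOMIC_ACQUIRE);

	if (unlikely(end_sn & 1 || *begin_sn != end_sn)) {
		*begin_sn = end_sn;
		return false;
	}

	return true;
}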
> Seeing this implementation, I might actually prefer the original
> implementation; I think it is cleaner.
Me too.
> But I would like for the begin
> function also to wait for an even sequence number; the end function
> would then only have to check for the same sequence number. This might
> improve performance a little bit, as readers won't perform one or
> several broken reads while a write is in progress. The function names
> are a different thing though.
>
An odd sn should be a rare case, provided the seqlock is used for
relatively low-frequency update scenarios, which is what I think it
should be designed for.
Waiting for an even sn in read_begin() would exclude the option for the
caller to defer reading the new data until some later time, in case it's
being written. That in turn would force even a single writer to make
sure its thread is not preempted, or risk blocking all worker lcores
attempting to read the protected data.
You could complete the above API with a read_trybegin() function to
address that issue, for those who care, but that would force some extra
complexity on the user.
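
For the sake of discussion, such a hypothetical function could look
something like this (name and exact semantics are just a suggestion):

__rte_experimental
static inline bool
rte_seqlock_read_trybegin(const rte_seqlock_t *seqlock, uint32_t *begin_sn)
{
	uint32_t sn;

	sn = __atomic_load_n(&seqlock->sn, __ATOMIC_ACQUIRE);

	/* An odd sn means a write is in progress; let the caller
	 * decide whether to spin or to defer the read.
	 */
	if (sn & 1)
		return false;

	*begin_sn = sn;

	return true;
}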
> The writer side behaves much more like a lock with mutual exclusion so
> write_lock/write_unlock makes sense.
>
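
For illustration, the writer side then looks like any other
lock-protected critical section (a sketch; the protected struct is made
up, and the seqlock is assumed to be initialized elsewhere with
rte_seqlock_init()):

static struct {
	uint64_t a;
	uint64_t b;
} protected_data;

static rte_seqlock_t lock;

static void
writer(void)
{
	/* serializes with other writers and makes the sn odd */
	rte_seqlock_write_lock(&lock);

	protected_data.a = 1;
	protected_data.b = 2;

	rte_seqlock_write_unlock(&lock);
}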
>> +
>> +	if (unlikely(end_sn & 1 || *begin_sn != end_sn)) {
>> +		*begin_sn = end_sn;
>> +		return false;
>> +	}
>> +
>> +	return true;
>> +}
>> +
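
And for completeness, the corresponding reader side with the v3 API,
retry loop included (a sketch, using the same made-up data as above):

static void
reader(void)
{
	uint64_t a, b;
	uint32_t sn;

	sn = rte_seqlock_read_lock(&lock);

	do {
		a = protected_data.a;
		b = protected_data.b;
	} while (!rte_seqlock_read_tryunlock(&lock, &sn));

	/* a and b now form a consistent snapshot */
}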