[PATCH 1/1] ring: safe partial ordering for head/tail update
Konstantin Ananyev
konstantin.ananyev at huawei.com
Wed Sep 24 13:50:56 CEST 2025
> > > > > Sure, I am talking about the MT scenario.
> > > > > I think I already provided an example: the DPDK mempool library (see below).
> > > > > In brief, it works like this:
> > > > > At init it allocates ring of N memory buffers and ring big enough to hold all of them.
> > > >
> > > > Sorry, I meant to say: "it allocates N memory buffers and a ring big enough to hold all of them".
> > > >
> > > > > Then it enqueues all the allocated memory buffers into the ring.
> > > > > mempool_get - retrieves (dequeues) buffers from the ring.
> > > > > mempool_put - puts them back (enqueues) to the ring.
> > > > > get() might fail (ENOMEM), while put() is expected to always succeed.
> > > But how does the thread which calls mempool_put() get hold of the memory buffers
> > > that were obtained using mempool_get() by some other thread? Or is this not the
> > > scenario you are worrying about?
> > > Is it rather that multiple threads independently call mempool_get() and then
> > > mempool_put() on their own buffers? And you are worried that a thread will fail to
> > > return (mempool_put) a buffer that it earlier allocated (mempool_get)? We could
> > > create a litmus test for that.
> >
> > Both scenarios are possible.
> > For the run-to-completion model, each thread usually does: allocate/use/free a group of mbufs.
> > For the pipeline model, one thread can allocate a bunch of mbufs, then pass them to another
> > thread (via another ring, for example) for further processing, and then releasing them.
> In the pipeline model, if the last stage (thread) frees (enqueues) buffers onto some
> ring buffer and the first stage (thread) allocates (dequeues) buffers from the same
> ring buffer, but there isn't any other type of synchronization between the threads,
> we can never guarantee that the first thread will be able to dequeue buffers, because
> it doesn't know whether the last thread has enqueued any buffers.
Yes, as I said above - for the mempool use-case: dequeue can fail, enqueue should always succeed.
The closest analogy: malloc() can fail, free() should never fail.
>
> However, enqueue ought to always succeed. We should be able to create a litmus
> test for that.
> Ring 1 is used as mempool, it initially contains capacity elements (full).
> Ring 2 is used as pipe between stages 1 and 2, it initially contains 0 elements (empty).
> Thread 1 allocates/dequeues a buffer from ring 1.
> Thread 1 enqueues that buffer onto ring 2.
> Thread 2 dequeues a buffer from ring 2.
> Thread 2 frees/enqueues that buffer onto ring 1. <<< this must succeed!
> Does this reflect the situation you worry about?
This is one of the possible scenarios.
As I said above - mempool_put() is expected to always be able to enqueue an element to the ring.
TBH, I am not sure what you are trying to prove with the litmus test.
Looking at the changes you proposed:
+	/*
+	 * Ensure the entries calculation was not based on a stale
+	 * and unsafe stail observation that causes underflow.
+	 */
+	if ((int)*entries < 0)
+		*entries = 0;
+
 	/* check that we have enough room in ring */
 	if (unlikely(n > *entries))
 		n = (behavior == RTE_RING_QUEUE_FIXED) ?
 				0 : *entries;

 	*new_head = *old_head + n;
 	if (n == 0)
 		return 0;
It is clear that with these changes, enqueue/dequeue might fail even
when there are available entries in the ring.
One simple alternative would probably be to introduce a loop instead of the 'if'
(keep re-reading the head and tail values till we get a valid result),
but again I am not sure it is a good way.
Konstantin