[PATCH] lib/mempool: rte_mempool_avail_count, fix return value bigger than mempool size
Yasin CANER
yasinncaner at gmail.com
Wed May 17 13:37:46 CEST 2023
Hello,

The mempool drivers are listed here:
https://doc.dpdk.org/guides/mempool/index.html

My app loads both *rte_mempool_ring* and *rte_mempool_stack*. According to the
documentation, rte_mempool_ring is the default mempool driver. The application
is started with the following EAL options:

"-d librte_mbuf.so -d librte_mempool.so -d librte_mempool_ring.so -d
librte_mempool_stack.so -d librte_mempool_bucket.so -d librte_kni.so"

and EAL logs the following at startup:
EAL: open shared lib librte_mbuf.so
EAL: open shared lib librte_mempool.so
EAL: open shared lib librte_mempool_ring.so
EAL: open shared lib librte_mempool_stack.so
EAL: lib.stack log level changed from disabled to notice
EAL: open shared lib librte_mempool_bucket.so
EAL: open shared lib librte_kni.so
EAL: open shared lib DPDK_LIBS/lib/x86_64-linux-gnu/dpdk/pmds-23.0/librte_mempool_octeontx.so
EAL: pmd.octeontx.mbox log level changed from disabled to notice
EAL: pmd.mempool.octeontx log level changed from disabled to notice
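For context, as far as I understand the driver a given pool actually uses is
selected per mempool via its ops name (rte_mempool_ring registers the default
"ring_mp_mc" ops), not by the load order of the shared libraries. A minimal
sketch of creating one pool on the default ring-based driver and one on the
stack driver; the ops name "stack" is my assumption of what the stack driver
registers, and this is an illustration, not code from my application:

#include <stdlib.h>
#include <rte_debug.h>
#include <rte_lcore.h>
#include <rte_mbuf.h>
#include <rte_mempool.h>

/* Sketch: one pool on the default (ring-based) ops, one explicitly
 * bound to the stack driver. Pool sizes are arbitrary example values. */
static void
create_example_pools(void)
{
	struct rte_mempool *ring_pool, *stack_pool;

	/* Uses the default ops, i.e. the ring driver ("ring_mp_mc"). */
	ring_pool = rte_pktmbuf_pool_create("ring_pool", 8191, 256, 0,
			RTE_MBUF_DEFAULT_BUF_SIZE, rte_socket_id());

	/* Same, but explicitly selecting the stack driver by ops name. */
	stack_pool = rte_pktmbuf_pool_create_by_ops("stack_pool", 8191, 256, 0,
			RTE_MBUF_DEFAULT_BUF_SIZE, rte_socket_id(), "stack");

	if (ring_pool == NULL || stack_pool == NULL)
		rte_exit(EXIT_FAILURE, "cannot create example mempools\n");
}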
Morten Brørup <mb at smartsharesystems.com> wrote on Wed, 17 May 2023 at 12:04:
> *From:* Morten Brørup [mailto:mb at smartsharesystems.com]
> *Sent:* Wednesday, 17 May 2023 10.38
>
> *From:* Yasin CANER [mailto:yasinncaner at gmail.com]
> *Sent:* Wednesday, 17 May 2023 10.01
>
> Hello,
>
>
>
> I don't have full knowledge of how rte_mempool_ops_get_count() works, but
> there is another comment about it. Maybe it is related.
>
> /*
> * due to race condition (access to len is not locked), the
> * total can be greater than size... so fix the result
> */
>
>
>
> MB: This comment relates to the race when accessing the per-lcore cache
> counters.
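For reference, a minimal sketch of that summation as I understand it
(simplified, not the exact upstream code), showing where the unlocked reads
happen:

#include <rte_lcore.h>
#include <rte_mempool.h>

unsigned int
avail_count_sketch(const struct rte_mempool *mp)
{
	unsigned int count, lcore_id;

	/* Objects currently held by the driver (ring, stack, ...). */
	count = rte_mempool_ops_get_count(mp);

	if (mp->cache_size == 0)
		return count;

	/* Unlocked reads: another lcore may be moving objects between its
	 * cache and the driver right now, so an object can be counted in
	 * both places and the sum can momentarily exceed mp->size. */
	for (lcore_id = 0; lcore_id < RTE_MAX_LCORE; lcore_id++)
		count += mp->local_cache[lcore_id].len;

	/* Hence the clamp mentioned in the comment above. */
	return count > mp->size ? mp->size : count;
}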
>
>
>
> MB (continued): I have added more information regarding mempool drivers
> in Bugzilla: https://bugs.dpdk.org/show_bug.cgi?id=1229
>
>
>
>
>
> Best regards.
>
>
>
> Morten Brørup <mb at smartsharesystems.com> wrote on Tue, 16 May 2023 at 19:04:
>
> > From: Stephen Hemminger [mailto:stephen at networkplumber.org]
> > Sent: Tuesday, 16 May 2023 17.24
> >
> > On Tue, 16 May 2023 13:41:46 +0000
> > Yasin CANER <yasinncaner at gmail.com> wrote:
> >
> > > From: Yasin CANER <yasin.caner at ulakhaberlesme.com.tr>
> > >
> > > After running for a while, the rte_mempool_avail_count function returns a
> > > value larger than the mempool size, which causes rte_mempool_in_use_count
> > > to be miscalculated.
> > >
> > > This patch avoids that miscalculation of rte_mempool_in_use_count.
> > >
> > > Bugzilla ID: 1229
> > >
> > > Signed-off-by: Yasin CANER <yasin.caner at ulakhaberlesme.com.tr>
> >
> > An alternative that avoids some code duplication.
> >
> > diff --git a/lib/mempool/rte_mempool.c b/lib/mempool/rte_mempool.c
> > index cf5dea2304a7..2406b112e7b0 100644
> > --- a/lib/mempool/rte_mempool.c
> > +++ b/lib/mempool/rte_mempool.c
> > @@ -1010,7 +1010,7 @@ rte_mempool_avail_count(const struct rte_mempool *mp)
> > count = rte_mempool_ops_get_count(mp);
> >
> > if (mp->cache_size == 0)
> > - return count;
> > + goto exit;
>
> This bug can only occur here (i.e. with cache_size==0) if
> rte_mempool_ops_get_count() returns an incorrect value. The bug should be
> fixed there instead.
>
>
>
> MB (continued): The bug must be in the underlying mempool driver. I took a
> look at the ring and stack drivers, and they seem fine.
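For what it is worth, my understanding of that call path (a simplified
sketch, not the upstream sources) is that rte_mempool_ops_get_count() just
dispatches to the driver's get_count callback, which for the ring driver
reports the fill level of the backing ring:

#include <rte_mempool.h>

unsigned int
ops_get_count_sketch(const struct rte_mempool *mp)
{
	const struct rte_mempool_ops *ops = rte_mempool_get_ops(mp->ops_index);

	/* Ring driver: effectively the count of the backing ring;
	 * stack driver: the current depth of its internal stack. */
	return ops->get_count(mp);
}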
>
>
>
> >
> > for (lcore_id = 0; lcore_id < RTE_MAX_LCORE; lcore_id++)
> > count += mp->local_cache[lcore_id].len;
> > @@ -1019,6 +1019,7 @@ rte_mempool_avail_count(const struct rte_mempool *mp)
> > * due to race condition (access to len is not locked), the
> > * total can be greater than size... so fix the result
> > */
> > +exit:
> > if (count > mp->size)
> > return mp->size;
> > return count;
>
>
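For completeness, the miscalculation mentioned in the commit message follows
from the in-use count being derived from the available count. A sketch of the
relationship and of a cheap sanity check an application could add
(illustrative only; check_pool_counters() is a hypothetical helper, not a
DPDK API):

#include <rte_debug.h>
#include <rte_mempool.h>

/* rte_mempool_in_use_count() is essentially mp->size minus the available
 * count. Both values are unsigned, so if the available count ever exceeds
 * mp->size the subtraction wraps around and "in use" becomes a huge number. */
void
check_pool_counters(const struct rte_mempool *mp)
{
	unsigned int avail = rte_mempool_avail_count(mp);
	unsigned int in_use = rte_mempool_in_use_count(mp);

	RTE_ASSERT(avail <= mp->size);
	RTE_ASSERT(avail + in_use == mp->size);
}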