[dpdk-dev] [RFC] More changes for rte_mempool.h:__mempool_get_bulk()

Neil Horman nhorman at tuxdriver.com
Sun Sep 28 22:42:33 CEST 2014


On Sun, Sep 28, 2014 at 05:52:16PM +0000, Wiles, Roger Keith wrote:
> Here is a request for comment on the __mempool_get_bulk() routine. I believe I am seeing a few more issues in this routine; please look at the code below and see whether these changes address some concerns in how the ring is handled.
> 
> The first issue, I believe, is that cache->len should be increased by ret and not req, since we do not know whether ret == req. This also means cache->len may still not be enough to satisfy the request from the cache.
> 
ret == req should be guaranteed.  As documented, rte_ring_mc_dequeue_bulk, when
called with behavior == FIXED (which it is internally), returns 0 iff the entire
request was satisfied, so we can safely add req.  That said, I've not validated
that it always does what's documented, but if it doesn't, the fix should likely
be internal to the function, not external to it.
Neil

> The second issue is that, if you accept the change above, then we also have to account for it in the stats.
> 
> Let me know what you think.
> ++Keith
> ———
> 
> diff --git a/lib/librte_mempool/rte_mempool.h b/lib/librte_mempool/rte_mempool.h
> index 199a493..b1b1f7a 100644
> --- a/lib/librte_mempool/rte_mempool.h
> +++ b/lib/librte_mempool/rte_mempool.h
> @@ -945,9 +945,7 @@ __mempool_get_bulk(struct rte_mempool *mp, void **obj_table,
>                    unsigned n, int is_mc)
>  {
>         int ret;
> -#ifdef RTE_LIBRTE_MEMPOOL_DEBUG
> -       unsigned n_orig = n;
> -#endif
> +
>  #if RTE_MEMPOOL_CACHE_MAX_SIZE > 0
>         struct rte_mempool_cache *cache;
>         uint32_t index, len;
> @@ -979,7 +977,21 @@ __mempool_get_bulk(struct rte_mempool *mp, void **obj_table,
>                         goto ring_dequeue;
>                 }
>  
> -               cache->len += req;
> +               cache->len += ret;      // Adjust len by ret, not req, as ret may differ from req
> +
> +               if (cache->len < n) {
> +                       /*
> +                        * The updated cache->len may still be < n, as
> +                        * 'ret' may be zero or less than 'req'.
> +                        *
> +                        * Note:
> +                        * Ordering between the cache and the common pool
> +                        * could be an issue if cache->len != 0 and less than
> +                        * n, but in the normal case it should be OK. If the
> +                        * user needs to preserve the order of packets, he
> +                        * must set cache_size == 0.
> +                        */
> +                       goto ring_dequeue;
> +               }
>         }
>  
>         /* Now fill in the response ... */
> @@ -1002,9 +1014,12 @@ ring_dequeue:
>                 ret = rte_ring_sc_dequeue_bulk(mp->ring, obj_table, n);
>  
>         if (ret < 0)
> -               __MEMPOOL_STAT_ADD(mp, get_fail, n_orig);
> -       else
> +               __MEMPOOL_STAT_ADD(mp, get_fail, n);
> +       else {
>                 __MEMPOOL_STAT_ADD(mp, get_success, ret);
> +               // Catch the case where ret != n; adding zero is not a problem.
> +               __MEMPOOL_STAT_ADD(mp, get_fail, n - ret);
> +       }
>  
>         return ret;
>  }
> 
> Keith Wiles, Principal Technologist with CTO office, Wind River mobile 972-213-5533
> 
> 

