[dpdk-dev] [PATCH v2 5/7] hash: add extendable bucket feature

Wang, Yipeng1 yipeng1.wang at intel.com
Fri Sep 28 19:35:57 CEST 2018


> -----Original Message-----
> From: Honnappa Nagarahalli [mailto:Honnappa.Nagarahalli at arm.com]
> Sent: Thursday, September 27, 2018 12:22 PM
> To: Richardson, Bruce <bruce.richardson at intel.com>
> Cc: Wang, Yipeng1 <yipeng1.wang at intel.com>; dev at dpdk.org;
> michel at digirati.com.br
> Subject: RE: [PATCH v2 5/7] hash: add extendable bucket feature
> 
> > > > +/* Allocate same number of extendable buckets */
> > > IMO, we are allocating too much memory to support this feature,
> > > especially when we claim that keys ending up in the extendable
> > > table are a rare occurrence. By doubling the memory we are
> > > effectively saying that the main table might have 50% utilization.
> > > It will also significantly increase the cycles required to iterate
> > > the complete hash table (in the rte_hash_iterate API) even when we
> > > expect that the extendable table contains very few entries.
> > >
> > > I am wondering if we can provide options to control the amount of
> > > extra memory that gets allocated, and make the memory allocation
> > > dynamic (or on an on-demand basis). I think this also goes well
> > > with the general direction DPDK is taking - allocate resources as
> > > needed rather than allocating all the resources during
> > > initialization.
> > >
> >
> > Given that adding new entries should not normally be a fast-path
> > function, how about allowing memory allocation in the add operation
> > itself? Why not initialize with a fairly small number of extra
> > bucket entries, and then, each time they are all used, double the
> > number of entries? That will give efficient resource scaling, I
> > think.
> >
> +1
> 'small number of extra bucket entries' == 5% of total capacity requested
> (assuming cuckoo hash will provide 95% efficiency)
> 
> > /Bruce
 [Wang, Yipeng] 
Thanks for the comments.
We allocate as many extendable buckets as the table size at creation
because the purpose is to provide a capacity guarantee even in the worst
case (all keys colliding into the same bucket).
Applications (e.g. telco workloads) that require a 100% capacity guarantee
can then rely on insertion always succeeding until the specified table
size is reached.
With dynamic memory allocation, or with fewer buckets, this guarantee is
broken (even if failures would be very rare): a dynamic allocation can
simply fail at insertion time.
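
For illustration only, the on-demand doubling suggested above might look
roughly like the sketch below; none of the names (ext_bkt, ext_pool,
ext_bucket_get) come from this patch. The point is that the rte_zmalloc()
call sits on the insert path and can return NULL, which is exactly where
the capacity guarantee would break:

#include <stdint.h>
#include <rte_malloc.h>
#include <rte_memory.h>

struct ext_bkt {
	/* key-index/signature slots elided for brevity */
	struct ext_bkt *next;
};

struct ext_pool {
	struct ext_bkt *free_list; /* currently unused buckets */
	uint32_t capacity;         /* buckets allocated so far, > 0 */
};

/* Take a free bucket; when the pool is exhausted, allocate another
 * chunk equal to the current capacity (i.e. double the total).
 * Allocating a fresh chunk, rather than calling rte_realloc(), keeps
 * pointers into existing buckets valid. */
static struct ext_bkt *
ext_bucket_get(struct ext_pool *p)
{
	uint32_t i;

	if (p->free_list == NULL) {
		struct ext_bkt *chunk = rte_zmalloc(NULL,
				p->capacity * sizeof(*chunk),
				RTE_CACHE_LINE_SIZE);
		if (chunk == NULL)
			return NULL; /* insert fails: guarantee broken */
		for (i = 0; i < p->capacity; i++) {
			chunk[i].next = p->free_list;
			p->free_list = &chunk[i];
		}
		p->capacity *= 2;
	}
	struct ext_bkt *b = p->free_list;
	p->free_list = b->next;
	b->next = NULL;
	return b;
}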

Users who do not need this guarantee can disable the feature; given that
the cuckoo algorithm already achieves very high utilization, they usually
do not need the extendable buckets. A minimal creation sketch follows.
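
As a rough sketch (assuming the extra_flag name this patch series
introduces, RTE_HASH_EXTRA_FLAGS_EXT_TABLE), enabling or disabling the
guarantee is a single flag at creation time; the function and table names
here are hypothetical:

#include <stdint.h>
#include <rte_hash.h>
#include <rte_jhash.h>

static struct rte_hash *
create_table(uint32_t entries, int guarantee_capacity)
{
	struct rte_hash_parameters params = {
		.name = "demo_tbl",          /* hypothetical table name */
		.entries = entries,
		.key_len = sizeof(uint32_t),
		.hash_func = rte_jhash,
		.hash_func_init_val = 0,
		.socket_id = 0,
		/* With the flag set, rte_hash_add_key() should not hit
		 * -ENOSPC until 'entries' keys are stored, even if every
		 * key lands in the same bucket; with it clear, a
		 * pathological key set can exhaust a bucket early. */
		.extra_flag = guarantee_capacity ?
				RTE_HASH_EXTRA_FLAGS_EXT_TABLE : 0,
	};

	return rte_hash_create(&params);
}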


