[dpdk-dev] [PATCH 1/5] bonding: replace spinlock with read/write lock

Ananyev, Konstantin konstantin.ananyev at intel.com
Fri May 13 19:10:43 CEST 2016



> -----Original Message-----
> From: dev [mailto:dev-bounces at dpdk.org] On Behalf Of Stephen Hemminger
> Sent: Friday, May 06, 2016 4:56 PM
> To: Doherty, Declan
> Cc: Iremonger, Bernard; dev at dpdk.org
> Subject: Re: [dpdk-dev] [PATCH 1/5] bonding: replace spinlock with read/write lock
> 
> On Fri, 6 May 2016 11:32:19 +0100
> Declan Doherty <declan.doherty at intel.com> wrote:
> 
> > On 05/05/16 18:12, Stephen Hemminger wrote:
> > > On Thu,  5 May 2016 16:14:56 +0100
> > > Bernard Iremonger <bernard.iremonger at intel.com> wrote:
> > >
> > >> Fixes: a45b288ef21a ("bond: support link status polling")
> > >> Signed-off-by: Bernard Iremonger <bernard.iremonger at intel.com>
> > >
> > > You know an uncontested reader/writer lock is significantly slower
> > > than a spinlock.
> > >
> >
> > As we can have multiple readers of the active slave list / primary
> > slave, basically any tx/rx burst call needs to protect against a device
> > being removed/closed during its operation now that we support
> > hotplugging. In the worst case this could mean we have 2 (rx+tx) * queues
> > readers possibly using the active slave list simultaneously; in that case I
> > would have thought that a spinlock would have a much more significant
> > effect on performance?
> 
> Right, but the window where the shared variable is accessed is very small,
> and it is actually faster to use a spinlock for that.

I don't think the window we hold the lock is that small: say we have
a burst of 32 packets * (say) 50 cycles/pkt = ~1600 cycles - each IO thread would stall.
For me that's long enough to justify rwlock usage here, especially as the
DPDK rwlock price is (as I remember) not much bigger than a spinlock's -
it is basically 1 CAS operation.
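
To make the trade-off concrete, below is a minimal sketch (not the actual
patch) of the usage pattern under discussion: every tx/rx burst takes the
read lock for the duration of the burst, while slave removal on hotplug
takes the write lock. The struct layout and helper names are illustrative
assumptions; only the rte_rwlock_*() and rte_eth_tx_burst() calls are the
real DPDK API.

	/*
	 * Illustrative sketch - struct and helpers are hypothetical,
	 * only rte_rwlock_*() and rte_eth_tx_burst() are DPDK API.
	 */
	#include <rte_ethdev.h>
	#include <rte_mbuf.h>
	#include <rte_rwlock.h>

	struct bond_private {
		rte_rwlock_t lock;            /* protects the active slave list */
		uint16_t active_slaves[8];    /* hypothetical fixed-size list */
		uint16_t active_slave_count;
	};

	/*
	 * Read side: every tx (and rx) burst on every queue holds the
	 * lock for the whole burst (~32 pkts * ~50 cycles/pkt), so many
	 * readers can be inside simultaneously - the case that favours
	 * a rwlock over a spinlock.
	 */
	static uint16_t
	bond_tx_burst(struct bond_private *bp, uint16_t queue_id,
		      struct rte_mbuf **pkts, uint16_t nb_pkts)
	{
		uint16_t i, sent = 0;

		rte_rwlock_read_lock(&bp->lock);
		for (i = 0; i < bp->active_slave_count && sent < nb_pkts; i++)
			sent += rte_eth_tx_burst(bp->active_slaves[i], queue_id,
						 pkts + sent, nb_pkts - sent);
		rte_rwlock_read_unlock(&bp->lock);
		return sent;
	}

	/*
	 * Write side: slave removal on hotplug is rare and can afford
	 * to wait for all in-flight bursts to drain.
	 */
	static void
	bond_remove_slave(struct bond_private *bp, uint16_t slave_id)
	{
		uint16_t i;

		rte_rwlock_write_lock(&bp->lock);
		for (i = 0; i < bp->active_slave_count; i++) {
			if (bp->active_slaves[i] == slave_id) {
				/* swap in the last entry and shrink the list */
				bp->active_slaves[i] =
					bp->active_slaves[--bp->active_slave_count];
				break;
			}
		}
		rte_rwlock_write_unlock(&bp->lock);
	}

(On the uncontended path the generic rte_rwlock_read_lock() implementation
comes down to a single rte_atomic32_cmpset() on the lock counter, which is
where the "1 CAS" estimate above comes from.)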

Konstantin
 

