[dpdk-dev] [PATCH 1/5] bonding: replace spinlock with read/write lock

Iremonger, Bernard bernard.iremonger at intel.com
Thu May 26 18:24:17 CEST 2016


Hi Konstantin, 

<snip>
> > > > On 05/05/16 18:12, Stephen Hemminger wrote:
> > > > > On Thu,  5 May 2016 16:14:56 +0100 Bernard Iremonger
> > > > > <bernard.iremonger at intel.com> wrote:
> > > > >
> > > > >> Fixes: a45b288ef21a ("bond: support link status polling")
> > > > >> Signed-off-by: Bernard Iremonger <bernard.iremonger at intel.com>
> > > > >
> > > > > You know an uncontended reader/writer lock is significantly
> > > > > slower than a spinlock.
> > > > >
> > > >
> > > > As we can have multiple readers of the active slave list / primary
> > > > slave, basically any tx/rx burst call now needs to protect against
> > > > a device being removed/closed during its operation, since we
> > > > support hotplugging. In the worst case this could mean we have
> > > > 2 (rx+tx) * queues threads using the active slave list
> > > > simultaneously; in that case I would have thought that a spinlock
> > > > would have a much more significant effect on performance?
> > >
> > > Right, but the window where the shared variable is accessed is very
> > > small, and it is actually faster to use a spinlock for that.
> >
> > I don't think the window we hold the lock for is that small: say we
> > have a burst of 32 packets * (say) 50 cycles/pkt = ~1600 cycles that
> > each IO thread would stall for.
> > For me that's long enough to justify rwlock usage here, especially as
> > the DPDK rwlock price is not much bigger (as I remember) than a
> > spinlock - it is basically 1 CAS operation.
> 
> As another alternative we can have a spinlock per queue; then different IO
> threads doing RX/TX over different queues will not contend at all.
> Though the control thread would need to grab the locks for all configured queues :)
> 
> Konstantin
> 
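
For reference, the r/w lock scheme discussed above would look roughly
like this on the rx burst path. This is only a minimal sketch to
illustrate the locking pattern - the struct layouts and function name
below are simplified stand-ins, not the actual bonding driver code:

#include <rte_mbuf.h>
#include <rte_ethdev.h>
#include <rte_rwlock.h>

/* Simplified stand-ins: the real bond_dev_private and bond_rx_queue
 * carry many more fields. */
struct bond_dev_private {
	rte_rwlock_t rwlock;                     /* guards the active slave list */
	uint16_t active_slave_count;
	uint8_t active_slaves[RTE_MAX_ETHPORTS]; /* port ids of active slaves */
};

struct bond_rx_queue {
	uint16_t queue_id;
	struct bond_dev_private *dev_private;
};

static uint16_t
bond_ethdev_rx_burst(void *queue, struct rte_mbuf **bufs, uint16_t nb_pkts)
{
	struct bond_rx_queue *bd_rx_q = queue;
	struct bond_dev_private *internals = bd_rx_q->dev_private;
	uint16_t num_rx = 0;
	uint16_t i;

	/* IO threads take the lock shared, so bursts on different queues
	 * run in parallel; only the control path (slave add/remove) takes
	 * it exclusive via rte_rwlock_write_lock(). */
	rte_rwlock_read_lock(&internals->rwlock);

	for (i = 0; i < internals->active_slave_count && num_rx < nb_pkts; i++)
		num_rx += rte_eth_rx_burst(internals->active_slaves[i],
				bd_rx_q->queue_id, bufs + num_rx,
				nb_pkts - num_rx);

	rte_rwlock_read_unlock(&internals->rwlock);

	return num_rx;
}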

I am preparing a v2 patchset which uses a spinlock per queue.
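
A rough sketch of the per-queue locking follows (again only
illustrative - the field and function names here are assumptions,
not the v2 code):

#include <rte_ethdev.h>
#include <rte_spinlock.h>

struct bond_dev_private;

/* One lock per queue, so IO threads polling different queues never
 * contend with each other. */
struct bond_rx_queue {
	rte_spinlock_t lock;     /* taken by the IO thread around each burst */
	uint16_t queue_id;
	struct bond_dev_private *dev_private;
};

/* Fast path: each burst is wrapped in its own queue's lock, e.g.
 *
 *	rte_spinlock_lock(&bd_rx_q->lock);
 *	... rx burst over the active slaves ...
 *	rte_spinlock_unlock(&bd_rx_q->lock);
 *
 * Control path: as Konstantin notes, it has to grab every configured
 * queue's lock before changing the slave list. */
static void
bond_lock_all_rx_queues(struct rte_eth_dev *dev)
{
	uint16_t i;

	for (i = 0; i < dev->data->nb_rx_queues; i++) {
		struct bond_rx_queue *bd_rx_q = dev->data->rx_queues[i];

		rte_spinlock_lock(&bd_rx_q->lock);
	}
}

The unlock side mirrors the same loop once the slave list update is done.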

Regards,

Bernard.


