[dpdk-dev] [PATCH v1 1/2] spinlock: use wfe to reduce contention on aarch64
Gavin Hu
gavin.hu at arm.com
Fri Apr 24 09:07:40 CEST 2020
While acquiring a spinlock, cores repeatedly poll the lock variable.
This polling is replaced by the rte_wait_until_equal API, which on
aarch64 can use WFE to reduce contention on the lock's cache line.
Running the micro benchmarks and the testpmd and l3fwd traffic tests
on ThunderX2, Ampere eMAG80 and Arm N1SDP, everything went well and no
notable performance gain or degradation was measured.
Signed-off-by: Gavin Hu <gavin.hu at arm.com>
Reviewed-by: Ruifeng Wang <ruifeng.wang at arm.com>
Reviewed-by: Phil Yang <phil.yang at arm.com>
Reviewed-by: Steve Capper <steve.capper at arm.com>
Reviewed-by: Ola Liljedahl <ola.liljedahl at arm.com>
Reviewed-by: Honnappa Nagarahalli <honnappa.nagarahalli at arm.com>
Tested-by: Pavan Nikhilesh <pbhagavatula at marvell.com>
---
lib/librte_eal/include/generic/rte_spinlock.h | 5 ++---
1 file changed, 2 insertions(+), 3 deletions(-)
diff --git a/lib/librte_eal/include/generic/rte_spinlock.h b/lib/librte_eal/include/generic/rte_spinlock.h
index 87ae7a4f1..5cc123247 100644
--- a/lib/librte_eal/include/generic/rte_spinlock.h
+++ b/lib/librte_eal/include/generic/rte_spinlock.h
@@ -28,7 +28,7 @@
* The rte_spinlock_t type.
*/
typedef struct {
- volatile int locked; /**< lock status 0 = unlocked, 1 = locked */
+ volatile uint32_t locked; /**< lock status 0 = unlocked, 1 = locked */
} rte_spinlock_t;
/**
@@ -65,8 +65,7 @@ rte_spinlock_lock(rte_spinlock_t *sl)
while (!__atomic_compare_exchange_n(&sl->locked, &exp, 1, 0,
__ATOMIC_ACQUIRE, __ATOMIC_RELAXED)) {
- while (__atomic_load_n(&sl->locked, __ATOMIC_RELAXED))
- rte_pause();
+ rte_wait_until_equal_32(&sl->locked, 0, __ATOMIC_RELAXED);
exp = 0;
}
}
--
2.17.1
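For readers unfamiliar with the API the patch switches to, here is a
minimal sketch of the generic (non-WFE) fallback behaviour of
rte_wait_until_equal_32. The function name mirrors DPDK, but the
simplified body below is an illustrative assumption, not the actual
implementation:

```c
#include <stdint.h>

/* Illustrative sketch (assumption, not DPDK's real code): spin with
 * relaxed atomic loads until *addr holds the expected value. On
 * aarch64 builds with RTE_ARM_USE_WFE enabled, DPDK instead parks
 * the core in WFE so it wakes only when the monitored cache line is
 * written, cutting the coherence traffic this patch targets. */
static inline void
wait_until_equal_32(volatile uint32_t *addr, uint32_t expected)
{
	while (__atomic_load_n(addr, __ATOMIC_RELAXED) != expected)
		; /* real DPDK issues rte_pause() in this loop */
}
```

In rte_spinlock_lock() this replaces the open-coded inner load loop:
the outer compare-exchange still takes the lock with ACQUIRE ordering,
while the wait itself only needs relaxed loads.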