[PATCH 02/12] powerpc: qspinlock: Mark accesses to qnode lock checks
Nicholas Piggin
npiggin at gmail.com
Tue May 9 12:02:57 AEST 2023
On Mon May 8, 2023 at 12:01 PM AEST, Rohan McLure wrote:
> The powerpc implementation of qspinlocks will both poll and spin on the
> bitlock guarding a qnode. Mark these accesses with READ_ONCE to convey
> to KCSAN that polling is intentional here.
Yeah, and it obviously pairs with the WRITE_ONCE, so a comment isn't really
necessary.
Reviewed-by: Nicholas Piggin <npiggin at gmail.com>
>
> Signed-off-by: Rohan McLure <rmclure at linux.ibm.com>
> ---
> arch/powerpc/lib/qspinlock.c | 4 ++--
> 1 file changed, 2 insertions(+), 2 deletions(-)
>
> diff --git a/arch/powerpc/lib/qspinlock.c b/arch/powerpc/lib/qspinlock.c
> index 9cf93963772b..579290d55abf 100644
> --- a/arch/powerpc/lib/qspinlock.c
> +++ b/arch/powerpc/lib/qspinlock.c
> @@ -435,7 +435,7 @@ static __always_inline bool yield_to_prev(struct qspinlock *lock, struct qnode *
>
> smp_rmb(); /* See __yield_to_locked_owner comment */
>
> - if (node->locked) {
> + if (READ_ONCE(node->locked)) {
> yield_to_preempted(prev_cpu, yield_count);
> spin_begin();
> return preempted;
> @@ -584,7 +584,7 @@ static __always_inline void queued_spin_lock_mcs_queue(struct qspinlock *lock, b
>
> /* Wait for mcs node lock to be released */
> spin_begin();
> - while (node->locked) {
> + while (READ_ONCE(node->locked)) {
> spec_barrier();
>
> if (yield_to_prev(lock, node, old, paravirt))
> --
> 2.37.2