[PATCH v2 2/5] powerpc/watchdog: Tighten non-atomic read-modify-write access
Laurent Dufour
ldufour at linux.ibm.com
Sat Nov 6 03:17:25 AEDT 2021
On 04/11/2021 at 17:10, Nicholas Piggin wrote:
> Most updates to wd_smp_cpus_pending are under lock except the watchdog
> interrupt bit clear.
>
> This can race with non-atomic RMW updates to the mask under lock, which
> can happen in two instances:
>
> Firstly, another CPU may detect this one is stuck, remove it from the
> mask, and, the mask now being empty, re-fill it with non-atomic
> stores. This is okay because the re-fill would leave this CPU's bit
> clear anyway (this CPU is now stuck), so it doesn't matter that the
> unlocked bit clear update got "lost". Add a comment for this.
>
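To make the first case concrete, here is a self-contained userspace
sketch of why the lost clear is benign (the CPU numbers and mask values
are made up for illustration):

#include <stdio.h>

int main(void)
{
	unsigned long enabled = 0xfUL;    /* CPUs 0-3 enabled (hypothetical) */
	unsigned long stuck   = 1UL << 1; /* CPU 1 was just marked stuck */

	/*
	 * CPU 1's unlocked clear of its own pending bit may be overwritten
	 * by the re-fill's plain stores, but the re-fill recomputes the
	 * mask as enabled & ~stuck, which never contains CPU 1's bit.
	 */
	unsigned long pending = enabled & ~stuck;

	printf("pending = %#lx (bit 1 is clear either way)\n", pending);
	return 0;
}
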
> Secondly, another CPU may detect a different CPU is stuck and remove
> it from the pending mask with a non-atomic store to bytes which also
> include this CPU's bit. This case can result in the bit clear being
> lost, leaving the bit set again. This should be so rare that it hardly
> matters, but to make things simpler to reason about, just avoid the
> non-atomic access in that case.
>
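The second case is a classic lost update. A self-contained userspace
sketch of the interleaving (C11 atomics stand in for the hardware word
access so the example is well-defined C; the kernel's re-fill stores
are plain):

#include <stdatomic.h>
#include <stdio.h>

static _Atomic unsigned long pending;

int main(void)
{
	atomic_store(&pending, 0x3UL);          /* CPUs 0 and 1 pending */

	/* The locked path reads the word, intending to clear only bit 0: */
	unsigned long snapshot = atomic_load(&pending);

	/* Meanwhile this CPU's interrupt atomically clears its own bit 1: */
	atomic_fetch_and(&pending, ~(1UL << 1));

	/* The locked path stores its stale RMW result, resurrecting bit 1: */
	atomic_store(&pending, snapshot & ~(1UL << 0));

	printf("pending = %#lx (the clear of bit 1 was lost)\n",
	       atomic_load(&pending));
	return 0;
}

This single-threaded program replays one possible interleaving; the
patch replaces the mask-wide stores in set_cpu_stuck() with per-bit
atomic ops so this interleaving cannot happen there.
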
> Signed-off-by: Nicholas Piggin <npiggin at gmail.com>
Reviewed-by: Laurent Dufour <ldufour at linux.ibm.com>
> ---
> arch/powerpc/kernel/watchdog.c | 36 ++++++++++++++++++++++++----------
> 1 file changed, 26 insertions(+), 10 deletions(-)
>
> diff --git a/arch/powerpc/kernel/watchdog.c b/arch/powerpc/kernel/watchdog.c
> index be80071336a4..1d2623230297 100644
> --- a/arch/powerpc/kernel/watchdog.c
> +++ b/arch/powerpc/kernel/watchdog.c
> @@ -131,10 +131,10 @@ static void wd_lockup_ipi(struct pt_regs *regs)
> /* Do not panic from here because that can recurse into NMI IPI layer */
> }
>
> -static void set_cpumask_stuck(const struct cpumask *cpumask, u64 tb)
> +static bool set_cpu_stuck(int cpu, u64 tb)
> {
> - cpumask_or(&wd_smp_cpus_stuck, &wd_smp_cpus_stuck, cpumask);
> - cpumask_andnot(&wd_smp_cpus_pending, &wd_smp_cpus_pending, cpumask);
> + cpumask_set_cpu(cpu, &wd_smp_cpus_stuck);
> + cpumask_clear_cpu(cpu, &wd_smp_cpus_pending);
> /*
> * See wd_smp_clear_cpu_pending()
> */
> @@ -144,11 +144,9 @@ static void set_cpumask_stuck(const struct cpumask *cpumask, u64 tb)
> cpumask_andnot(&wd_smp_cpus_pending,
> &wd_cpus_enabled,
> &wd_smp_cpus_stuck);
> + return true;
> }
> -}
> -static void set_cpu_stuck(int cpu, u64 tb)
> -{
> - set_cpumask_stuck(cpumask_of(cpu), tb);
> + return false;
> }
>
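For readers skimming the diff, the reshaped helper ends up roughly like
this (the lines marked "assumed context" are hunk context elided from
the quote, filled in here by assumption; treat this as a reading aid,
not the exact upstream code):

static bool set_cpu_stuck(int cpu, u64 tb)
{
	cpumask_set_cpu(cpu, &wd_smp_cpus_stuck);
	cpumask_clear_cpu(cpu, &wd_smp_cpus_pending);
	/*
	 * See wd_smp_clear_cpu_pending()
	 */
	smp_mb();					/* assumed context */
	if (cpumask_empty(&wd_smp_cpus_pending)) {	/* assumed context */
		wd_smp_last_reset_tb = tb;		/* assumed context */
		cpumask_andnot(&wd_smp_cpus_pending,
			       &wd_cpus_enabled,
			       &wd_smp_cpus_stuck);
		return true;	/* caller learns the mask was refilled */
	}
	return false;
}

The bool return lets watchdog_smp_panic() stop iterating once the
pending mask has been emptied and refilled underneath it.
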
> static void watchdog_smp_panic(int cpu, u64 tb)
> @@ -177,15 +175,17 @@ static void watchdog_smp_panic(int cpu, u64 tb)
> * get a backtrace on all of them anyway.
> */
> for_each_cpu(c, &wd_smp_cpus_pending) {
> + bool empty;
> if (c == cpu)
> continue;
> + /* Take the stuck CPUs out of the watch group */
> + empty = set_cpu_stuck(c, tb);
> smp_send_nmi_ipi(c, wd_lockup_ipi, 1000000);
> + if (empty)
> + break;
> }
> }
>
> - /* Take the stuck CPUs out of the watch group */
> - set_cpumask_stuck(&wd_smp_cpus_pending, tb);
> -
> wd_smp_unlock(&flags);
>
> if (sysctl_hardlockup_all_cpu_backtrace)
> @@ -232,6 +232,22 @@ static void wd_smp_clear_cpu_pending(int cpu, u64 tb)
> return;
> }
>
> + /*
> + * All other updates to wd_smp_cpus_pending are performed under
> + * wd_smp_lock. All of them are atomic except the case where the
> + * mask becomes empty and is reset. This will not happen here because
> + * cpu was tested to be in the bitmap (above), and a CPU only clears
> + * its own bit. _Except_ in the case where another CPU has detected a
> + * hard lockup on our CPU and takes us out of the pending mask. So in
> + * normal operation there will be no race here, no problem.
> + *
> + * In the lockup case, this atomic clear-bit vs a store that refills
> + * other bits in the accessed word will not be a problem. The bit clear
> + * is atomic so it will not cause the store to get lost, and the store
> + * will never set this bit so it will not overwrite the bit clear. The
> + * only way for a stuck CPU to return to the pending bitmap is to
> + * become unstuck itself.
> + */
> cpumask_clear_cpu(cpu, &wd_smp_cpus_pending);
>
> /*
>
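
The new comment's "atomic clear-bit vs store" reasoning rests on
cpumask_clear_cpu() being an atomic read-modify-write. A rough
userspace model of such an operation, using a compare-and-swap loop to
stand in for powerpc's larx/stcx. sequence (illustrative only, not the
kernel's implementation):

#include <stdatomic.h>

/* Hypothetical model of an atomic clear_bit() on one mask word. */
static void model_clear_bit(_Atomic unsigned long *word, unsigned int bit)
{
	unsigned long old = atomic_load(word);

	/*
	 * If any store hits the word between the load and the CAS, the
	 * CAS fails and the loop retries against the fresh value. The
	 * clear therefore cannot wipe out concurrent updates to other
	 * bits, and only a store that itself sets this bit could undo
	 * the clear, which the re-fill never does for a stuck CPU.
	 */
	while (!atomic_compare_exchange_weak(word, &old,
					     old & ~(1UL << bit)))
		;
}

int main(void)
{
	_Atomic unsigned long pending = 0x3UL;

	model_clear_bit(&pending, 1);
	return atomic_load(&pending) == 0x1UL ? 0 : 1;
}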