[PATCH v2 03/11] asm-generic/mmiowb: Mark accesses to fix KCSAN warnings
Rohan McLure
rmclure at linux.ibm.com
Tue May 23 10:28:35 AEST 2023
On Wed May 10, 2023 at 1:31 PM AEST, Rohan McLure wrote:
> Prior to this patch, data races of the following forms are detectable
> by KCSAN:
>
> [1] Asynchronous calls to mmiowb_set_pending() from an interrupt context
> or otherwise outside of a critical section
> [2] Interrupted critical sections, where the interrupt will itself
> acquire a lock
>
> In case [1], the calling context does not need an mmiowb() call to be
> issued; if it did, it would issue one itself. Such calls to
> mmiowb_set_pending() are either idempotent or no-ops.
>
> In case [2], irrespective of when the interrupt occurs, the interrupt
> will acquire and release its locks before returning, so nesting_count
> remains balanced. In the worst case, the interrupted critical section
> observes an mmiowb to be pending during its mmiowb_spin_unlock() call
> and is then interrupted, leading to an extraneous call to mmiowb().
> This data race is clearly innocuous.
>
> Mark all potentially asynchronous memory accesses with READ_ONCE or
> WRITE_ONCE, including increments and decrements of nesting_count. This
> removes the KCSAN warnings at consumers' call sites.
>
> Signed-off-by: Rohan McLure <rmclure at linux.ibm.com>
> Reported-by: Michael Ellerman <mpe at ellerman.id.au>
> Reported-by: Gautam Menghani <gautam at linux.ibm.com>
> Tested-by: Gautam Menghani <gautam at linux.ibm.com>
> Acked-by: Arnd Bergmann <arnd at arndb.de>
> ---
> v2: Remove extraneous READ_ONCE in mmiowb_set_pending for nesting_count
> ---
> include/asm-generic/mmiowb.h | 14 +++++++++-----
> 1 file changed, 9 insertions(+), 5 deletions(-)
>
> diff --git a/include/asm-generic/mmiowb.h b/include/asm-generic/mmiowb.h
> index 5698fca3bf56..6dea28c8835b 100644
> --- a/include/asm-generic/mmiowb.h
> +++ b/include/asm-generic/mmiowb.h
> @@ -37,25 +37,29 @@ static inline void mmiowb_set_pending(void)
>  	struct mmiowb_state *ms = __mmiowb_state();
> 
>  	if (likely(ms->nesting_count))
> -		ms->mmiowb_pending = ms->nesting_count;
> +		WRITE_ONCE(ms->mmiowb_pending, ms->nesting_count);
>  }
> 
>  static inline void mmiowb_spin_lock(void)
>  {
>  	struct mmiowb_state *ms = __mmiowb_state();
> -	ms->nesting_count++;
> +
> +	/* Increment need not be atomic. Nestedness is balanced over interrupts. */
> +	WRITE_ONCE(ms->nesting_count, READ_ONCE(ms->nesting_count) + 1);
>  }
> 
>  static inline void mmiowb_spin_unlock(void)
>  {
>  	struct mmiowb_state *ms = __mmiowb_state();
> +	u16 pending = READ_ONCE(ms->mmiowb_pending);
> 
> -	if (unlikely(ms->mmiowb_pending)) {
> -		ms->mmiowb_pending = 0;
> +	WRITE_ONCE(ms->mmiowb_pending, 0);
> +	if (unlikely(pending)) {
>  		mmiowb();
>  	}
> 
> -	ms->nesting_count--;
> +	/* Decrement need not be atomic. Nestedness is balanced over interrupts. */
> +	WRITE_ONCE(ms->nesting_count, READ_ONCE(ms->nesting_count) - 1);
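To make the case [2] reasoning above concrete, a minimal stand-alone sketch
follows. It is not kernel code: it is a plain userspace program that replays
the described worst-case interleaving step by step, with simplified stand-ins
for struct mmiowb_state and mmiowb(); only the order of the accesses mirrors
the quoted functions.

#include <stdio.h>

struct mmiowb_state {
	unsigned int nesting_count;
	unsigned int mmiowb_pending;
};

static struct mmiowb_state ms;
static unsigned int mmiowb_calls;

/* Stand-in for the real mmiowb() barrier: just count how often it runs. */
static void mmiowb(void)
{
	mmiowb_calls++;
}

int main(void)
{
	/* Task context: spin_lock() taken, one MMIO write performed. */
	ms.nesting_count = 1;			/* mmiowb_spin_lock()      */
	ms.mmiowb_pending = ms.nesting_count;	/* mmiowb_set_pending()    */

	/* Task context: mmiowb_spin_unlock() starts and snapshots pending. */
	unsigned int pending = ms.mmiowb_pending;	/* reads 1 */

	/* --- interrupt arrives here and does its own MMIO under a lock --- */
	ms.nesting_count++;			/* irq: mmiowb_spin_lock()   */
	ms.mmiowb_pending = ms.nesting_count;	/* irq: mmiowb_set_pending() */
	ms.mmiowb_pending = 0;			/* irq: mmiowb_spin_unlock() */
	mmiowb();				/* irq: barrier issued       */
	ms.nesting_count--;			/* irq: nesting rebalanced   */
	/* --- interrupt returns --- */

	/* Task context resumes with its stale pending == 1 snapshot. */
	ms.mmiowb_pending = 0;
	if (pending)
		mmiowb();	/* extraneous, but harmless, second barrier */
	ms.nesting_count--;

	printf("mmiowb() issued %u times; the second is redundant but benign\n",
	       mmiowb_calls);
	return 0;
}

Run sequentially, it reports two mmiowb() calls; the second is the extraneous,
benign barrier the commit message refers to.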
I still think the nesting_count accesses don't need WRITE_ONCE/READ_ONCE.
data_race() maybe, but I don't know whether it's even classed as a data
race. How does KCSAN handle/annotate preempt_count, for example?
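For concreteness, a rough and untested sketch of that data_race() alternative
for the nesting_count accesses. The data_race() define, the state stub, and
the _nesting helper name below are simplified stand-ins so the snippet
compiles on its own; the real data_race() macro lives in <linux/compiler.h>
and tells KCSAN the race is intentional and benign without forcing a marked
(volatile) access the way READ_ONCE()/WRITE_ONCE() do. This is not proposed
as the actual patch.

#include <stdint.h>

/*
 * Trivial stand-in: the real data_race() also suppresses KCSAN
 * instrumentation around the expression.
 */
#define data_race(expr)	(expr)

struct mmiowb_state {
	uint16_t nesting_count;
	uint16_t mmiowb_pending;
};

/* Stand-in for the per-CPU __mmiowb_state() lookup. */
static struct mmiowb_state mmiowb_state_stub;
#define __mmiowb_state()	(&mmiowb_state_stub)

static inline void mmiowb_spin_lock(void)
{
	struct mmiowb_state *ms = __mmiowb_state();

	/* Plain increment, documented as a known-benign race rather than marked. */
	data_race(ms->nesting_count++);
}

static inline void mmiowb_spin_unlock_nesting(void)
{
	struct mmiowb_state *ms = __mmiowb_state();

	/* Only the nesting_count side of mmiowb_spin_unlock() is sketched here. */
	data_race(ms->nesting_count--);
}

int main(void)
{
	mmiowb_spin_lock();
	mmiowb_spin_unlock_nesting();
	return (int)__mmiowb_state()->nesting_count;
}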
Thanks,
Nick