[PATCH] arch/powerpc/include/asm/barrier.h: redefine rmb and wmb to lwsync

Christophe Leroy christophe.leroy at csgroup.eu
Wed Feb 22 19:40:44 AEDT 2023



On 22/02/2023 at 09:16, Kautuk Consul wrote:
> On Wed, Feb 22, 2023 at 07:02:34AM +0000, Christophe Leroy wrote:
>>
>>
>> On 22/02/2023 at 07:01, Kautuk Consul wrote:
>>> A link from ibm.com states:
>>> "Ensures that all instructions preceding the call to __lwsync
>>>    complete before any subsequent store instructions can be executed
>>>    on the processor that executed the function. Also, it ensures that
>>>    all load instructions preceding the call to __lwsync complete before
>>>    any subsequent load instructions can be executed on the processor
>>>    that executed the function. This allows you to synchronize between
>>>    multiple processors with minimal performance impact, as __lwsync
>>>    does not wait for confirmation from each processor."
>>>
>>> That's why smp_rmb() and smp_wmb() are defined as lwsync.
>>> The same reasoning applies to parallel pipeline execution
>>> within each PowerPC processor, so use the lwsync instruction
>>> for rmb() and wmb() on the PPC architectures that support it.
>>>
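(For readers following the thread: below is a minimal stand-alone sketch of
the message-passing pattern the description above refers to. It is not part
of the patch; the writer()/reader() names and the open-coded lwsync are
illustrative only, and it only builds for a powerpc target.)

	int data;
	volatile int flag;

	/* Producer: lwsync orders the data store before the flag store. */
	void writer(void)
	{
		data = 42;
		asm volatile("lwsync" : : : "memory");
		flag = 1;
	}

	/* Consumer: lwsync orders the flag load before the data load. */
	int reader(void)
	{
		while (!flag)
			;
		asm volatile("lwsync" : : : "memory");
		return data;
	}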
>>> Also removed some useless spaces.
>>>
>>> Signed-off-by: Kautuk Consul <kconsul at linux.vnet.ibm.com>
>>> ---
>>>    arch/powerpc/include/asm/barrier.h | 12 +++++++++---
>>>    1 file changed, 9 insertions(+), 3 deletions(-)
>>>
>>> diff --git a/arch/powerpc/include/asm/barrier.h b/arch/powerpc/include/asm/barrier.h
>>> index e80b2c0e9315..553f5a5d20bd 100644
>>> --- a/arch/powerpc/include/asm/barrier.h
>>> +++ b/arch/powerpc/include/asm/barrier.h
>>> @@ -41,11 +41,17 @@
>>>    
>>>    /* The sub-arch has lwsync */
>>>    #if defined(CONFIG_PPC64) || defined(CONFIG_PPC_E500MC)
>>> -#    define SMPWMB      LWSYNC
>>
>> This line shouldn't be changed by your patch
> I mentioned it in the commit message,
> but I'll remove it if you want. I did this because the rest
> of the file doesn't use these spaces.
>>
>>> +#undef rmb
>>> +#undef wmb
>>
>> I see no point in defining something and then undefining it a few lines
>> later.
>>
>> Instead, why not do:
>>
>> #define mb()   __asm__ __volatile__ ("sync" : : : "memory")
>>
>> #if defined(CONFIG_PPC64) || defined(CONFIG_PPC_E500MC)
>> #define rmb()	({__asm__ __volatile__ ("lwsync" : : : "memory"); })
>> #define wmb()	({__asm__ __volatile__ ("lwsync" : : : "memory"); })
>> #else
>> #define rmb()  __asm__ __volatile__ ("sync" : : : "memory")
>> #define wmb()  __asm__ __volatile__ ("sync" : : : "memory")
>> #endif
>>
> I thought of doing that earlier, but there is one more #elif
> for CONFIG_BOOKE and then the #else.
> That way we would need three different #defines for rmb and wmb,
> and I wanted to avoid that.

No, I don't mean to use the existing #ifdef/elif/else.

Define an #ifdef/#else block dedicated to the xmb macros.

Something like this:

@@ -35,9 +35,15 @@
   * However, on CPUs that don't support lwsync, lwsync actually maps to a
   * heavy-weight sync, so smp_wmb() can be a lighter-weight eieio.
   */
+#if defined(CONFIG_PPC64) || defined(CONFIG_PPC_E500MC)
+#define __mb()   asm volatile ("lwsync" : : : "memory")
+#define __rmb()  asm volatile ("lwsync" : : : "memory")
+#define __wmb()  asm volatile ("lwsync" : : : "memory")
+#else
  #define __mb()   __asm__ __volatile__ ("sync" : : : "memory")
  #define __rmb()  __asm__ __volatile__ ("sync" : : : "memory")
  #define __wmb()  __asm__ __volatile__ ("sync" : : : "memory")
+#endif

  /* The sub-arch has lwsync */
  #if defined(CONFIG_PPC64) || defined(CONFIG_PPC_E500MC)
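Since the concern above was about the three-way SMPWMB selection, here is a
rough sketch (my assumption of the resulting layout, not a tested patch)
showing the dedicated block sitting next to the existing #if/#elif/#else,
which stays untouched:

	#if defined(CONFIG_PPC64) || defined(CONFIG_PPC_E500MC)
	#define __mb()   asm volatile ("lwsync" : : : "memory")
	#define __rmb()  asm volatile ("lwsync" : : : "memory")
	#define __wmb()  asm volatile ("lwsync" : : : "memory")
	#else
	#define __mb()   asm volatile ("sync" : : : "memory")
	#define __rmb()  asm volatile ("sync" : : : "memory")
	#define __wmb()  asm volatile ("sync" : : : "memory")
	#endif

	/* The sub-arch has lwsync */
	#if defined(CONFIG_PPC64) || defined(CONFIG_PPC_E500MC)
	#    define SMPWMB      LWSYNC
	#elif defined(CONFIG_BOOKE)
	#    define SMPWMB      mbar
	#else
	#    define SMPWMB      eieio
	#endif

(Whether __mb() itself can safely become lwsync is a separate question; see
below.)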



>> By the way, why put it inside a ({ })?
> checkpatch.pl asks for ({}).
>> And I think nowadays we use "asm volatile", not "__asm__ __volatile__".
> I was just following what was already there in the file.
> I can change this if required.
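(For anyone wondering what the two spellings expand to, here is a trivial
sketch with made-up names; both forms emit the same instruction, the ({ })
variant simply wraps the asm in a GNU statement expression:)

	#define rmb_bare()	asm volatile("lwsync" : : : "memory")
	#define rmb_wrapped()	({ asm volatile("lwsync" : : : "memory"); })

	void example(void)
	{
		rmb_bare();	/* plain statement */
		rmb_wrapped();	/* same instruction, wrapped in ({ }) */
	}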
>>
>> Shouldn't you also consider the same for mb()?
> My change wasn't meant to address newer usages of asm volatile
> #defines. I just wanted to redefine the rmb and wmb #defines
> to lwsync.

That's my point: why not also redefine mb() to lwsync?
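(One thing worth keeping in mind when weighing that: lwsync does not order a
store against a later load. The store-buffering pattern below, with made-up
cpu0()/cpu1() names, is the case that still needs a full sync, which is the
usual reason mb() stays as sync.)

	int x, y;

	/* Runs on one CPU. */
	int cpu0(void)
	{
		x = 1;
		asm volatile("sync" : : : "memory");
		return y;
	}

	/* Runs concurrently on another CPU. */
	int cpu1(void)
	{
		y = 1;
		asm volatile("sync" : : : "memory");
		return x;
	}

	/* With sync, cpu0() and cpu1() cannot both return 0; with lwsync
	 * in their place, that outcome is permitted. */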

>>
>> Note that your series will conflict with b6e259297a6b ("powerpc/kcsan:
>> Memory barriers semantics") in powerpc/next tree.
> I thought of defining the __rmb and __wmb macros but decided against
> it because I don't completely understand kcsan.
> I used Linus' mainline tree, not powerpc/next.
> Can you tell me which branch or git repo I should base this patch on?

https://git.kernel.org/pub/scm/linux/kernel/git/powerpc/linux.git

The 'merge' branch is a merge of the 'master', 'fixes' and 'next' branches.

That's the branch to use, although it is not always in sync with 'fixes'
and 'next'; in that case, all you have to do is merge the 'next' and
'fixes' branches locally until it is done remotely.

But just using the 'next' branch also works most of the time.

Note that the 'next' branch should already be part of linux-next, so you
may also use linux-next if you prefer.

>>
>>> +/* Redefine rmb() to lwsync. */
>>
>> What's the added value of this comment? Isn't it obvious from the line
>> below that rmb() is being defined to lwsync? Please avoid useless comments.
> Sure.
>>
>>> +#define rmb()	({__asm__ __volatile__ ("lwsync" : : : "memory"); })
>>> +/* Redefine wmb() to lwsync. */
>>> +#define wmb()	({__asm__ __volatile__ ("lwsync" : : : "memory"); })
>>> +#define SMPWMB      LWSYNC
>>>    #elif defined(CONFIG_BOOKE)
>>> -#    define SMPWMB      mbar
>>
>> This line shouldn't be changed by your patch
>>
>>> +#define SMPWMB      mbar
>>>    #else
>>> -#    define SMPWMB      eieio
> Ok. Can change my patch.
>>
>> This line shouldn't be changed by your patch
>>
>>> +#define SMPWMB      eieio
>>>    #endif
> Sure. Will retain this too.
>>>    
>>>    /* clang defines this macro for a builtin, which will not work with runtime patching */
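As background for the SMPWMB hunks above: SMPWMB is kept as a bare token
(LWSYNC, mbar or eieio) rather than a full asm statement because, if I
remember the file correctly, it is stringified where the SMP write barrier
is built, roughly like this:

	#define __smp_wmb()	__asm__ __volatile__ (stringify_in_c(SMPWMB) : : : "memory")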

