[PATCH tip/locking/core v4 1/6] powerpc: atomic: Make *xchg and *cmpxchg a full barrier

Michael Ellerman mpe at ellerman.id.au
Mon Oct 26 13:20:01 AEDT 2015


On Wed, 2015-10-21 at 12:36 -0700, Paul E. McKenney wrote:

> On Wed, Oct 21, 2015 at 10:18:33AM +0200, Peter Zijlstra wrote:

> > On Tue, Oct 20, 2015 at 02:28:35PM -0700, Paul E. McKenney wrote:

> > > I am not seeing a sync there, but I really have to defer to the
> > > maintainers on this one.  I could easily have missed one.
> > 
> > So x86 implies a full barrier for everything that changes the CPL; and
> > some form of implied ordering seems a must if you change the privilege
> > level unless you tag every single load/store with the priv level at that
> > time, which seems the more expensive option.
> 
> And it is entirely possible that there is some similar operation
> somewhere in the powerpc entry/exit code.  I would not trust myself
> to recognize it, though.

> > So I suspect the typical implementation will flush all load/stores,
> > change the effective priv level and continue.
> > 
> > This can of course be implemented at a pure per CPU ordering (RCpc),
> > which would be in line with the rest of Power, in which case you do
> > indeed need an explicit sync to make it visible to other CPUs.
> > 
> > But yes, if Michael or Ben could clarify this it would be good.
> 
> :-) ;-) ;-)

Sorry guys, these threads are so long I tend not to read them very actively :}

Looking at the system call path, the straight line path does not include any
barriers. I can't see any hidden in macros either.

We also have an explicit sync in the switch_to() path, which suggests we
already know that the system call path is not a full barrier.

Also, looking at the architecture, section 1.5, which talks about the
synchronisation that occurs on system calls, defines nothing in terms of
memory ordering, and includes a programming note which says "Unlike the
Synchronize instruction, a context synchronizing operation does not affect the
order in which storage accesses are performed."

Whether that's actually how it's implemented I don't know, I'll see if I can
find out.

cheers
