[RFC PATCH] lib: Introduce generic __cmpxchg_u64() and use it where needed

Paul E. McKenney paulmck at linux.ibm.com
Fri Nov 2 04:43:33 AEDT 2018


On Thu, Nov 01, 2018 at 06:14:32PM +0100, Peter Zijlstra wrote:
> On Thu, Nov 01, 2018 at 09:59:38AM -0700, Eric Dumazet wrote:
> > On 11/01/2018 09:32 AM, Peter Zijlstra wrote:
> > 
> > >> Anyhow, if the atomic maintainers are willing to stand up and state for
> > >> the record that the atomic counters are guaranteed to wrap modulo 2^n
> > >> just like unsigned integers, then I'm happy to take Paul's patch.
> > > 
> > > I myself am certainly relying on it.
> > 
> > Could we get uatomic_t support maybe?
> 
> Whatever for; it'd be the exact identical same functions as for
> atomic_t, except for a giant amount of code duplication to deal with the
> new type.
> 
> That is; today we merged a bunch of scripts that generate most of
> atomic*_t, so we could probably script uatomic*_t wrappers with minimal
> effort, but it would add several thousand lines of code to each compile
> for absolutely no reason whatsoever.
> 
> > This reminds me of this sooooo silly patch :/
> > 
> > https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/commit/?id=adb03115f4590baa280ddc440a8eff08a6be0cb7
> 
> Yes, that's stupid. UBSAN is just wrong there.

It would be good for UBSAN to treat atomic operations as guaranteed
two's complement, with no undefined behavior on signed integer overflow.
After all, even the C standard is willing to do this: C11 defines
atomic arithmetic on signed integers as two's complement with silent
wrap-around on overflow...

Ah, but don't we disable interrupts and fall back to normal arithmetic
for UP systems?  Hmmm...  We do so for atomic_add_return() even on
x86, it turns out:

static __always_inline int arch_atomic_add_return(int i, atomic_t *v)
{
	return i + xadd(&v->counter, i);
}

So UBSAN actually did have a point.  :-(

							Thanx, Paul



More information about the Linuxppc-dev mailing list