[PATCH][RT][PPC64] Fix preempt unsafe paths accessing per_cpu variables

Benjamin Herrenschmidt benh at kernel.crashing.org
Sat Jul 19 13:53:33 EST 2008


> There's lots of semantics that are changed with -rt that should make
> everything still work ;-)  Some spinlocks remain real spinlocks, but we
> shouldn't have a problem with most being mutexes.
> 
> There are some cases that use per-CPU variables or other per-CPU
> actions that require a special CPU_LOCK to protect the data in a
> preemption mode. The slab.c code in -rt handles this.

Well, at least in my case there is a whole class of code that assumes
that, because the whole thing happens within a spinlock section at the
top level, it can not only access per_cpu variables using the __
variants (that part is easy), but can also add things bit by bit to
that per-CPU cache as it gets called at lower levels. It's not actually
prepared for migrating to another CPU right in the middle.
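
To illustrate the pattern I mean (a hypothetical sketch in the style of
the powerpc TLB-batch code, not an excerpt from any actual patch; the
struct layout and field names here are assumptions):

```c
/* Per-CPU batch of pending TLB invalidations, filled incrementally
 * across several calls that all execute under the same top-level
 * spinlock.  Sketch only -- field names are illustrative.
 */
DEFINE_PER_CPU(struct ppc64_tlb_batch, ppc64_tlb_batch);

static void batch_add_entry(unsigned long vaddr)
{
	/* __get_cpu_var() is only safe while the task cannot migrate.
	 * On mainline, holding a spinlock implies preemption is off,
	 * so repeated calls all see the same CPU's batch.  On -rt the
	 * spinlock may be a sleeping mutex, so between two calls the
	 * task can be preempted and migrated, and the later entries
	 * land in a *different* CPU's batch.
	 */
	struct ppc64_tlb_batch *batch = &__get_cpu_var(ppc64_tlb_batch);

	batch->vaddr[batch->index++] = vaddr;
}
```

On -rt, such a sequence would need either an explicit
preempt_disable()/preempt_enable() pair around the whole fill-and-flush
sequence, or a guarantee that migration points (context switches) flush
the partially built batch first.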

I need to review that stuff a bit. I think we fixed some of it at one
point, and we made sure that the context switch itself would flush any
pending MMU batches, so it -may- be fine in that specific case.

Cheers,
Ben. 




More information about the Linuxppc-dev mailing list