[PATCH] 8xx: get_mmu_context() for (very) FEW_CONTEXTS and KERNEL_PREEMPT race/starvation issue
Benjamin Herrenschmidt
benh at kernel.crashing.org
Thu Jun 30 09:26:07 EST 2005
> Execution resumes exactly where it was interrupted.
>
> > The idea behind my patch was to get rid of that nr_free_contexts counter
> > that is (I think) redundant with the context_map.
>
> Apparently it's there precisely to avoid the spinlock on !FEW_CONTEXTS machines.
>
> I suppose that what happens is that get_mmu_context() gets preempted after stealing
> a context (so nr_free_contexts = 0), but before setting next_mmu_context to the
> next entry
>
> next_mmu_context = (ctx + 1) & LAST_CONTEXT;
Ugh? Can switch_mm() be preempted at all? Did I miss yet another
"let's open 10 gazillion races for fun" Ingo patch?
> So if the now-running higher-prio task calls switch_mm() (which is likely to happen),
> it loops forever on atomic_dec_if_positive(&nr_free_contexts), while steal_context()
> sees "mm->context == CONTEXT".
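The starvation described above can be modeled in plain userspace C. This is only a sketch of the control flow, not the kernel code: dec_if_positive() is a plain-C stand-in for the kernel's atomic_dec_if_positive(), and try_get_context() is an invented name standing in for the spinning part of get_mmu_context().

```c
/* Userspace model of the FEW_CONTEXTS starvation.  Names mirror the
 * quoted ppc code (nr_free_contexts); "atomic" is modeled as plain C
 * since this single-threaded sketch only illustrates the control flow. */

static int nr_free_contexts;

/* Like the kernel's atomic_dec_if_positive(): return the value minus
 * one, but only store it when the result stays non-negative. */
static int dec_if_positive(int *v)
{
	int new = *v - 1;

	if (new >= 0)
		*v = new;
	return new;
}

/* What the preempting higher-priority task does in get_mmu_context():
 * spin until a context frees up.  Bounded here so the model halts;
 * the real loop has no bound, hence the livelock. */
static int try_get_context(int max_tries)
{
	while (max_tries-- > 0)
		if (dec_if_positive(&nr_free_contexts) >= 0)
			return 0;	/* got a context */
	return -1;			/* starved */
}
```

With nr_free_contexts stuck at 0 (the stealer was preempted before advancing next_mmu_context), try_get_context() can never succeed; once the stealer runs again and a context is released, it does.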
I think the race is only when destroy_context() is preempted, but maybe
I missed something.
> I think that you should try a preempt_disable()/preempt_enable() pair at entry and
> exit of get_mmu_context() - I suppose around destroy_context() alone is not enough
> (you can try that also).
>
> spinlock ends up calling preempt_disable().
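The effect of the suggested pair can be modeled in userspace as well. Everything here is a sketch: preempt_disable()/preempt_enable() are hypothetical stand-ins for the kernel calls (modeled as a counter that a pretend preemption point checks), and get_mmu_context_fixed() is an invented name, not the real function.

```c
#include <assert.h>

/* Userspace model of the suggested fix: the decrement of
 * nr_free_contexts and the advance of next_mmu_context become one
 * non-preemptible unit, so the window that caused the starvation
 * cannot open. */

#define LAST_CONTEXT	15

static int preempt_count;
static int nr_free_contexts = 1;
static unsigned long next_mmu_context;
static int hit_window;		/* did "preemption" land in the window? */

static void preempt_disable(void) { preempt_count++; }
static void preempt_enable(void)  { preempt_count--; }

/* Model preemption point at the exact spot the original race sits:
 * with the pair in place it can never fire there. */
static void maybe_preempt(void)
{
	if (preempt_count == 0)
		hit_window = 1;
}

static void get_mmu_context_fixed(void)
{
	preempt_disable();
	if (nr_free_contexts > 0) {
		unsigned long ctx = next_mmu_context;

		nr_free_contexts--;
		maybe_preempt();	/* the window discussed above */
		next_mmu_context = (ctx + 1) & LAST_CONTEXT;
	}
	preempt_enable();
}
```

Since preempt_count is non-zero across both the decrement and the next_mmu_context update, the model preemption point never fires between them, which is the property the pair is meant to guarantee.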
More information about the Linuxppc-embedded mailing list