[PATCH] 8xx: get_mmu_context() for (very) FEW_CONTEXTS and KERNEL_PREEMPT race/starvation issue

Guillaume Autran gautran at mrv.com
Thu Jun 30 01:32:19 EST 2005



Benjamin Herrenschmidt wrote:

>On Tue, 2005-06-28 at 09:42 -0400, Guillaume Autran wrote:
>  
>
>>Hi,
>>
>>I happened to notice a race condition in the mmu_context code for the 8xx 
>>with very few contexts (16 MMU contexts) and kernel preemption enabled. It 
>>is hard to reproduce as it only shows up when many processes are 
>>created/destroyed and the system is doing a lot of IRQ processing.
>>
>>In short, one process is trying to steal a context that is in the 
>>process of being freed (mm->context == NO_CONTEXT) but not completely 
>>freed (nr_free_contexts == 0).
>>The steal_context() function does not do anything and the process stays 
>>in the loop forever.
>>
>>Anyway, I have a patch that fixes this part. It does not seem to affect 
>>scheduling latency at all.
>>
>>Comments are appreciated.
>>    
>>
>
>Your patch seems to do a hell of a lot more than fixing this race ... What
>about just calling preempt_disable() in destroy_context() instead?
>  
>
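
To recap the window I am hitting, the FEW_CONTEXTS path looks roughly like 
this (a simplified, from-memory sketch, so the exact names and types may 
differ from the actual source):

/* Allocation side: spins until a context becomes free. */
static inline void get_mmu_context(struct mm_struct *mm)
{
        if (mm->context != NO_CONTEXT)
                return;
        while (atomic_dec_if_positive(&nr_free_contexts) < 0)
                steal_context();
        /* ... then pick a free bit in context_map and assign it to mm ... */
}

/* Freeing side: this is where the window is. */
static inline void destroy_context(struct mm_struct *mm)
{
        if (mm->context != NO_CONTEXT) {
                clear_bit(mm->context, context_map);
                mm->context = NO_CONTEXT;
                /* Preemption here leaves mm->context == NO_CONTEXT while
                 * nr_free_contexts is still 0, so steal_context() finds
                 * nothing to steal and the loop above spins forever. */
                atomic_inc(&nr_free_contexts);
        }
}
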
I'm still a bit confused about "kernel preemption". One thing is for sure: 
disabling kernel preemption does indeed fix my problem.
So, my question is: if a task that is in the middle of being scheduled gets 
preempted by an IRQ handler, where will it resume execution? Back at the 
beginning of schedule(), or where it left off?
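
If I read your suggestion right, the minimal change would be something along 
these lines (an untested sketch, just to make sure I understand what you mean):

static inline void destroy_context(struct mm_struct *mm)
{
        preempt_disable();
        if (mm->context != NO_CONTEXT) {
                clear_bit(mm->context, context_map);
                mm->context = NO_CONTEXT;
                atomic_inc(&nr_free_contexts);
        }
        /* With preemption off across the whole update, no other task can
         * observe the half-freed state (NO_CONTEXT but counter still 0). */
        preempt_enable();
}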

The idea behind my patch was to get rid of that nr_free_contexts counter, 
which is (I think) redundant with the context_map.
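
Roughly, the idea is to let the bitmap be the only piece of state, along 
these lines (an illustrative sketch of the idea, not the actual patch, with 
helper and variable names written from memory):

static inline void get_mmu_context(struct mm_struct *mm)
{
        unsigned long ctx;

        if (mm->context != NO_CONTEXT)
                return;

        for (;;) {
                /* Look for a free slot, starting at the rotating hint. */
                ctx = find_next_zero_bit(context_map, LAST_CONTEXT + 1,
                                         next_mmu_context);
                if (ctx > LAST_CONTEXT)
                        ctx = find_first_zero_bit(context_map,
                                                  LAST_CONTEXT + 1);
                if (ctx <= LAST_CONTEXT &&
                    !test_and_set_bit(ctx, context_map))
                        break;          /* grabbed a free context */
                /* Map really is full: free one up and try again. */
                steal_context();
        }

        next_mmu_context = ctx + 1 > LAST_CONTEXT ? 0 : ctx + 1;
        mm->context = ctx;
        context_mm[ctx] = mm;
}

With that, the allocation loop never waits on a counter that can lag behind 
the map, which is exactly the half-freed situation that caused the spin 
described above.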

Regards,
Guillaume.

-- 
=======================================
Guillaume Autran
Senior Software Engineer
MRV Communications, Inc.
Tel: (978) 952-4932 office
E-mail: gautran at mrv.com
======================================= 
