local_irq_save not masking interrupts
Alex Zeffertt
ajz at cambridgebroadband.com
Wed Sep 27 02:42:42 EST 2006
Scott Wood wrote:
> Alex Zeffertt wrote:
>> I agree this indicates an intent to make it atomic, but I don't see how
>> this could cause interrupts to become re-enabled during the request_irq()
>> call. Also, since I am calling request_irq at insmod time, i.e. in
>> process
>> context, both GFP_ flags *should* work.
>
> You're effectively not in process context when you disable IRQs (at
> least, if you want them to stay that way). By specifying GFP_KERNEL,
> you're giving the allocator permission to go to sleep, enable IRQs, etc.
> The IRQ enabling will happen any time cache_grow() is called with
> GFP_WAIT (which is part of GFP_KERNEL), assuming a growable slab.
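[To summarise the hazard in code — a minimal sketch, not the actual driver; the function name is illustrative:]

	#include <linux/interrupt.h>
	#include <linux/slab.h>

	static void example(void)
	{
		unsigned long flags;
		void *p, *q;

		local_irq_save(flags);

		/* BAD: GFP_KERNEL permits the allocator to sleep, and on
		 * the cache_grow() path it may re-enable interrupts,
		 * silently breaking this critical section. */
		p = kmalloc(64, GFP_KERNEL);

		/* OK: GFP_ATOMIC never sleeps and never re-enables
		 * interrupts; it may return NULL under memory pressure,
		 * so the caller must check the result. */
		q = kmalloc(64, GFP_ATOMIC);

		local_irq_restore(flags);
		kfree(p);
		kfree(q);
	}

[The price of GFP_ATOMIC is that the allocation can fail, since the allocator is not allowed to block and reclaim memory — so handle NULL instead of relying on the cache growing.]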
Ah-ha! This explains a lot...
One of the oddities I was seeing with this problem was that if I
did an "ifconfig down" on a completely unrelated net_device (a vlan)
the problem would *not* occur, i.e. I did not get an interrupt
during the critical section. Now I understand why: the "ifconfig down"
command freed some memory so that the kmalloc(,GFP_KERNEL) did not
need to grow the cache.
It follows from what you are saying that kmalloc(,GFP_KERNEL)
MUST NOT appear anywhere in the call chain while interrupts are
disabled, i.e. inside a critical section.
This must catch others out too. Surely kmalloc/cache_grow should
return NULL rather than enable interrupts. In fact, shouldn't it be
a BUG() if kmalloc(,GFP_KERNEL) is called with IRQs disabled?
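[For what it's worth, the kernel does have a debug check along these lines, if I remember the 2.6 sources correctly: with CONFIG_DEBUG_SPINLOCK_SLEEP enabled, the slab allocator does roughly the following before a potentially sleeping allocation — paraphrased, not exact kernel source:]

	/* mm/slab.c, paraphrased: warn (not BUG) if a sleeping
	 * allocation is attempted from atomic context. */
	might_sleep_if(flags & __GFP_WAIT);

[might_sleep() then prints a "sleeping function called from invalid context" warning with a backtrace when called in atomic context, rather than BUG()ing — but it is compiled out unless the debug option is on, which is presumably why this slipped through here.]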
Thanks for your explanation!
Alex