[PATCH][RFC] preempt_count corruption across H_CEDE call with CONFIG_PREEMPT on pseries
Vaidyanathan Srinivasan
svaidy at linux.vnet.ibm.com
Fri Aug 6 15:09:36 EST 2010
* Darren Hart <dvhltc at us.ibm.com> [2010-08-05 19:19:00]:
> On 07/22/2010 10:09 PM, Benjamin Herrenschmidt wrote:
> > On Thu, 2010-07-22 at 21:44 -0700, Darren Hart wrote:
> >
> >> suggestion I updated the instrumentation to display the
> >> local_save_flags and irqs_disabled_flags:
> >
> >> Jul 22 23:36:58 igoort1 kernel: local flags: 0, irqs disabled: 1
> >> Jul 22 23:36:58 igoort1 kernel: before H_CEDE current->stack: c00000010e9e3ce0, pcnt: 1
> >> Jul 22 23:36:58 igoort1 kernel: after H_CEDE current->stack: c00000010e9e3ce0, pcnt: 1
> >>
> >> I'm not sure if I'm reading that right, but I believe interrupts are
> >> intended to be disabled here. If accomplished via the
> >> spin_lock_irqsave() this would behave differently on RT. However, this
> >> path disables the interrupts handled by xics, all but the IPIs anyway.
> >> On RT I disabled the decrementer as well.
> >>
> >> Is it possible for RT to be receiving other interrupts here?
> >
> > Also you may want to call hard_irq_disable() to -really- disable
> > interrupts ... since we do lazy-disable on powerpc
> >
>
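For reference, the lazy-disable scheme Ben mentions works roughly like
this on ppc64 (paraphrased from arch/powerpc/include/asm/hw_irq.h of
that era; details vary by kernel version):

	/* Lazy disable: local_irq_disable() only marks interrupts as
	 * soft-disabled in the PACA.  MSR[EE] stays set, so the CPU can
	 * still take the exception; the low-level handler notices the
	 * soft-disable flag and defers the interrupt instead of running
	 * it. */
	static inline void local_irq_disable(void)
	{
		get_paca()->soft_enabled = 0;
	}

	/* hard_irq_disable() really clears MSR[EE], so no external or
	 * decrementer exception is taken at all. */
	#define hard_irq_disable()			\
		do {					\
			__hard_irq_disable();		\
			get_paca()->soft_enabled = 0;	\
			get_paca()->hard_enabled = 0;	\
		} while (0)
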
> A quick update and request for direction wrt the cede hcall, interrupt handling, and the stack pointer.
>
> I've added patches one at a time, eliminating bugs with preempt_rt
> (tip/rt/head). With my current patchset I see no crashes over a single run of
> random_online.sh (stress testing to hit the hang Will sees is my todo for
> tomorrow).
>
> Current patchset includes:
> patches/0001-wms-fix01.patch # P7 lazy flushing thing
>
> # next four are sent to / queued for mainline
> patches/powerpc-increase-pseries_cpu_die-delay.patch
> patches/powerpc-enable-preemption-before-cpu_die.patch
> patches/powerpc-silence-__cpu_up-under-normal-operation.patch
> patches/powerpc-silence-xics_migrate_irqs_away-during-cpu-offline.patch
>
> # this one needs to be cleaned up and sent to mainline
> patches/powerpc-wait-for-cpu-to-go-inactive.patch
>
> patches/powerpc-disable-decrementer-on-offline.patch
> patches/powerpc-cpu_die-preempt-hack.patch # reset preempt_count to 0 after cede
> patches/powerpc-cede-processor-inst.patch # instrumentation
> patches/powerpc-hard_irq_disable.patch # hard_irq_disable before cede
> patches/powerpc-pad-thread_info.patch
>
> I didn't include all the patches as the relevant bits are included below in
> code form.
>
> With the instrumentation, it's clear the change to preempt_count() is
> targeted: I added padding before and after preempt_count in the thread_info
> struct, and only preempt_count changes while the padding stays intact. It is
> also clear that it isn't just a stray decrement, since I set preempt_count()
> to 4 prior to calling cede and it is still 0xffffffff after the hcall. Also
> note that the stack pointer doesn't change across the cede call, and neither
> does any other part of the thread_info structure.
>
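(For reference, the padding patch presumably makes the powerpc
thread_info look something like this; my reconstruction from the
print_thread_info() output below, not the actual patch:)

	struct thread_info {
		struct task_struct	*task;		/* main task structure */
		struct exec_domain	*exec_domain;	/* execution domain */
		int			cpu;		/* cpu we're on */
		unsigned long		stomp_on_me;	/* pad, set to 0xDEAD */
		int			preempt_count;	/* 0 => preemptable */
		unsigned long		stomp_on_me_too; /* pad, set to 0xBEEF */
		unsigned long		local_flags;	/* private flags */
		/* remaining fields unchanged */
	};
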
> Adding hard_irq_disable() did seem to affect things somewhat. Rather than a
> serialized list of alternating before/after thread_info prints around cede, I
> now see several befores followed by several afters. But preempt_count is
> still modified.
>
> The relevant code looks like this (from pseries_mach_cpu_die())
>
> 	unsigned long sp;
> 
> 	hard_irq_disable();	/* doesn't fix things... but does
> 				   change the behavior a bit */
> 	preempt_count() = 0x4;	/* lvalue macro for
> 				   current_thread_info()->preempt_count */
> 	asm("mr %0,1" : "=r" (sp));	/* stack pointer lives in r1 */
> 	printk("before cede: sp=%lx pcnt=%x\n", sp, preempt_count());
> 	print_thread_info();
> 	extended_cede_processor(cede_latency_hint);
> 	asm("mr %0,1" : "=r" (sp));	/* re-read r1 after the hcall */
> 	printk("after cede: sp=%lx pcnt=%x\n", sp, preempt_count());
> 	print_thread_info();
> 	preempt_count() = 0;
>
>
> With these patches applied, the output across cede looks like:
>
> before cede: sp=c000000000b57150 pcnt=4
> *** current->thread_info ***
> ti->task: c000000000aa1410
> ti->exec_domain: c000000000a59958
> ti->cpu: 0
> ti->stomp_on_me: 57005 /* 0xDEAD - forgot to print in hex */
> ti->preempt_count: 4
> ti->stomp_on_me_too: 48879 /* 0xBEEF - forgot to print in hex */
> ti->local_flags: 0
> *** end thread_info ***
> after cede: sp=c000000000b57150 pcnt=ffffffff
> *** current->thread_info ***
> ti->task: c000000000aa1410
> ti->exec_domain: c000000000a59958
> ti->cpu: 0
Is this print sufficient to prove that we did not jump CPU?
We agree that the decrementer can expire here, in which case we would
go to the handler and come back, just as we would in an idle loop.
That will not always happen, but I can see a window of time during
which it may. Still, that should not unconditionally change the
preempt_count to -1 while leaving the stack pointer untouched.
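If the decrementer handler did run here, the hardirq accounting around
it should be balanced; paraphrasing include/linux/hardirq.h and
kernel/softirq.c:

	/* A hardirq is bracketed by irq_enter()/irq_exit(), which move
	 * preempt_count symmetrically: */
	irq_enter();		/* preempt_count += HARDIRQ_OFFSET (0x10000) */
	timer_interrupt(regs);	/* decrementer handler */
	irq_exit();		/* preempt_count -= HARDIRQ_OFFSET */
	/* Entering with pcnt=4 should therefore leave pcnt=4,
	 * not 0xffffffff. */
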
> ti->stomp_on_me: 57005
> ti->preempt_count: ffffffff
> ti->stomp_on_me_too: 48879
> ti->local_flags: 0
Is there a method to identify whether we executed any of the
interrupt handlers while in this code?
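One way to check: bump a per-CPU counter in the decrementer handler
and print it on both sides of the cede.  An untested sketch (dec_hits
is a made-up name; the hook point is timer_interrupt() in
arch/powerpc/kernel/time.c):

	DEFINE_PER_CPU(unsigned long, dec_hits);

	void timer_interrupt(struct pt_regs *regs)
	{
		__get_cpu_var(dec_hits)++;	/* count decrementer entries */
		/* ... existing handler body ... */
	}

	/* around the cede in pseries_mach_cpu_die(): */
	printk("before cede: dec_hits=%lu\n", __get_cpu_var(dec_hits));
	extended_cede_processor(cede_latency_hint);
	printk("after cede:  dec_hits=%lu\n", __get_cpu_var(dec_hits));
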
> *** end thread_info ***
>
> Are there any additional thoughts on what might be causing preempt_count to change?
> I was thinking that cede might somehow force it to 0 (or perhaps one of the
> explicit preempt_count setters in irq.c does) and then decrement it - but that
> decrement should trigger an error in dec_preempt_count() as I have
> CONFIG_DEBUG_PREEMPT=y.
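(With CONFIG_DEBUG_PREEMPT, a stray decrement past zero should indeed
trip the underflow check in sub_preempt_count(); paraphrasing
kernel/sched.c of that era:)

	void sub_preempt_count(int val)
	{
	#ifdef CONFIG_DEBUG_PREEMPT
		/* Underflow?  A stray decrement below zero warns here
		 * and bails out before touching the count. */
		if (DEBUG_LOCKS_WARN_ON(val > preempt_count()))
			return;
	#endif
		preempt_count() -= val;
	}
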
The decrementer is only a suspect; we have not yet proved that the
dec handler is actually being executed. Could somebody be using our
stack? The pointer is the same... but do we still own it? I cannot
think of how somebody else could be using this CPU's stack... but it
is just a thought that might explain this anomaly.
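To rule out having silently migrated or lost the stack, one could
also print the CPU identity on both sides of the hcall; another
untested sketch:

	/* smp_processor_id() comes from the (possibly stale)
	 * thread_info; hard_smp_processor_id() and __current come
	 * from the PACA. */
	printk("before cede: cpu=%d hwcpu=%d paca current=%p current=%p\n",
	       smp_processor_id(), hard_smp_processor_id(),
	       get_paca()->__current, current);
	extended_cede_processor(cede_latency_hint);
	printk("after cede:  cpu=%d hwcpu=%d paca current=%p current=%p\n",
	       smp_processor_id(), hard_smp_processor_id(),
	       get_paca()->__current, current);
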
--Vaidy