[PATCH] Software Suspend 2 for ppc64 (1/2 - core)
Benjamin Herrenschmidt
benh at kernel.crashing.org
Tue Jul 19 11:24:54 EST 2005
Hi!
>
> + if (try_to_freeze()) {
> + signr = 0;
> + if (!signal_pending(current))
> + goto no_signal;
> + }
> +
> + if (freezing(current)) {
> + try_to_freeze();
> + signr = 0;
> + recalc_sigpending();
> + if (!signal_pending(current))
> + goto no_signal;
> + }
The above looks a bit weird & redundant ... But then, it might be some
subtlety with Nigel's updated refrigerator.
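If it really is just redundancy, the two blocks ought to collapse into
one; a rough sketch, assuming try_to_freeze() still returns whether we
actually froze:

	if (try_to_freeze()) {
		signr = 0;
		recalc_sigpending();
		if (!signal_pending(current))
			goto no_signal;
	}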
> + asm volatile ("std 1,%0" : "=m" (s->sp));
> + asm volatile ("std 2,%0" : "=m" (s->r2));
> + asm volatile ("std 12,%0" : "=m" (s->r[0]));
> + asm volatile ("std 13,%0" : "=m" (s->r[1]));
.../...
> + asm volatile ("mftb 4; std 4,%0": "=m" (s->tb));
> +
> + /* Save SPRGs */
> + asm volatile ("mfsprg 4,0; std 4,%0 " : "=m" (s->sprg[0]));
> + asm volatile ("mfsprg 4,1; std 4,%0 " : "=m" (s->sprg[1]));
> + asm volatile ("mfsprg 4,2; std 4,%0 " : "=m" (s->sprg[2]));
> + asm volatile ("mfsprg 4,3; std 4,%0 " : "=m" (s->sprg[3]));
> +
> + /* Save MSR & SDR1 */
> + asm volatile ("mfmsr 4; std 4,%0" : "=m" (s->msr));
> + asm volatile ("mfsdr1 4; std 4,%0": "=m" (s->sdr1));
> +}
The above should be in an assembly .S file. Generally, I don't like
the way the processor state saving is done separately from the actual
low-level asm suspend call; it makes little sense. You are assuming gcc
won't play with registers behind your back, which is fairly unsafe.
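If any of this stays in C, the scratch register at the very least needs
to be in the clobber list, so gcc knows r4 is overwritten; a sketch:

	/* sketch only: declare r4 clobbered so gcc won't keep a live
	 * value in it across the asm */
	asm volatile ("mfsprg 4,0; std 4,%0" : "=m" (s->sprg[0]) : : "r4");

Even then, a single .S routine that saves everything right next to the
actual suspend entry point is the safe way to do it.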
> +void __smp_suspend_lowlevel(void * data)
.../...
> + __restore_processor_state(suspend2_saved_contexts +
> + _smp_processor_id());
> + local_flush_tlb();
I'm not sure calling local_flush_tlb() here makes much sense. You need
to do more than just flush the current batch here. You actually need to
invalidate the entire TLB, which is not necessarily easy, and you need
to take care of the SLB as well.
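Something of roughly this shape is what's needed; a rough sketch only,
assuming a tlbiel-capable CPU and 256 congruence classes (the real
count and method are machine-dependent and may need firmware help):

	/* sketch, not tested: full SLB + TLB invalidation */
	static void invalidate_slb_and_tlb(void)
	{
		int set;

		/* toss the SLB (slbia preserves entry 0) */
		asm volatile("isync; slbia; isync" : : : "memory");

		/* invalidate every TLB congruence class */
		for (set = 0; set < 256; set++)
			asm volatile("tlbiel %0" : : "r" (set << 12)
				     : "memory");
		asm volatile("ptesync" : : : "memory");
	}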
> + /*
> + * Save context and go back to idling.
> + * Note that we cannot leave the processor
> + * here. It must be able to receive IPIs if
> + * the LZF compression driver (eg) does a
> + * vfree after compressing the kernel etc
> + */
Gack? Can you explain the above comment a bit more? Something is
playing with kernel virtual space while CPUs are locked into IPIs,
and/or that kind of horror? Doesn't seem like a very sane thing to do.
> +static inline void move_stack_to_nonconflicing_area(void)
> +{
> + unsigned long old_stack, src;
> +
> + new_stack_page =
> + suspend2_get_nonconflicting_pages(get_order(THREAD_SIZE));
> +
> + BUG_ON(!new_stack_page);
> +
> + /* getting stack address */
> + asm volatile ("std %%r1, %0" : "=m" (old_stack));
> +
> + src = old_stack & (~(THREAD_SIZE - 1));
> +
> + /* Copy stack */
> + memcpy((void*)new_stack_page, (void*)src, THREAD_SIZE);
> +
> + new_stack_page += (old_stack - src);
> +
> + /* switch to new stack */
> + asm volatile ("ld %%r1, %0" : "=m" (new_stack_page));
> +
> +}
In what context is the above called? You are likely to die a horrible
death when moving the stack around if you take an SLB miss at the wrong
time. The kernel is careful to always keep the current kernel stack
segment locked in the SLB, and various bits assume that remains true at
all times.
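If you really must move the stack, you at least want the destination
segment to be unable to take an SLB miss while r1 points into it. The
sketch below only narrows the window by pre-faulting the segment; the
robust fix is to bolt an SLB entry for it the way the context switch
path does for the kernel stack (preload_stack_segment is illustrative,
not part of the patch):

	/* sketch only: pre-fault the new stack's segment so the
	 * SLB/hash miss happens now, not after r1 has moved; this
	 * does not pin the entry */
	static inline void preload_stack_segment(unsigned long new_sp)
	{
		volatile char *p =
			(volatile char *)(new_sp & ~(THREAD_SIZE - 1));

		(void)*p;
	}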
In general, you seem to completely ignore the hash table and the SLB.
That might work by luck: because your "loader" kernel is the same as
your "saved" kernel, you'll eventually end up with properly bolted-down
hash entries, but it's very dodgy. You are also ignoring the iommu and
RTAS; you should pray the firmware will hand them back to you at the
same place. Among others ...
Ben.