PowerPC BUG: using smp_processor_id() in preemptible code

Peter Zijlstra a.p.zijlstra at chello.nl
Fri Feb 25 08:07:55 EST 2011


On Thu, 2011-02-24 at 12:47 -0800, Hugh Dickins wrote:

Lovely problem :-). benh mentioned it on IRC, but I never got around to
finding the email thread, so thanks for the CC.

> What would be better for 2.6.38 and 2.6.37-stable?  Moving that call to
> vunmap_page_range back under vb->lock, or the partial-Peter-patch below?
> And then what should be done for 2.6.39?

I think you'll also need the arch/powerpc/kernel/process.c changes that
cause context switches to flush the tlb_batch queues.
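
Roughly what I have in mind there, just as a sketch for __switch_to() (the
per-thread flag for remembering that the outgoing task was inside a lazy
MMU section is made up here, call it _TLF_LAZY_MMU; the batch fields and
__flush_tlb_pending() are the ones tlb_hash64.c already uses):

	struct ppc64_tlb_batch *batch = &__get_cpu_var(ppc64_tlb_batch);

	/* Outgoing task: if it was batching hash PTE flushes, drain the
	 * batch now so nothing stays queued on this CPU, and remember
	 * the lazy MMU state in a thread flag (_TLF_LAZY_MMU is just a
	 * placeholder name here). */
	if (batch->active) {
		current_thread_info()->local_flags |= _TLF_LAZY_MMU;
		if (batch->index)
			__flush_tlb_pending(batch);
		batch->active = 0;
	}

	/* ... the actual register/stack switch happens here ... */

	/* When the task is scheduled back in: if it was preempted inside
	 * a lazy MMU section, re-arm batching so hpte_need_flush() keeps
	 * queueing entries for it. */
	if (current_thread_info()->local_flags & _TLF_LAZY_MMU) {
		current_thread_info()->local_flags &= ~_TLF_LAZY_MMU;
		batch = &__get_cpu_var(ppc64_tlb_batch);
		batch->active = 1;
	}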

> --- 2.6.38-rc5/arch/powerpc/mm/tlb_hash64.c     2010-02-24 10:52:17.000000000 -0800
> +++ linux/arch/powerpc/mm/tlb_hash64.c  2011-02-15 23:27:21.000000000 -0800
> @@ -38,13 +38,11 @@ DEFINE_PER_CPU(struct ppc64_tlb_batch, p
>   * needs to be flushed. This function will either perform the flush
>   * immediately or will batch it up if the current CPU has an active
>   * batch on it.
> - *
> - * Must be called from within some kind of spinlock/non-preempt region...
>   */
>  void hpte_need_flush(struct mm_struct *mm, unsigned long addr,
>                      pte_t *ptep, unsigned long pte, int huge)
>  {
> -       struct ppc64_tlb_batch *batch = &__get_cpu_var(ppc64_tlb_batch);
> +       struct ppc64_tlb_batch *batch = &get_cpu_var(ppc64_tlb_batch);
>         unsigned long vsid, vaddr;
>         unsigned int psize;
>         int ssize;
> @@ -99,6 +97,7 @@ void hpte_need_flush(struct mm_struct *m
>          */
>         if (!batch->active) {
>                 flush_hash_page(vaddr, rpte, psize, ssize, 0);
> +               put_cpu_var(ppc64_tlb_batch);
>                 return;
>         }
>  
> @@ -127,6 +126,7 @@ void hpte_need_flush(struct mm_struct *m
>         batch->index = ++i;
>         if (i >= PPC64_TLB_BATCH_NR)
>                 __flush_tlb_pending(batch);
> +       put_cpu_var(ppc64_tlb_batch);
>  }
>  
>  /* 


