mmotm threatens ppc preemption again
Hugh Dickins
hughd at google.com
Sun Mar 20 15:11:10 EST 2011
Hi Ben,
As I warned a few weeks ago, Jeremy has vmalloc apply_to_pte_range
patches in mmotm, which again assault PowerPC's expectations, and
cause lots of noise with CONFIG_PREEMPT=y CONFIG_DEBUG_PREEMPT=y.
This time in vmalloc as well as vfree; and Peter's fix to the last
lot, which went into 2.6.38, doesn't protect against these ones.
Here's what I now see when I swapon and swapoff:
BUG: using smp_processor_id() in preemptible [00000000] code: swapon/3230
caller is .apply_to_pte_range+0x118/0x1f0
Call Trace:
[c000000029c3b870] [c00000000000f38c] .show_stack+0x6c/0x16c (unreliable)
[c000000029c3b920] [c00000000022e024] .debug_smp_processor_id+0xe4/0x11c
[c000000029c3b9b0] [c0000000000de78c] .apply_to_pte_range+0x118/0x1f0
[c000000029c3ba70] [c0000000000de988] .apply_to_pud_range+0x124/0x188
[c000000029c3bb40] [c0000000000dea90] .apply_to_page_range_batch+0xa4/0xe8
[c000000029c3bc00] [c0000000000eb2c0] .map_vm_area+0x50/0x94
[c000000029c3bca0] [c0000000000ec368] .__vmalloc_area_node+0x144/0x190
[c000000029c3bd50] [c0000000000f1738] .SyS_swapon+0x270/0x704
[c000000029c3be30] [c0000000000075a8] syscall_exit+0x0/0x40
BUG: using smp_processor_id() in preemptible [00000000] code: swapon/3230
caller is .apply_to_pte_range+0x168/0x1f0
Call Trace:
[c000000029c3b870] [c00000000000f38c] .show_stack+0x6c/0x16c (unreliable)
[c000000029c3b920] [c00000000022e024] .debug_smp_processor_id+0xe4/0x11c
[c000000029c3b9b0] [c0000000000de7dc] .apply_to_pte_range+0x168/0x1f0
[c000000029c3ba70] [c0000000000de988] .apply_to_pud_range+0x124/0x188
[c000000029c3bb40] [c0000000000dea90] .apply_to_page_range_batch+0xa4/0xe8
[c000000029c3bc00] [c0000000000eb2c0] .map_vm_area+0x50/0x94
[c000000029c3bca0] [c0000000000ec368] .__vmalloc_area_node+0x144/0x190
[c000000029c3bd50] [c0000000000f1738] .SyS_swapon+0x270/0x704
[c000000029c3be30] [c0000000000075a8] syscall_exit+0x0/0x40
Adding 1572860k swap on /dev/sdb4. Priority:-1 extents:1 across:1572860k SS
BUG: using smp_processor_id() in preemptible [00000000] code: swapoff/3231
caller is .apply_to_pte_range+0x118/0x1f0
Call Trace:
[c0000000260d38b0] [c00000000000f38c] .show_stack+0x6c/0x16c (unreliable)
[c0000000260d3960] [c00000000022e024] .debug_smp_processor_id+0xe4/0x11c
[c0000000260d39f0] [c0000000000de78c] .apply_to_pte_range+0x118/0x1f0
[c0000000260d3ab0] [c0000000000de988] .apply_to_pud_range+0x124/0x188
[c0000000260d3b80] [c0000000000dea90] .apply_to_page_range_batch+0xa4/0xe8
[c0000000260d3c40] [c0000000000eb0d8] .remove_vm_area+0x90/0xd4
[c0000000260d3cd0] [c0000000000ec0a8] .__vunmap+0x50/0x104
[c0000000260d3d60] [c0000000000f32fc] .SyS_swapoff+0x4d8/0x5e8
[c0000000260d3e30] [c0000000000075a8] syscall_exit+0x0/0x40
BUG: using smp_processor_id() in preemptible [00000000] code: swapoff/3231
caller is .apply_to_pte_range+0x168/0x1f0
Call Trace:
[c0000000260d38b0] [c00000000000f38c] .show_stack+0x6c/0x16c (unreliable)
[c0000000260d3960] [c00000000022e024] .debug_smp_processor_id+0xe4/0x11c
[c0000000260d39f0] [c0000000000de7dc] .apply_to_pte_range+0x168/0x1f0
[c0000000260d3ab0] [c0000000000de988] .apply_to_pud_range+0x124/0x188
[c0000000260d3b80] [c0000000000dea90] .apply_to_page_range_batch+0xa4/0xe8
[c0000000260d3c40] [c0000000000eb0d8] .remove_vm_area+0x90/0xd4
[c0000000260d3cd0] [c0000000000ec0a8] .__vunmap+0x50/0x104
[c0000000260d3d60] [c0000000000f32fc] .SyS_swapoff+0x4d8/0x5e8
[c0000000260d3e30] [c0000000000075a8] syscall_exit+0x0/0x40
BUG: using smp_processor_id() in preemptible [00000000] code: swapoff/3231
caller is .__flush_tlb_pending+0x20/0xb4
Call Trace:
[c0000000260d3830] [c00000000000f38c] .show_stack+0x6c/0x16c (unreliable)
[c0000000260d38e0] [c00000000022e024] .debug_smp_processor_id+0xe4/0x11c
[c0000000260d3970] [c00000000002efbc] .__flush_tlb_pending+0x20/0xb4
[c0000000260d39f0] [c0000000000de7fc] .apply_to_pte_range+0x188/0x1f0
[c0000000260d3ab0] [c0000000000de988] .apply_to_pud_range+0x124/0x188
[c0000000260d3b80] [c0000000000dea90] .apply_to_page_range_batch+0xa4/0xe8
[c0000000260d3c40] [c0000000000eb0d8] .remove_vm_area+0x90/0xd4
[c0000000260d3cd0] [c0000000000ec0a8] .__vunmap+0x50/0x104
[c0000000260d3d60] [c0000000000f32fc] .SyS_swapoff+0x4d8/0x5e8
[c0000000260d3e30] [c0000000000075a8] syscall_exit+0x0/0x40
BUG: using smp_processor_id() in preemptible [00000000] code: swapoff/3231
caller is .native_flush_hash_range+0x3c/0x384
Call Trace:
[c0000000260d36f0] [c00000000000f38c] .show_stack+0x6c/0x16c (unreliable)
[c0000000260d37a0] [c00000000022e024] .debug_smp_processor_id+0xe4/0x11c
[c0000000260d3830] [c00000000002e2c8] .native_flush_hash_range+0x3c/0x384
[c0000000260d38e0] [c00000000002c370] .flush_hash_range+0x4c/0xc8
[c0000000260d3970] [c00000000002f02c] .__flush_tlb_pending+0x90/0xb4
[c0000000260d39f0] [c0000000000de7fc] .apply_to_pte_range+0x188/0x1f0
[c0000000260d3ab0] [c0000000000de988] .apply_to_pud_range+0x124/0x188
[c0000000260d3b80] [c0000000000dea90] .apply_to_page_range_batch+0xa4/0xe8
[c0000000260d3c40] [c0000000000eb0d8] .remove_vm_area+0x90/0xd4
[c0000000260d3cd0] [c0000000000ec0a8] .__vunmap+0x50/0x104
[c0000000260d3d60] [c0000000000f32fc] .SyS_swapoff+0x4d8/0x5e8
[c0000000260d3e30] [c0000000000075a8] syscall_exit+0x0/0x40
I work around them with the patch below, but would prefer not to
disable preemption on all architectures there. That said, I'm not a
huge fan of apply_to_pte_range myself (I feel it glosses over
differences, such as how often one needs to let preemption in), so I
wouldn't mind if we left vmalloc as it is, without it.
Hugh
--- mmotm/mm/memory.c
+++ fixed/mm/memory.c
@@ -2021,9 +2021,11 @@ static int apply_to_pte_range(struct mm_
 	int err;
 	spinlock_t *uninitialized_var(ptl);
 
-	pte = (mm == &init_mm) ?
-		pte_alloc_kernel(pmd, addr) :
-		pte_alloc_map_lock(mm, pmd, addr, &ptl);
+	if (mm == &init_mm) {
+		pte = pte_alloc_kernel(pmd, addr);
+		preempt_disable();
+	} else
+		pte = pte_alloc_map_lock(mm, pmd, addr, &ptl);
 	if (!pte)
 		return -ENOMEM;
 
@@ -2033,7 +2035,9 @@ static int apply_to_pte_range(struct mm_
 	err = fn(pte, (end - addr) / PAGE_SIZE, addr, data);
 
 	arch_leave_lazy_mmu_mode();
-	if (mm != &init_mm)
+	if (mm == &init_mm)
+		preempt_enable();
+	else
 		pte_unmap_unlock(pte, ptl);
 	return err;
 }