[PATCH] powerpc/mm: Convert slb presence warning check to WARN_ON_ONCE
Aneesh Kumar K.V
aneesh.kumar at linux.ibm.com
Fri Feb 15 19:24:51 AEDT 2019
We are hitting false positives in some cases. Until we root cause
this, convert the WARN_ON to WARN_ON_ONCE.
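For reference, a simplified sketch of the once-only semantics (an
approximation, not the exact definition from include/asm-generic/bug.h):
the warning fires at most one time, so a false positive cannot flood
the log while still leaving a trace of the first occurrence.

#define WARN_ON_ONCE(condition) ({				\
	static bool __warned;					\
	int __ret = !!(condition);				\
								\
	/* warn only the first time the condition is seen */	\
	if (__ret && !__warned) {				\
		__warned = true;				\
		WARN_ON(1);					\
	}							\
	__ret;							\
})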
A sample stack dump looks like:
NIP [c00000000007ac40] assert_slb_presence+0x90/0xa0
LR [c00000000007b270] slb_flush_and_restore_bolted+0x90/0xc0
Call Trace:
arch_send_call_function_ipi_mask+0xcc/0x110 (unreliable)
0xc000000f9f38f560
slice_flush_segments+0x58/0xb0
on_each_cpu+0x74/0xf0
slice_get_unmapped_area+0x6d4/0x9e0
hugetlb_get_unmapped_area+0x124/0x150
get_unmapped_area+0xf0/0x1a0
do_mmap+0x1a4/0x6b0
vm_mmap_pgoff+0xbc/0x150
ksys_mmap_pgoff+0x260/0x2f0
sys_mmap+0x104/0x130
system_call+0x5c/0x70
We are checking whether we were able to successfully insert the
kernel stack SLB entries. If that is not the case, we will crash
next anyway, so we are not losing much debug data.
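As a rough sketch of the kind of call site involved (names taken from
the stack trace above; the exact code in arch/powerpc/mm/slb.c differs
in detail, and get_paca()->kstack is only an approximation of the
address being checked):

void slb_flush_and_restore_bolted(void)
{
	/* ... flush the SLB and re-insert the bolted kernel stack entry ... */

	/* verify the kernel stack entry really is present now */
	assert_slb_presence(true, get_paca()->kstack);

	/*
	 * If the entry were genuinely missing, the very next kernel
	 * stack access would fault, so a once-only warning loses
	 * little debug information.
	 */
}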
Signed-off-by: Aneesh Kumar K.V <aneesh.kumar at linux.ibm.com>
---
arch/powerpc/mm/slb.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/arch/powerpc/mm/slb.c b/arch/powerpc/mm/slb.c
index bc3914d54e26..dca0cbd71b60 100644
--- a/arch/powerpc/mm/slb.c
+++ b/arch/powerpc/mm/slb.c
@@ -71,7 +71,7 @@ static void assert_slb_presence(bool present, unsigned long ea)
asm volatile(__PPC_SLBFEE_DOT(%0, %1) : "=r"(tmp) : "r"(ea) : "cr0");
- WARN_ON(present == (tmp == 0));
+ WARN_ON_ONCE(present == (tmp == 0));
#endif
}
--
2.20.1