[bug report from fstests] WARNING: CPU: 1 PID: 0 at arch/powerpc/mm/mmu_context.c:106 switch_mm_irqs_off+0x220/0x270
Ritesh Harjani (IBM)
ritesh.list at gmail.com
Sat Nov 16 23:43:36 AEDT 2024
Zorro Lang <zlang at redhat.com> writes:
> Hi,
>
> Recently fstests generic/650 hit a kernel warning on ppc64le [1] with
> xfs (default mkfs option). My latest test on mainline linux v6.12-rc6+
> with HEAD=da4373fbcf006deda90e5e6a87c499e0ff747572 .
I tried this on a KVM pseries machine type, but I was unable to hit it.
Let me try it on an actual LPAR and confirm whether it reproduces there.
If it does, we can see about getting a git bisect log of it.
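Something along these lines is what I have in mind; the device paths,
mount points, and the "good" kernel tag below are placeholders on my
side (not taken from your report), so please treat it as a rough sketch
only:

# Reproduce: IIRC generic/650 runs fsstress while offlining/onlining
# CPUs, which matches pseries_cpu_offline_self in the trace above.
cd /path/to/xfstests
cat > local.config <<'EOF'
export FSTYP=xfs
export TEST_DEV=/dev/sdXa        # placeholder test device
export TEST_DIR=/mnt/xfstests/test
export SCRATCH_DEV=/dev/sdXb     # placeholder scratch device
export SCRATCH_MNT=/mnt/xfstests/scratch
EOF
for i in $(seq 1 10); do ./check generic/650 || break; done

# If it reproduces reliably, bisect between an assumed-good tag and the
# failing v6.12-rc6 kernel:
cd /path/to/linux
git bisect start
git bisect bad v6.12-rc6
git bisect good v6.11    # assumption: v6.11 still needs to be confirmed good
# Build and boot each step, rerun ./check generic/650, and mark it with
# "git bisect good" or "git bisect bad"; "git bisect log" then gives the
# log to share.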
Thanks for reporting it.
-ritesh
>
> Thanks,
> Zorro
>
>
> [1]
> FSTYP -- xfs (debug)
> PLATFORM -- Linux/ppc64le rdma-cert-03-lp10 6.12.0-rc6+ #1 SMP Sat Nov 9 13:18:41 EST 2024
> MKFS_OPTIONS -- -f -m crc=1,finobt=1,rmapbt=1,reflink=1,inobtcount=1,bigtime=1 /dev/sda3
> MOUNT_OPTIONS -- -o context=system_u:object_r:root_t:s0 /dev/sda3 /mnt/xfstests/scratch
>
> generic/650 _check_dmesg: something found in dmesg (see /var/lib/xfstests/results//generic/650.dmesg)
>
>
> HINT: You _MAY_ be missing kernel fix:
> ecd49f7a36fb xfs: fix per-cpu CIL structure aggregation racing with dying cpus
>
> Ran: generic/650
> Failures: generic/650
> Failed 1 of 1 tests
>
>
> # cat /var/lib/xfstests/results//generic/650.dmesg
> [16630.359077] run fstests generic/650 at 2024-11-09 18:03:21
> [16631.058519] ------------[ cut here ]------------
> [16631.058531] WARNING: CPU: 1 PID: 0 at arch/powerpc/mm/mmu_context.c:106 switch_mm_irqs_off+0x220/0x270
> [16631.058542] Modules linked in: overlay dm_zero dm_log_writes dm_thin_pool dm_persistent_data dm_bio_prison dm_snapshot dm_bufio ext4 mbcache jbd2 dm_flakey bonding tls rfkill sunrpc ibmveth pseries_rng vmx_crypto sg dm_mod fuse loop nfnetlink xfs sd_mod nvme nvme_core ibmvscsi scsi_transport_srp nvme_auth [last unloaded: scsi_debug]
> [16631.058617] CPU: 1 UID: 0 PID: 0 Comm: swapper/1 Kdump: loaded Tainted: G W 6.12.0-rc6+ #1
> [16631.058623] Tainted: [W]=WARN
> [16631.058625] Hardware name: IBM,9009-22G POWER9 (architected) 0x4e0203 0xf000005 of:IBM,FW950.11 (VL950_075) hv:phyp pSeries
> [16631.058629] NIP: c0000000000b02c0 LR: c0000000000b0220 CTR: c000000000152a20
> [16631.058633] REGS: c000000008bd7ad0 TRAP: 0700 Tainted: G W (6.12.0-rc6+)
> [16631.058637] MSR: 8000000002823033 <SF,VEC,VSX,FP,ME,IR,DR,RI,LE> CR: 2800440a XER: 20040000
> [16631.058660] CFAR: c0000000000b0230 IRQMASK: 3
> GPR00: c0000000000b027c c000000008bd7d70 c000000002616800 c00000016131a900
> GPR04: 0000000000000000 000000000000000a 0000000000000000 0000000000000000
> GPR08: 0000000000000000 0000000000000000 0000000000000001 0000000000000000
> GPR12: c000000000152a20 c00000000ffcf300 0000000000000000 000000001ef31820
> GPR16: 0000000000000000 0000000000000000 0000000000000000 0000000000000000
> GPR20: 0000000000000000 0000000000000000 0000000000000000 0000000000000001
> GPR24: 0000000000000001 c0000000089acb00 c000000004fafcb0 0000000000000000
> GPR28: c000000004f24880 c00000016131a900 0000000000000001 c000000004f25180
> [16631.058740] NIP [c0000000000b02c0] switch_mm_irqs_off+0x220/0x270
> [16631.058746] LR [c0000000000b0220] switch_mm_irqs_off+0x180/0x270
> [16631.058751] Call Trace:
> [16631.058754] [c000000008bd7d70] [c0000000000b027c] switch_mm_irqs_off+0x1dc/0x270 (unreliable)
> [16631.058763] [c000000008bd7de0] [c0000000002572f8] idle_task_exit+0x118/0x1b0
> [16631.058771] [c000000008bd7e40] [c000000000152a70] pseries_cpu_offline_self+0x50/0x150
> [16631.058780] [c000000008bd7eb0] [c000000000078678] arch_cpu_idle_dead+0x68/0x7c
> [16631.058787] [c000000008bd7ee0] [c00000000029f504] do_idle+0x1c4/0x290
> [16631.058793] [c000000008bd7f40] [c00000000029fa90] cpu_startup_entry+0x60/0x70
> [16631.058800] [c000000008bd7f70] [c00000000007825c] start_secondary+0x44c/0x480
> [16631.058807] [c000000008bd7fe0] [c00000000000e258] start_secondary_prolog+0x10/0x14
> [16631.058815] Code: 38800004 387c00f8 487f7a51 60000000 813c00f8 7129000a 4182ff40 2c3d0000 4182ff38 7c0004ac 4bffffb8 4bffff30 <0fe00000> 4bffff70 60000000 60000000
> [16631.058848] irq event stamp: 15169028
> [16631.058850] hardirqs last enabled at (15169027): [<c0000000003eb510>] tick_nohz_idle_exit+0x1c0/0x3b0
> [16631.058858] hardirqs last disabled at (15169028): [<c000000001ab7eac>] __schedule+0x5ac/0xc30
> [16631.058865] softirqs last enabled at (15169016): [<c0000000001d24c8>] handle_softirqs+0x578/0x620
> [16631.058871] softirqs last disabled at (15168957): [<c00000000001b1dc>] do_softirq_own_stack+0x6c/0x90
> [16631.058878] ---[ end trace 0000000000000000 ]---
> [16631.814920] NOHZ tick-stop error: local softirq work is pending, handler #40!!!
> [16635.633774] XFS (sda5): Unmounting Filesystem 3e7de15a-300a-41f2-8745-5be808cc3f7b
> [16635.752583] XFS (sda5): Mounting V5 Filesystem 3e7de15a-300a-41f2-8745-5be808cc3f7b
> [16635.764402] XFS (sda5): Ending clean mount
> [16637.842353] clockevent: decrementer mult[83126f] shift[24] cpu[6]
> [16640.956776] XFS (sda5): Unmounting Filesystem 3e7de15a-300a-41f2-8745-5be808cc3f7b
> [16641.078711] XFS (sda5): Mounting V5 Filesystem 3e7de15a-300a-41f2-8745-5be808cc3f7b
> [16641.090273] XFS (sda5): Ending clean mount
> [16646.729241] XFS (sda5): Unmounting Filesystem 3e7de15a-300a-41f2-8745-5be808cc3f7b
> [16646.850952] XFS (sda5): Mounting V5 Filesystem 3e7de15a-300a-41f2-8745-5be808cc3f7b
> [16646.863752] XFS (sda5): Ending clean mount
> [16653.467205] XFS (sda5): Unmounting Filesystem 3e7de15a-300a-41f2-8745-5be808cc3f7b
> [16653.594764] XFS (sda5): Mounting V5 Filesystem 3e7de15a-300a-41f2-8745-5be808cc3f7b
> [16653.606795] XFS (sda5): Ending clean mount
> [16659.346160] XFS (sda5): Unmounting Filesystem 3e7de15a-300a-41f2-8745-5be808cc3f7b
> [16659.465528] XFS (sda5): Mounting V5 Filesystem 3e7de15a-300a-41f2-8745-5be808cc3f7b
> [16659.478422] XFS (sda5): Ending clean mount
> [16665.517371] XFS (sda5): Unmounting Filesystem 3e7de15a-300a-41f2-8745-5be808cc3f7b
> [16665.656116] XFS (sda5): Mounting V5 Filesystem 3e7de15a-300a-41f2-8745-5be808cc3f7b
> [16665.668005] XFS (sda5): Ending clean mount
> [16672.655768] XFS (sda5): Unmounting Filesystem 3e7de15a-300a-41f2-8745-5be808cc3f7b
> [16672.781813] XFS (sda5): Mounting V5 Filesystem 3e7de15a-300a-41f2-8745-5be808cc3f7b
> [16672.794638] XFS (sda5): Ending clean mount
> [16679.551991] XFS (sda5): Unmounting Filesystem 3e7de15a-300a-41f2-8745-5be808cc3f7b
> [16679.668639] XFS (sda5): Mounting V5 Filesystem 3e7de15a-300a-41f2-8745-5be808cc3f7b
> [16679.683949] XFS (sda5): Ending clean mount
> [16685.384912] XFS (sda5): Unmounting Filesystem 3e7de15a-300a-41f2-8745-5be808cc3f7b
> [16685.510632] XFS (sda5): Mounting V5 Filesystem 3e7de15a-300a-41f2-8745-5be808cc3f7b
> [16685.522819] XFS (sda5): Ending clean mount
> [16691.213987] XFS (sda5): Unmounting Filesystem 3e7de15a-300a-41f2-8745-5be808cc3f7b
> [16691.347125] XFS (sda5): Mounting V5 Filesystem 3e7de15a-300a-41f2-8745-5be808cc3f7b
> [16691.358483] XFS (sda5): Ending clean mount
> [16712.697103] XFS (sda5): EXPERIMENTAL online scrub feature in use. Use at your own risk!
> [16715.846166] XFS (sda5): Unmounting Filesystem 3e7de15a-300a-41f2-8745-5be808cc3f7b
> [16716.047814] XFS (sda5): Mounting V5 Filesystem 3e7de15a-300a-41f2-8745-5be808cc3f7b
> [16716.058628] XFS (sda5): Ending clean mount