[PATCH] powerpc/mce: Fix a bug where mce loops on memory UE.
Balbir Singh
bsingharora at gmail.com
Mon Apr 23 21:14:12 AEST 2018
On Mon, Apr 23, 2018 at 8:33 PM, Mahesh Jagannath Salgaonkar
<mahesh at linux.vnet.ibm.com> wrote:
> On 04/23/2018 12:21 PM, Balbir Singh wrote:
>> On Mon, Apr 23, 2018 at 2:59 PM, Mahesh J Salgaonkar
>> <mahesh at linux.vnet.ibm.com> wrote:
>>> From: Mahesh Salgaonkar <mahesh at linux.vnet.ibm.com>
>>>
>>> The current code extracts the physical address for UE errors and then
>>> hooks it up into the memory failure infrastructure. On successful extraction
>>> of the physical address it wrongly sets "handled = 1", which means this UE
>>> error has been recovered. Since the MCE handler gets handled = 1 as the
>>> return value, it assumes the error has been recovered and goes back to the
>>> same NIP. This causes the MCE interrupt again and again in a loop, leading
>>> to a hard lockup.
>>>
>>> Also, initialize phys_addr to ULONG_MAX so that we don't end up queuing an
>>> undesired page to hwpoison.
>>>
>>> Without this patch we see:
>>> [ 1476.541984] Severe Machine check interrupt [Recovered]
>>> [ 1476.541985] NIP: [000000001002588c] PID: 7109 Comm: find
>>> [ 1476.541986] Initiator: CPU
>>> [ 1476.541987] Error type: UE [Load/Store]
>>> [ 1476.541988] Effective address: 00007fffd2755940
>>> [ 1476.541989] Physical address: 000020181a080000
>>> [...]
>>> [ 1476.542003] Severe Machine check interrupt [Recovered]
>>> [ 1476.542004] NIP: [000000001002588c] PID: 7109 Comm: find
>>> [ 1476.542005] Initiator: CPU
>>> [ 1476.542006] Error type: UE [Load/Store]
>>> [ 1476.542006] Effective address: 00007fffd2755940
>>> [ 1476.542007] Physical address: 000020181a080000
>>> [ 1476.542010] Severe Machine check interrupt [Recovered]
>>> [ 1476.542012] NIP: [000000001002588c] PID: 7109 Comm: find
>>> [ 1476.542013] Initiator: CPU
>>> [ 1476.542014] Error type: UE [Load/Store]
>>> [ 1476.542015] Effective address: 00007fffd2755940
>>> [ 1476.542016] Physical address: 000020181a080000
>>> [ 1476.542448] Memory failure: 0x20181a08: recovery action for dirty LRU page: Recovered
>>> [ 1476.542452] Memory failure: 0x20181a08: already hardware poisoned
>>> [ 1476.542453] Memory failure: 0x20181a08: already hardware poisoned
>>> [ 1476.542454] Memory failure: 0x20181a08: already hardware poisoned
>>> [ 1476.542455] Memory failure: 0x20181a08: already hardware poisoned
>>> [ 1476.542456] Memory failure: 0x20181a08: already hardware poisoned
>>> [ 1476.542457] Memory failure: 0x20181a08: already hardware poisoned
>>> [...]
>>> [ 1490.972174] Watchdog CPU:38 Hard LOCKUP
>>>
>>> After this patch we see:
>>>
>>> [ 325.384336] Severe Machine check interrupt [Not recovered]
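To restate the fix for anyone skimming the thread: translating the effective
address to a pfn only tells us which page went bad, it does not make the
faulting access restartable, so that branch has to stop reporting the UE as
handled. Roughly like the following (an illustrative sketch only, not the
exact mce_power.c hunk; I'm assuming the usual addr_to_pfn() helper that
returns ULONG_MAX on failure):

	if (mce_err.error_type == MCE_ERROR_TYPE_UE) {
		/* phys_addr is pre-initialised to ULONG_MAX by the caller */
		unsigned long pfn = addr_to_pfn(regs, addr);

		if (pfn != ULONG_MAX)
			*phys_addr = pfn << PAGE_SHIFT;	/* remember the page for hwpoison */
		/*
		 * No "handled = 1" here: knowing the bad page lets us queue it
		 * to memory_failure() later, but the load/store that hit the
		 * UE still did not complete.
		 */
	}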
>>
>> How did you test for this?
>
> By injecting a cache SUE using the L2 FIR register (0x1001080c).
>
>> If the error was recovered, shouldn't the process have gotten a SIGBUS,
>> and shouldn't we have prevented further access as part of the handling
>> (memory_failure())? Do we just need MF_MUST_KILL in the flags?
>
> We hook it up to memory_failure() through a work queue, and by the time the
> work queue kicks in, the application has already restarted and hit the same
> NIP again and again. Every MCE hooks the same address up to the memory
> failure work queue again and throws another "Recovered" MCE message for the
> same address. Once memory_failure() hwpoisons the page, the application gets
> SIGBUS and then we are fine.
>
That seems quite broken, and reporting it as "Not recovered" is very confusing.
So effectively we can never recover from an MCE UE. I think we need a notion of
delayed recovery then? Where we do recover, but mark the event as recovered
with a delay? We might also want to revisit our recovery process and see
whether recovery requires turning the MMU on, but that is for later, I suppose.
> But in the case of a UE in kernel space, if the early machine check handler
> machine_check_early() reports the event as recovered, then
> machine_check_handle_early() queues up the MCE event and continues from the
> NIP assuming it is safe, causing an MCE loop. So, for a UE in the kernel we
> end up in a hard lockup.
>
Yeah, for the kernel we definitely need to cause a panic for now. I've got
other patches for things we need to do for pmem that would allow potential
recovery.
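Something along these lines is what I mean for the kernel-mode case (a
hand-written, untested sketch; the names approximate asm/mce.h, it is not a
real patch):

	/*
	 * A UE taken while in kernel mode cannot be restarted safely today,
	 * so treat it as fatal rather than reporting it recovered and
	 * re-executing the same instruction.
	 */
	if (!user_mode(regs) && evt->error_type == MCE_ERROR_TYPE_UE)
		panic("Unrecoverable machine check: memory UE in kernel mode");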
Balbir Singh