[PATCH] powerpc/mce: Fix a bug where mce loops on memory UE.

Mahesh Jagannath Salgaonkar mahesh at linux.vnet.ibm.com
Mon Apr 23 23:01:36 AEST 2018


On 04/23/2018 04:44 PM, Balbir Singh wrote:
> On Mon, Apr 23, 2018 at 8:33 PM, Mahesh Jagannath Salgaonkar
> <mahesh at linux.vnet.ibm.com> wrote:
>> On 04/23/2018 12:21 PM, Balbir Singh wrote:
>>> On Mon, Apr 23, 2018 at 2:59 PM, Mahesh J Salgaonkar
>>> <mahesh at linux.vnet.ibm.com> wrote:
>>>> From: Mahesh Salgaonkar <mahesh at linux.vnet.ibm.com>
>>>>
>>>> The current code extracts the physical address for UE errors and then
>>>> hooks it up into the memory failure infrastructure. On successful
>>>> extraction of the physical address it wrongly sets "handled = 1", which
>>>> means this UE error has been recovered. Since the MCE handler gets
>>>> handled = 1 as the return value, it assumes the error has been recovered
>>>> and goes back to the same NIP. This causes the MCE interrupt again and
>>>> again in a loop, leading to a hard lockup.
>>>>
>>>> Also, initialize phys_addr to ULONG_MAX so that we don't end up queuing
>>>> an undesired page to hwpoison.
>>>>
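A minimal sketch of the shape of the bug and the fix described above
(mce_handle_ue() and get_ue_phys_addr() are illustrative stand-ins, not
the actual mce_power.c symbols):

#include <linux/kernel.h>        /* ULONG_MAX */
#include <linux/ptrace.h>        /* struct pt_regs */

/* Stand-in for the real effective-to-physical translation. */
static int get_ue_phys_addr(struct pt_regs *regs, unsigned long *phys_addr)
{
        return -1;        /* stub; the real code walks the page tables */
}

static long mce_handle_ue(struct pt_regs *regs, unsigned long *phys_addr)
{
        long handled = 0;

        /*
         * Start from a sentinel so that a failed lookup cannot later
         * queue a bogus page to hwpoison.
         */
        *phys_addr = ULONG_MAX;

        /*
         * The bug: on a successful lookup the old code also set
         * "handled = 1", telling the caller the UE was recovered, so
         * execution resumed at the same NIP and the MCE fired again.
         */
        get_ue_phys_addr(regs, phys_addr);

        return handled;        /* stays 0: a UE is not recovered here */
}
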
>>>> Without this patch we see:
>>>> [ 1476.541984] Severe Machine check interrupt [Recovered]
>>>> [ 1476.541985]   NIP: [000000001002588c] PID: 7109 Comm: find
>>>> [ 1476.541986]   Initiator: CPU
>>>> [ 1476.541987]   Error type: UE [Load/Store]
>>>> [ 1476.541988]     Effective address: 00007fffd2755940
>>>> [ 1476.541989]     Physical address:  000020181a080000
>>>> [...]
>>>> [ 1476.542003] Severe Machine check interrupt [Recovered]
>>>> [ 1476.542004]   NIP: [000000001002588c] PID: 7109 Comm: find
>>>> [ 1476.542005]   Initiator: CPU
>>>> [ 1476.542006]   Error type: UE [Load/Store]
>>>> [ 1476.542006]     Effective address: 00007fffd2755940
>>>> [ 1476.542007]     Physical address:  000020181a080000
>>>> [ 1476.542010] Severe Machine check interrupt [Recovered]
>>>> [ 1476.542012]   NIP: [000000001002588c] PID: 7109 Comm: find
>>>> [ 1476.542013]   Initiator: CPU
>>>> [ 1476.542014]   Error type: UE [Load/Store]
>>>> [ 1476.542015]     Effective address: 00007fffd2755940
>>>> [ 1476.542016]     Physical address:  000020181a080000
>>>> [ 1476.542448] Memory failure: 0x20181a08: recovery action for dirty LRU page: Recovered
>>>> [ 1476.542452] Memory failure: 0x20181a08: already hardware poisoned
>>>> [ 1476.542453] Memory failure: 0x20181a08: already hardware poisoned
>>>> [ 1476.542454] Memory failure: 0x20181a08: already hardware poisoned
>>>> [ 1476.542455] Memory failure: 0x20181a08: already hardware poisoned
>>>> [ 1476.542456] Memory failure: 0x20181a08: already hardware poisoned
>>>> [ 1476.542457] Memory failure: 0x20181a08: already hardware poisoned
>>>> [...]
>>>> [ 1490.972174] Watchdog CPU:38 Hard LOCKUP
>>>>
>>>> After this patch we see:
>>>>
>>>> [  325.384336] Severe Machine check interrupt [Not recovered]
>>>
>>> How did you test for this?
>>
>> By injecting a cache SUE using the L2 FIR register (0x1001080c).
>>
>>> If the error was recovered, shouldn't the process have gotten a SIGBUS,
>>> and shouldn't we have prevented further access as part of the handling
>>> (memory_failure())? Do we just need MF_MUST_KILL in the flags?
>>
>> We hook it up to memory_failure() through a work queue, and by the time
>> the work queue kicks in, the application keeps restarting from and
>> hitting the same NIP again and again. Every MCE hooks the same address
>> into the memory failure work queue and throws multiple "Recovered" MCE
>> messages for the same address. Once memory_failure() hwpoisons the page,
>> the application gets a SIGBUS and then we are fine.
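
Roughly, the deferral being described looks like this (a simplified
sketch: the real queuing lives in arch/powerpc/kernel/mce.c, but the
array handling and locking here are illustrative):

#include <linux/workqueue.h>
#include <linux/mm.h>        /* memory_failure() */

#define MAX_MC_EVT        100        /* illustrative queue depth */

static unsigned long mce_ue_paddr[MAX_MC_EVT];
static int mce_ue_count;

/*
 * Runs in process context, possibly much later: the faulting task may
 * have re-executed the same load many times before this kicks in.
 */
static void machine_process_ue_event(struct work_struct *work)
{
        int i;

        for (i = 0; i < mce_ue_count; i++)
                memory_failure(mce_ue_paddr[i] >> PAGE_SHIFT, 0);
        mce_ue_count = 0;
}

static DECLARE_WORK(mce_ue_event_work, machine_process_ue_event);

/*
 * Called from the MCE path: just record the physical address and defer
 * the hwpoisoning to process context.
 */
static void machine_check_ue_event(unsigned long paddr)
{
        if (mce_ue_count < MAX_MC_EVT)
                mce_ue_paddr[mce_ue_count++] = paddr;
        schedule_work(&mce_ue_event_work);
}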
>>
> 
> That seems quite broken, and "not recovered" is very confusing. So
> effectively we can never recover from an MCE UE.

By not setting handled = 1, the recovery code will fall through
machine_check_exception()->opal_machine_check(), and then either a SIGBUS
is sent to the process so it can recover, or we head to the panic path
for a kernel UE. We have already hooked up the physical address to
memory_failure(), which will later hwpoison the page whenever the work
queue kicks in. This patch makes sure this happens.
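
In other words, with handled left at 0 the fall-through ends up making
roughly this decision (a sketch of the flow, not the literal
opal_machine_check()):

#include <linux/kernel.h>              /* panic() */
#include <linux/sched/signal.h>        /* force_sig(), current */
#include <linux/ptrace.h>              /* user_mode() */

static void handle_unrecovered_mce(struct pt_regs *regs)
{
        if (user_mode(regs)) {
                /*
                 * Userspace UE: kill the task.  The page itself is
                 * hwpoisoned later by the queued memory_failure().
                 */
                force_sig(SIGBUS, current);     /* two-arg form of this era */
                return;
        }

        /* Kernel UE: not safe to continue from the NIP. */
        panic("Unrecoverable Machine check");
}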

> I think we need a notion of delayed recovery then? Where we do recover,
> but mark it as recovered with a delay?

Yeah, maybe we can set the disposition of the userspace MCE event to
recovery in progress/delayed and then print the MCE event again from the
work queue, based on the return value from memory_failure(). What do you
think?
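
Something like a third disposition value, say (only the first two exist
in asm/mce.h today; MCE_DISPOSITION_DELAYED is hypothetical):

enum MCE_Disposition {
        MCE_DISPOSITION_RECOVERED = 0,
        MCE_DISPOSITION_NOT_RECOVERED = 1,
        MCE_DISPOSITION_DELAYED = 2,        /* hypothetical: queued to
                                             * memory_failure(); final
                                             * status printed from the
                                             * work queue */
};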

> We might want to revisit our recovery process and see if the recovery requires
> to turn the MMU on, but that is for later, I suppose.
> 
>> But in the case of a UE in kernel space, if the early machine check
>> handler machine_check_early() returns as recovered, then
>> machine_check_handle_early() queues up the MCE event and continues from
>> the NIP assuming it is safe, causing an MCE loop. So, for a UE in the
>> kernel we end up in a hard lockup.
>>
> 
> Yeah, for the kernel we definitely need to cause a panic for now. I've
> got other patches for things we need to do for pmem that would allow
> potential recovery.
> 
> Balbir Singh
> 


