[RFC PATCH 2/3] powerpc/lib: Initialize a temporary mm for code patching

Michael Ellerman mpe at ellerman.id.au
Fri Apr 17 10:57:10 AEST 2020


Christophe Leroy <christophe.leroy at c-s.fr> writes:
> Le 31/03/2020 à 05:19, Christopher M Riedl a écrit :
>>> On March 24, 2020 11:10 AM Christophe Leroy <christophe.leroy at c-s.fr> wrote:
>>> Le 23/03/2020 à 05:52, Christopher M. Riedl a écrit :
>>>> When code patching a STRICT_KERNEL_RWX kernel the page containing the
>>>> address to be patched is temporarily mapped with permissive memory
>>>> protections. Currently, a per-cpu vmalloc patch area is used for this
>>>> purpose. While the patch area is per-cpu, the temporary page mapping is
>>>> inserted into the kernel page tables for the duration of the patching.
>>>> The mapping is exposed to CPUs other than the patching CPU - this is
>>>> undesirable from a hardening perspective.
>>>>
>>>> Use the `poking_init` init hook to prepare a temporary mm and patching
>>>> address. Initialize the temporary mm by copying the init mm. Choose a
>>>> randomized patching address inside the temporary mm userspace address
>>>> portion. The next patch uses the temporary mm and patching address for
>>>> code patching.
>>>>
>>>> Based on x86 implementation:
>>>>
>>>> commit 4fc19708b165
>>>> ("x86/alternatives: Initialize temporary mm for patching")
>>>>
>>>> Signed-off-by: Christopher M. Riedl <cmr at informatik.wtf>
>>>> ---
>>>>    arch/powerpc/lib/code-patching.c | 26 ++++++++++++++++++++++++++
>>>>    1 file changed, 26 insertions(+)
>>>>
>>>> diff --git a/arch/powerpc/lib/code-patching.c b/arch/powerpc/lib/code-patching.c
>>>> index 3345f039a876..18b88ecfc5a8 100644
>>>> --- a/arch/powerpc/lib/code-patching.c
>>>> +++ b/arch/powerpc/lib/code-patching.c
>>>> @@ -11,6 +11,8 @@
>>>>    #include <linux/cpuhotplug.h>
>>>>    #include <linux/slab.h>
>>>>    #include <linux/uaccess.h>
>>>> +#include <linux/sched/task.h>
>>>> +#include <linux/random.h>
>>>>    
>>>>    #include <asm/pgtable.h>
>>>>    #include <asm/tlbflush.h>
>>>> @@ -39,6 +41,30 @@ int raw_patch_instruction(unsigned int *addr, unsigned int instr)
>>>>    }
>>>>    
>>>>    #ifdef CONFIG_STRICT_KERNEL_RWX
>>>> +
>>>> +__ro_after_init struct mm_struct *patching_mm;
>>>> +__ro_after_init unsigned long patching_addr;
>>>
>>> Can we make those static?
>>>
>> 
>> Yes, makes sense to me.
>> 
>>>> +
>>>> +void __init poking_init(void)
>>>> +{
>>>> +	spinlock_t *ptl; /* for protecting pte table */
>>>> +	pte_t *ptep;
>>>> +
>>>> +	patching_mm = copy_init_mm();
>>>> +	BUG_ON(!patching_mm);
>>>
>>> Does it need to be a BUG_ON()? Can't we fail gracefully with just a
>>> WARN_ON()?
>>>
>> 
>> I'm not sure what failing gracefully means here. The main reason this could
>> fail is if there is not enough memory to allocate the patching_mm. The
>> previous implementation had this justification for BUG_ON():
>
> But the system can continue running just fine after this failure.
> Only the things that make use of code patching will fail (ftrace, kgdb, ...)

That's probably true of ftrace, but we can't fail patching for jump
labels (static keys).

See:

void arch_jump_label_transform(struct jump_entry *entry,
			       enum jump_label_type type)
{
	u32 *addr = (u32 *)(unsigned long)entry->code;

	if (type == JUMP_LABEL_JMP)
		patch_branch(addr, entry->target, 0);
	else
		patch_instruction(addr, PPC_INST_NOP);
}

cheers
