[RFC PATCH 0/3] Use per-CPU temporary mappings for patching

Andrew Donnellan ajd at linux.ibm.com
Wed Mar 25 13:51:27 AEDT 2020


On 23/3/20 3:52 pm, Christopher M. Riedl wrote:
> When compiled with CONFIG_STRICT_KERNEL_RWX, the kernel must create
> temporary mappings when patching itself. These mappings temporarily
> override the strict RWX text protections to permit a write. Currently,
> powerpc allocates a per-CPU VM area for patching. Patching occurs as
> follows:
> 
> 	1. Map page of text to be patched to per-CPU VM area w/
> 	   PAGE_KERNEL protection
> 	2. Patch text
> 	3. Remove the temporary mapping
> 
> While the VM area is per-CPU, the mapping is actually inserted into the
> kernel page tables. Presumably, this could allow another CPU to access
> the normally write-protected text - either maliciously or accidentally -

> via this same mapping if the address of the VM area is known. Ideally,
> the mapping should be kept local to the CPU doing the patching (or any
> other sensitive operations requiring temporarily overriding memory
> protections) [0].
> 
> x86 introduced "temporary mm" structs which allow the creation of
> mappings local to a particular CPU [1]. This series intends to bring the
> notion of a temporary mm to powerpc and harden powerpc by using such a
> mapping for patching a kernel with strict RWX permissions.
> 
> The first patch introduces the temporary mm struct and API for powerpc
> along with a new function to retrieve a current hw breakpoint.
> 
> The second patch uses the `poking_init` init hook added by the x86
> patches to initialize a temporary mm and patching address. The patching
> address is randomized between 0 and DEFAULT_MAP_WINDOW-PAGE_SIZE. The
> upper limit is necessary due to how the hash MMU operates - by default
> the space above DEFAULT_MAP_WINDOW is not available. For now, both hash
> and radix randomize inside this range. The number of possible random
> addresses is dependent on PAGE_SIZE and limited by DEFAULT_MAP_WINDOW.
> 
> Bits of entropy with 64K page size on BOOK3S_64:
> 
> 	bits-o-entropy = log2(DEFAULT_MAP_WINDOW_USER64 / PAGE_SIZE)
> 
> 	PAGE_SIZE=64K, DEFAULT_MAP_WINDOW_USER64=128TB
> 	bits-o-entropy = log2(128TB / 64K)
> 	bits-o-entropy = 31
> 
> Currently, randomization occurs only once during initialization at boot.
> 
> The third patch replaces the VM area with the temporary mm in the
> patching code. The page for patching has to be mapped PAGE_SHARED with
> the hash MMU, since hash prevents the kernel from accessing userspace
> pages with the PAGE_PRIVILEGED bit set. There is ongoing work on my side
> to explore whether this is actually necessary in the hash codepath.
> 
> Testing so far is limited to booting on QEMU (power8 and power9 targets)
> and a POWER8 VM, along with setting some simple xmon breakpoints (which
> exercises code patching). A POC lkdtm test is in progress to actually
> exploit the existing vulnerability (i.e. the mapping during patching is
> exposed in the kernel page tables and accessible by other CPUs) - this
> will accompany a future v1 of this series.
> 
> [0]: https://github.com/linuxppc/issues/issues/224
> [1]: https://lore.kernel.org/kernel-hardening/20190426232303.28381-1-nadav.amit@gmail.com/
> 
> Christopher M. Riedl (3):
>    powerpc/mm: Introduce temporary mm
>    powerpc/lib: Initialize a temporary mm for code patching
>    powerpc/lib: Use a temporary mm for code patching
> 
>   arch/powerpc/include/asm/debug.h       |   1 +
>   arch/powerpc/include/asm/mmu_context.h |  56 +++++++++-
>   arch/powerpc/kernel/process.c          |   5 +
>   arch/powerpc/lib/code-patching.c       | 140 ++++++++++++++-----------
>   4 files changed, 137 insertions(+), 65 deletions(-)

This series causes a build failure with ppc64e_defconfig

https://openpower.xyz/job/snowpatch/job/snowpatch-linux-sparse/16478//artifact/linux/report.txt

-- 
Andrew Donnellan              OzLabs, ADL Canberra
ajd at linux.ibm.com             IBM Australia Limited


