[PATCH 1/3] powerpc/mm: Don't alias user region to other regions below PAGE_OFFSET

Aneesh Kumar K.V aneesh.kumar at linux.vnet.ibm.com
Fri Sep 2 22:22:16 AEST 2016


Hi Paul,

Really nice catch. Was this found by code analysis, or do we have any
reported issue around this?

Paul Mackerras <paulus at ozlabs.org> writes:

> In commit c60ac5693c47 ("powerpc: Update kernel VSID range", 2013-03-13)
> we lost a check on the region number (the top four bits of the effective
> address) for addresses below PAGE_OFFSET.  That commit replaced a check
> that the top 18 bits were all zero with a check that bits 46 - 59 were
> zero (performed for all addresses, not just user addresses).

To make review easier for others, here is the relevant diff from that commit.

 _GLOBAL(slb_allocate_realmode)
-       /* r3 = faulting address */
+       /*
+        * check for bad kernel/user address
+        * (ea & ~REGION_MASK) >= PGTABLE_RANGE
+        */
+       rldicr. r9,r3,4,(63 - 46 - 4)
+       bne-    8f
 
        srdi    r9,r3,60                /* get region */

......

And because we were doing the above check, I removed the user-region
check below:

 BEGIN_FTR_SECTION
        b       slb_finish_load
 END_MMU_FTR_SECTION_IFCLR(MMU_FTR_1T_SEGMENT)
        b       slb_finish_load_1T
 
-0:     /* user address: proto-VSID = context << 15 | ESID. First check
-        * if the address is within the boundaries of the user region
-        */
-       srdi.   r9,r10,USER_ESID_BITS
-       bne-    8f                      /* invalid ea bits set */
-
-
+0:
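
The new rldicr. check rotates the region nibble down to the bottom and
then masks everything but the top 14 bits, so it only tests that bits
46-59 of the EA are zero and never looks at bits 60-63.  In C terms the
difference between the old and the new check is roughly the below (just
the bit arithmetic as I read it, not the kernel code):

/*
 * Old user-path check: the top 18 bits of the EA must all be zero.
 * New common check: bits 46-59 must be zero, the region nibble is
 * simply ignored.
 */
#define REGION_MASK     0xf000000000000000UL
#define PGTABLE_RANGE   (1UL << 46)

static int old_ea_bad(unsigned long ea)
{
        return (ea >> 46) != 0;
}

static int new_ea_bad(unsigned long ea)
{
        return (ea & ~REGION_MASK) >= PGTABLE_RANGE;
}

/*
 * For ea = 0x1000000000000000UL old_ea_bad() returns 1 but
 * new_ea_bad() returns 0, so such an EA now reaches the user path
 * with region 1 instead of being rejected.
 */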


>
> This means that userspace can access an address like 0x1000_0xxx_xxxx_xxxx
> and we will insert a valid SLB entry for it.  The VSID used will be the
> same as if the top 4 bits were 0, but the page size will be some random
> value obtained by indexing beyond the end of the mm_ctx_high_slices_psize
> array in the paca.  If that page size is the same as would be used for
> region 0, then userspace just has an alias of the region 0 space.  If the
> page size is different, then no HPTE will be found for the access, and
> the process will get a SIGSEGV (since hash_page_mm() will refuse to create
> a HPTE for the bogus address).
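
For anyone who wants to poke at this from userspace, something like the
below should show the effect (only a rough sketch, not a proper test
case; on an unfixed kernel the aliased read either returns the value or
SIGSEGVs depending on the junk psize, and with this patch it always
SIGSEGVs):

#include <sys/mman.h>
#include <stdio.h>

int main(void)
{
        char *p = mmap(NULL, 4096, PROT_READ | PROT_WRITE,
                       MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
        if (p == MAP_FAILED)
                return 1;
        p[0] = 42;

        /* same EA with the region nibble set to 1 */
        volatile char *alias =
                (volatile char *)((unsigned long)p | (1UL << 60));
        printf("alias read: %d\n", alias[0]);
        return 0;
}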
>
> The access beyond the end of the mm_ctx_high_slices_psize array can be at
> most 5.5MB past the array, and so will be in RAM somewhere.  Since the access
> is a load performed in real mode, it won't fault or crash the kernel.
> At most this bug could perhaps leak a little bit of information about
> blocks of 32 bytes of memory located at offsets of i * 512kB past the
> paca->mm_ctx_high_slices_psize array, for 1 <= i <= 11.
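
To spell out that arithmetic as I read it (1TB high slices, two 4-bit
psize entries packed per byte, so a 32-byte high_slices_psize array for
the 64TB user address space), the out-of-bounds byte offset is:

/* Not the kernel code, just the index arithmetic. */
static unsigned long psize_byte_offset(unsigned long ea)
{
        unsigned long slice = ea >> 40;         /* high slice index */

        return slice >> 1;                      /* two entries per byte */
}

Since bits 46-59 of the EA are already forced to zero by the common
check, a bogus region i (1 to 0xb) gives byte offsets i * 512kB + (0..31),
i.e. blocks of 32 bytes at most 11 * 512kB = 5.5MB past the array, which
matches the bound above.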


Reviewed-by: Aneesh Kumar K.V <aneesh.kumar at linux.vnet.ibm.com>

>
> Cc: stable at vger.kernel.org # v3.10+
> Cc: Aneesh Kumar K.V <aneesh.kumar at linux.vnet.ibm.com>
> Signed-off-by: Paul Mackerras <paulus at ozlabs.org>
> ---
>  arch/powerpc/mm/slb_low.S | 7 ++++++-
>  1 file changed, 6 insertions(+), 1 deletion(-)
>
> diff --git a/arch/powerpc/mm/slb_low.S b/arch/powerpc/mm/slb_low.S
> index dfdb90c..9f19834 100644
> --- a/arch/powerpc/mm/slb_low.S
> +++ b/arch/powerpc/mm/slb_low.S
> @@ -113,7 +113,12 @@ BEGIN_FTR_SECTION
>  END_MMU_FTR_SECTION_IFCLR(MMU_FTR_1T_SEGMENT)
>  	b	slb_finish_load_1T
>
> -0:
> +0:	/*
> +	 * For userspace addresses, make sure this is region 0.
> +	 */
> +	cmpdi	r9, 0
> +	bne	8f
> +
>  	/* when using slices, we extract the psize off the slice bitmaps
>  	 * and then we need to get the sllp encoding off the mmu_psize_defs
>  	 * array.
> -- 
> 2.7.4


