[PATCH] powerpc/mm: Fix virt_addr_valid() etc. on 64-bit hash

Balbir Singh bsingharora at gmail.com
Fri May 19 04:00:06 AEST 2017


On Thu, May 18, 2017 at 8:37 PM, Michael Ellerman <mpe at ellerman.id.au> wrote:
> virt_addr_valid() is supposed to tell you if it's OK to call virt_to_page() on
> an address. What this means in practice is that it should only return true for
> addresses in the linear mapping which are backed by a valid PFN.
>
> We are failing to properly check that the address is in the linear mapping,
> because virt_to_pfn() will return a valid looking PFN for more or less any
> address. That bug is actually caused by __pa(), used in virt_to_pfn().
>
> eg: __pa(0xc000000000010000) = 0x10000  # Good
>     __pa(0xd000000000010000) = 0x10000  # Bad!
>     __pa(0x0000000000010000) = 0x10000  # Bad!
>

I fixed something similar in skiboot and KVM; I should have audited this space
as well.

> This started happening after commit bdbc29c19b26 ("powerpc: Work around gcc
> miscompilation of __pa() on 64-bit") (Aug 2013), where we changed the definition
> of __pa() to work around a GCC bug. Prior to that we subtracted PAGE_OFFSET from
> the value passed to __pa(), meaning __pa() of a 0xd or 0x0 address would give
> you something bogus back.
>
> Until we can verify if that GCC bug is no longer an issue, or come up with
> another solution, this commit does the minimal fix to make virt_addr_valid()
> work, by explicitly checking that the address is in the linear mapping region.
>
> Fixes: bdbc29c19b26 ("powerpc: Work around gcc miscompilation of __pa() on 64-bit")
> Signed-off-by: Michael Ellerman <mpe at ellerman.id.au>
> ---
>  arch/powerpc/include/asm/page.h | 12 ++++++++++++
>  1 file changed, 12 insertions(+)
>
> diff --git a/arch/powerpc/include/asm/page.h b/arch/powerpc/include/asm/page.h
> index 2a32483c7b6c..8da5d4c1cab2 100644
> --- a/arch/powerpc/include/asm/page.h
> +++ b/arch/powerpc/include/asm/page.h
> @@ -132,7 +132,19 @@ extern long long virt_phys_offset;
>  #define virt_to_pfn(kaddr)     (__pa(kaddr) >> PAGE_SHIFT)
>  #define virt_to_page(kaddr)    pfn_to_page(virt_to_pfn(kaddr))
>  #define pfn_to_kaddr(pfn)      __va((pfn) << PAGE_SHIFT)
> +
> +#ifdef CONFIG_PPC_BOOK3S_64
> +/*
> + * On hash the vmalloc and other regions alias to the kernel region when passed
> + * through __pa(), which virt_to_pfn() uses. That means virt_addr_valid() can
> + * return true for some vmalloc addresses, which is incorrect. So explicitly
> + * check that the address is in the kernel region.
> + */
> +#define virt_addr_valid(kaddr) (REGION_ID(kaddr) == KERNEL_REGION_ID && \
> +                               pfn_valid(virt_to_pfn(kaddr)))
> +#else
>  #define virt_addr_valid(kaddr) pfn_valid(virt_to_pfn(kaddr))
> +#endif
>

Looks good to me.

Reviewed-by: Balbir Singh <bsingharora at gmail.com>
