[PATCH V5 26/31] powerpc/mm: Remove the dependency on pte bit position in asm code

Anshuman Khandual khandual at linux.vnet.ibm.com
Fri Nov 27 00:32:59 AEDT 2015


On 11/23/2015 03:52 PM, Aneesh Kumar K.V wrote:
> We should not expect pte bit position in asm code. Simply
> by moving part of that to C

The second sentence is missing a full stop. The commit message also
does not explain why we would want to process the page access flags
and other PTE flags in C code. Was it needed at this stage of the
series for the PTE changes, or is it just an improvement that could
have been segregated out earlier?
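
To illustrate what I think the "pte bit position" dependency refers to (my
reading of it, so worth spelling out in the commit message): the deleted asm
encodes the PTE bits as raw immediates and rotate/mask operands, for example

	ori	r4,r4,1			/* hard-codes _PAGE_PRESENT as the 0x1 bit */
	rlwimi	r4,r0,32-13,30,30	/* hard-codes the _PAGE_USER bit position */

so any change to the PTE format means reworking these instructions by hand,
whereas the C version added below only uses the _PAGE_* macros and follows
whatever layout they define.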

> 
> Acked-by: Scott Wood <scottwood at freescale.com>
> Signed-off-by: Aneesh Kumar K.V <aneesh.kumar at linux.vnet.ibm.com>
> ---
>  arch/powerpc/kernel/exceptions-64s.S | 16 +++-------------
>  arch/powerpc/mm/hash_utils_64.c      | 29 +++++++++++++++++++++++++++++
>  2 files changed, 32 insertions(+), 13 deletions(-)
> 
> diff --git a/arch/powerpc/kernel/exceptions-64s.S b/arch/powerpc/kernel/exceptions-64s.S
> index 0a0399c2af11..34920f11dbdd 100644
> --- a/arch/powerpc/kernel/exceptions-64s.S
> +++ b/arch/powerpc/kernel/exceptions-64s.S
> @@ -1556,28 +1556,18 @@ do_hash_page:
>  	lwz	r0,TI_PREEMPT(r11)	/* If we're in an "NMI" */
>  	andis.	r0,r0,NMI_MASK@h	/* (i.e. an irq when soft-disabled) */
>  	bne	77f			/* then don't call hash_page now */
> -	/*
> -	 * We need to set the _PAGE_USER bit if MSR_PR is set or if we are
> -	 * accessing a userspace segment (even from the kernel). We assume
> -	 * kernel addresses always have the high bit set.
> -	 */
> -	rlwinm	r4,r4,32-25+9,31-9,31-9	/* DSISR_STORE -> _PAGE_RW */
> -	rotldi	r0,r3,15		/* Move high bit into MSR_PR posn */
> -	orc	r0,r12,r0		/* MSR_PR | ~high_bit */
> -	rlwimi	r4,r0,32-13,30,30	/* becomes _PAGE_USER access bit */
> -	ori	r4,r4,1			/* add _PAGE_PRESENT */
> -	rlwimi	r4,r5,22+2,31-2,31-2	/* Set _PAGE_EXEC if trap is 0x400 */
>  
>  	/*
>  	 * r3 contains the faulting address
> -	 * r4 contains the required access permissions
> +	 * r4 msr
>  	 * r5 contains the trap number
>  	 * r6 contains dsisr
>  	 *
>  	 * at return r3 = 0 for success, 1 for page fault, negative for error
>  	 */
> +        mr 	r4,r12
>  	ld      r6,_DSISR(r1)
> -	bl	hash_page		/* build HPTE if possible */
> +	bl	__hash_page		/* build HPTE if possible */

>  	cmpdi	r3,0			/* see if hash_page succeeded */

The comment above still refers to hash_page; it needs to be updated to __hash_page.

>  
>  	/* Success */
> diff --git a/arch/powerpc/mm/hash_utils_64.c b/arch/powerpc/mm/hash_utils_64.c
> index db35e7d83088..04d549527eaa 100644
> --- a/arch/powerpc/mm/hash_utils_64.c
> +++ b/arch/powerpc/mm/hash_utils_64.c
> @@ -1162,6 +1162,35 @@ int hash_page(unsigned long ea, unsigned long access, unsigned long trap,
>  }
>  EXPORT_SYMBOL_GPL(hash_page);

So we are still keeping hash_page as an exported symbol here because
there are consumers for it?
 
>  
> +int __hash_page(unsigned long ea, unsigned long msr, unsigned long trap,
> +		unsigned long dsisr)
> +{
> +	unsigned long access = _PAGE_PRESENT;
> +	unsigned long flags = 0;
> +	struct mm_struct *mm = current->mm;
> +
> +	if (REGION_ID(ea) == VMALLOC_REGION_ID)
> +		mm = &init_mm;
> +
> +	if (dsisr & DSISR_NOHPTE)
> +		flags |= HPTE_NOHPTE_UPDATE;
> +
> +	if (dsisr & DSISR_ISSTORE)
> +		access |= _PAGE_RW;
> +	/*
> +	 * We need to set the _PAGE_USER bit if MSR_PR is set or if we are
> +	 * accessing a userspace segment (even from the kernel). We assume
> +	 * kernel addresses always have the high bit set.
> +	 */
> +	if ((msr & MSR_PR) || (REGION_ID(ea) == USER_REGION_ID))
> +		access |= _PAGE_USER;
> +
> +	if (trap == 0x400)
> +		access |= _PAGE_EXEC;
> +
> +	return hash_page_mm(mm, ea, access, trap, flags);
> +}

There is some code similarity between hash_page and __hash_page
above. Can't we consolidate some of it?
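
For instance, something along these lines (just a rough sketch of the idea,
not tested, and assuming hash_page() here still takes ea/access/trap/dsisr):
__hash_page() could compute only the access mask and reuse hash_page() for
the mm and HPTE_NOHPTE_UPDATE selection instead of duplicating it:

int __hash_page(unsigned long ea, unsigned long msr, unsigned long trap,
		unsigned long dsisr)
{
	unsigned long access = _PAGE_PRESENT;

	if (dsisr & DSISR_ISSTORE)
		access |= _PAGE_RW;
	/*
	 * Set _PAGE_USER if MSR_PR is set or if we are accessing a
	 * userspace segment (even from the kernel); kernel addresses
	 * are assumed to always have the high bit set.
	 */
	if ((msr & MSR_PR) || (REGION_ID(ea) == USER_REGION_ID))
		access |= _PAGE_USER;
	if (trap == 0x400)
		access |= _PAGE_EXEC;

	/* hash_page() already derives mm and flags from ea and dsisr */
	return hash_page(ea, access, trap, dsisr);
}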


