[PATCH 1/7] powerpc: introduce pte_set_hash_slot() helper

Balbir Singh bsingharora at gmail.com
Wed Sep 13 17:55:53 AEST 2017


On Sat, Sep 9, 2017 at 8:44 AM, Ram Pai <linuxram at us.ibm.com> wrote:
> Introduce pte_set_hash_slot(). It sets the (H_PAGE_F_SECOND|H_PAGE_F_GIX)
> bits at the appropriate location in a 4K PTE. For a 64K PTE, it sets the
> bits in the second half of the PTE. Though the 4K implementation only
> needs the slot parameter, it takes the additional parameters to keep the
> prototype consistent across both formats.
>
> This function will come in handy as we work towards re-arranging the
> bits in later patches.
>
> Reviewed-by: Aneesh Kumar K.V <aneesh.kumar at linux.vnet.ibm.com>
> Signed-off-by: Ram Pai <linuxram at us.ibm.com>
> ---
>  arch/powerpc/include/asm/book3s/64/hash-4k.h  |   15 +++++++++++++++
>  arch/powerpc/include/asm/book3s/64/hash-64k.h |   25 +++++++++++++++++++++++++
>  2 files changed, 40 insertions(+), 0 deletions(-)
>
> diff --git a/arch/powerpc/include/asm/book3s/64/hash-4k.h b/arch/powerpc/include/asm/book3s/64/hash-4k.h
> index 0c4e470..8909039 100644
> --- a/arch/powerpc/include/asm/book3s/64/hash-4k.h
> +++ b/arch/powerpc/include/asm/book3s/64/hash-4k.h
> @@ -48,6 +48,21 @@ static inline int hash__hugepd_ok(hugepd_t hpd)
>  }
>  #endif
>
> +/*
> + * The 4k pte format differs from the 64k pte format. Saving the
> + * hash slot is just a matter of returning the pte bits that need
> + * to be modified. On a 64k pte, things are a little more involved
> + * and hence need additional parameters to accomplish the same.
> + * However, we want to abstract this away from the caller by
> + * keeping the prototype consistent across the two formats.
> + */
> +static inline unsigned long pte_set_hash_slot(pte_t *ptep, real_pte_t rpte,
> +                       unsigned int subpg_index, unsigned long slot)
> +{
> +       return (slot << H_PAGE_F_GIX_SHIFT) &
> +               (H_PAGE_F_SECOND | H_PAGE_F_GIX);
> +}
> +
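For anyone following the series, here is a stand-alone sketch of how a caller
is expected to use the value returned by the 4K variant: clear the old slot
bits, then OR the returned bits back into the pte before committing it. The
DEMO_* constants and the slot value are invented purely for illustration; the
real definitions live in hash-4k.h.

    /*
     * Illustration only, not code from this patch. It mimics the 4K
     * contract: the helper returns the slot bits, the caller folds them
     * into its software copy of the pte before committing it. The bit
     * positions below are made up for the demo.
     */
    #include <stdio.h>

    #define DEMO_F_SECOND	(1UL << 59)	/* stand-in for H_PAGE_F_SECOND */
    #define DEMO_F_GIX		(7UL << 56)	/* stand-in for H_PAGE_F_GIX */
    #define DEMO_F_GIX_SHIFT	56		/* stand-in for H_PAGE_F_GIX_SHIFT */

    static unsigned long demo_set_hash_slot(unsigned long slot)
    {
    	/* same shape as the 4K pte_set_hash_slot() quoted above */
    	return (slot << DEMO_F_GIX_SHIFT) & (DEMO_F_SECOND | DEMO_F_GIX);
    }

    int main(void)
    {
    	unsigned long pte = 0;
    	unsigned long slot = 0xb;	/* secondary hash, group index 3 */

    	pte &= ~(DEMO_F_SECOND | DEMO_F_GIX);	/* drop any stale slot bits */
    	pte |= demo_set_hash_slot(slot);

    	printf("pte with slot folded in: 0x%016lx\n", pte);
    	return 0;
    }
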
>  #ifdef CONFIG_TRANSPARENT_HUGEPAGE
>
>  static inline char *get_hpte_slot_array(pmd_t *pmdp)
> diff --git a/arch/powerpc/include/asm/book3s/64/hash-64k.h b/arch/powerpc/include/asm/book3s/64/hash-64k.h
> index 9732837..6652669 100644
> --- a/arch/powerpc/include/asm/book3s/64/hash-64k.h
> +++ b/arch/powerpc/include/asm/book3s/64/hash-64k.h
> @@ -74,6 +74,31 @@ static inline unsigned long __rpte_to_hidx(real_pte_t rpte, unsigned long index)
>         return (pte_val(rpte.pte) >> H_PAGE_F_GIX_SHIFT) & 0xf;
>  }
>
> +/*
> + * Commit the hash slot and return the pte bits that need to be modified.
> + * The caller is expected to modify the pte bits accordingly and
> + * commit the pte to memory.
> + */
> +static inline unsigned long pte_set_hash_slot(pte_t *ptep, real_pte_t rpte,
> +               unsigned int subpg_index, unsigned long slot)
> +{
> +       unsigned long *hidxp = (unsigned long *)(ptep + PTRS_PER_PTE);
> +
> +       rpte.hidx &= ~(0xfUL << (subpg_index << 2));
> +       *hidxp = rpte.hidx  | (slot << (subpg_index << 2));
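To spell out the layout for anyone reading along: each subpage owns a 4-bit
field in the second-half word, at bit position subpg_index * 4. With
subpg_index = 2, for example, the shift is 8, the mask clears bits 8-11, and
a slot value of 0xb is stored there as 0xb00.
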
> +       /*
> +        * Commit the hidx bits to memory before returning.
> +        * Anyone reading  pte  must  ensure hidx bits are
> +        * read  only  after  reading the pte by using the

Can you drop the "only" and make it "read after reading the pte"?
"read only" is easy to confuse with "read-only".

> +        * read-side  barrier  smp_rmb(). __real_pte() can
> +        * help ensure that.
> +        */
> +       smp_wmb();
> +
> +       /* no pte bits to be modified, return 0x0UL */
> +       return 0x0UL;
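
On the ordering itself (separate from the wording nit above), the pairing the
comment is getting at is the usual publish/consume pattern. A rough sketch,
not code from this series, with placeholder variable names (the actual read
side is __real_pte()):

    	/* writer: this helper plus its caller */
    	*hidxp = new_hidx;		/* store the hash slot first      */
    	smp_wmb();			/* order hidx before the pte      */
    	*ptep = new_pte;		/* caller publishes the pte       */

    	/* reader, e.g. __real_pte() */
    	pte = *ptep;			/* observe the published pte      */
    	smp_rmb();			/* order the pte read before hidx */
    	hidx = *(ptep + PTRS_PER_PTE);	/* now safe to read the slot      */

A reader that sees the new pte is then guaranteed to also see the hidx that
was written before it.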

Acked-by: Balbir Singh <bsingharora at gmail.com>

Balbir Singh.

