[PATCH 1/7] powerpc: introduce pte_set_hash_slot() helper

Michael Ellerman mpe at ellerman.id.au
Thu Oct 19 15:52:27 AEDT 2017


Ram Pai <linuxram at us.ibm.com> writes:
> diff --git a/arch/powerpc/include/asm/book3s/64/hash-64k.h b/arch/powerpc/include/asm/book3s/64/hash-64k.h
> index 9732837..6652669 100644
> --- a/arch/powerpc/include/asm/book3s/64/hash-64k.h
> +++ b/arch/powerpc/include/asm/book3s/64/hash-64k.h
> @@ -74,6 +74,31 @@ static inline unsigned long __rpte_to_hidx(real_pte_t rpte, unsigned long index)
>  	return (pte_val(rpte.pte) >> H_PAGE_F_GIX_SHIFT) & 0xf;
>  }
>  
> +/*
> + * Commit the hash slot and return pte bits that needs to be modified.
> + * The caller is expected to modify the pte bits accordingly and
> + * commit the pte to memory.
> + */
> +static inline unsigned long pte_set_hash_slot(pte_t *ptep, real_pte_t rpte,
> +		unsigned int subpg_index, unsigned long slot)
> +{
> +	unsigned long *hidxp = (unsigned long *)(ptep + PTRS_PER_PTE);
> +
> +	rpte.hidx &= ~(0xfUL << (subpg_index << 2));
> +	*hidxp = rpte.hidx  | (slot << (subpg_index << 2));
                           ^
                           stray space here
> +	/*
> +	 * Commit the hidx bits to memory before returning.

I'd prefer we didn't use "commit", it implies the bits are actually
written to memory by the barrier, which is not true. A barrier is just
a fence that prevents certain reorderings of the accesses before it
relative to the accesses after it.

> +	 * Anyone reading  pte  must  ensure hidx bits are
> +	 * read  only  after  reading the pte by using the
> +	 * read-side  barrier  smp_rmb().

That seems OK. Though I'm reminded that I dislike your justified
comments; the odd spacing is jarring to read.

>          __real_pte() can
> +	 * help ensure that.

It doesn't help, it *does* do that.

cheers
