[PATCH 8/9] powerpc/mm: Wire up hpte_removebolted for powernv
Oliver O'Halloran
oohall at gmail.com
Thu Apr 13 14:21:53 AEST 2017
On Wed, Apr 12, 2017 at 11:53 AM, Balbir Singh <bsingharora at gmail.com> wrote:
> On Wed, 2017-04-12 at 03:42 +1000, Oliver O'Halloran wrote:
>> From: Rashmica Gupta <rashmica.g at gmail.com>
>>
>> Adds support for removing bolted mappings (i.e. the kernel linear mapping)
>> on powernv. This is needed to support the memory hot unplug operations
>> required for the teardown of DAX/PMEM devices.
>>
>> Cc: Rashmica Gupta <rashmica.g at gmail.com>
>> Cc: Anton Blanchard <anton at samba.org>
>> Signed-off-by: Oliver O'Halloran <oohall at gmail.com>
>> ---
>> Could the original author of this add their S-o-b? I pulled it out of
>> Rashmica's memtrace patch, but I remember someone saying Anton wrote
>> it originally.
>> ---
>> arch/powerpc/mm/hash_native_64.c | 31 +++++++++++++++++++++++++++++++
>> 1 file changed, 31 insertions(+)
>>
>> diff --git a/arch/powerpc/mm/hash_native_64.c b/arch/powerpc/mm/hash_native_64.c
>> index 65bb8f33b399..9ba91d4905a4 100644
>> --- a/arch/powerpc/mm/hash_native_64.c
>> +++ b/arch/powerpc/mm/hash_native_64.c
>> @@ -407,6 +407,36 @@ static void native_hpte_updateboltedpp(unsigned long newpp, unsigned long ea,
>> tlbie(vpn, psize, psize, ssize, 0);
>> }
>>
>> +/*
>> + * Remove a bolted kernel entry. Memory hotplug uses this.
>> + *
>> + * No need to lock here because we should be the only user.
>
> As long as this happens after the necessary isolation and is called from
> arch_remove_memory(), I think we should be fine.
>
>> + */
>> +static int native_hpte_removebolted(unsigned long ea, int psize, int ssize)
>> +{
>> + unsigned long vpn;
>> + unsigned long vsid;
>> + long slot;
>> + struct hash_pte *hptep;
>> +
>> + vsid = get_kernel_vsid(ea, ssize);
>> + vpn = hpt_vpn(ea, vsid, ssize);
>> +
>> + slot = native_hpte_find(vpn, psize, ssize);
>> + if (slot == -1)
>> + return -ENOENT;
>
> If slot == -1, it means someone else removed the HPTE entry? Are we racing?
> I suspect we should never hit this situation during hotunplug, specifically
> since this is bolted.
Or the slot was never populated in the first place. I'd rather keep the
current behaviour since it aligns with the behaviour of
pSeries_lpar_hpte_removebolted, and we might hit these situations in the
future if the sub-section hotplug patches are merged (big if...). See the
sketch below.
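For reference, the pseries hook does essentially the same thing. A rough
sketch of pSeries_lpar_hpte_removebolted (paraphrased from
arch/powerpc/platforms/pseries/lpar.c, so don't take the details as gospel):

static int pSeries_lpar_hpte_removebolted(unsigned long ea,
					  int psize, int ssize)
{
	unsigned long vpn, vsid;
	long slot;

	vsid = get_kernel_vsid(ea, ssize);
	vpn = hpt_vpn(ea, vsid, ssize);

	/* -ENOENT if the slot was never populated (or is already gone) */
	slot = pSeries_lpar_hpte_find(vpn, psize, ssize);
	if (slot == -1)
		return -ENOENT;

	pSeries_lpar_hpte_invalidate(slot, vpn, psize, 0, ssize, 0);
	return 0;
}

so returning -ENOENT here keeps the two implementations behaving the same
way for their callers.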
>
>> +
>> + hptep = htab_address + slot;
>> +
>> + /* Invalidate the hpte */
>> + hptep->v = 0;
>
> Under DEBUG or otherwise, I would add more checks like
>
> 1. was hpte_v & HPTE_V_VALID and BOLTED set? If not, we've already invalidated
> that hpte and we can skip the tlbie. Since this was bolted you might be right
> that it is always valid and bolted
A VM_WARN_ON() if the bolted bit is clear might be appropriate. We don't
need to check the valid bit since native_hpte_find() will fail if it's
cleared.
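Something like this on top of the patch is what I had in mind (just a
sketch; hptep->v is __be64, hence the be64_to_cpu()):

	hptep = htab_address + slot;

	/* a bolted entry should never have the bolted bit clear here */
	VM_WARN_ON(!(be64_to_cpu(hptep->v) & HPTE_V_BOLTED));

	/* Invalidate the hpte */
	hptep->v = 0;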
>
>> +
>> + /* Invalidate the TLB */
>> + tlbie(vpn, psize, psize, ssize, 0);
>
> The API also does not clear linear_map_hash_slots[] under DEBUG_PAGEALLOC
I'm not sure which API you're referring to here. The tracking for
linear_map_hash_slots[] is agnostic of mmu_hash_ops, so we shouldn't be
touching it here. It also looks like DEBUG_PAGEALLOC is a bit broken with
hotplugged memory anyway, so I think that's a fix for a different patch.
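FWIW, from my reading of kernel_unmap_linear_page() in
arch/powerpc/mm/hash_utils_64.c, the DEBUG_PAGEALLOC path manages the array
itself and tears the HPTE down through the ops table. Roughly (condensed,
details from memory, so double-check before relying on it):

	spin_lock(&linear_map_hash_lock);
	hidx = linear_map_hash_slots[lmi] & 0x7f;
	linear_map_hash_slots[lmi] = 0;	/* slot tracking is cleared here */
	spin_unlock(&linear_map_hash_lock);

	/* ... and the entry goes away via hpte_invalidate, not removebolted */
	mmu_hash_ops.hpte_invalidate(slot, vpn, mmu_linear_psize,
				     mmu_linear_psize, mmu_kernel_ssize, 0);

so there's nothing for native_hpte_removebolted() to update.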
>
>> + return 0;
>> +}
>> +
>> +
>
> Balbir Singh.