[PATCH v3] KVM: PPC: Book3S HV: Migrate pinned pages out of CMA
Balbir Singh
bsingharora at gmail.com
Wed Sep 7 09:53:30 AEST 2016
On 06/09/16 21:54, Anshuman Khandual wrote:
> On 09/06/2016 11:57 AM, Balbir Singh wrote:
>>
>> When PCI device pass-through is enabled via VFIO, KVM-PPC will
>> pin pages using get_user_pages_fast(). One downside of the
>> pinning is that the page could be in a CMA region, and the CMA
>> region is used for other allocations such as the hash page table.
>> Ideally we want the pinned pages to come from a non-CMA region.
>>
>> This patch (currently only for KVM PPC with VFIO) forcefully
>> migrates such pages out of CMA (huge pages are omitted for the
>> moment). There are more efficient ways of doing this, but those
>> might be more elaborate and could impact a larger audience beyond
>> just the KVM PPC implementation.
>>
>> The magic is in new_iommu_non_cma_page(), which allocates the
>> new page from a non-CMA region.
>>
>> I've tested the patches lightly at my end. The full solution
>> requires migration of THP pages in the CMA region. That work
>> will be done incrementally on top of this.
>>
>> Previous discussion was at
>> http://permalink.gmane.org/gmane.linux.kernel.mm/136738
>>
>> Cc: Benjamin Herrenschmidt <benh at kernel.crashing.org>
>> Cc: Michael Ellerman <mpe at ellerman.id.au>
>> Cc: Paul Mackerras <paulus at ozlabs.org>
>> Cc: Alexey Kardashevskiy <aik at ozlabs.ru>
>>
>> Signed-off-by: Balbir Singh <bsingharora at gmail.com>
>> Acked-by: Alexey Kardashevskiy <aik at ozlabs.ru>
>> ---
>> arch/powerpc/include/asm/mmu_context.h | 1 +
>> arch/powerpc/mm/mmu_context_iommu.c | 81 ++++++++++++++++++++++++++++++++--
>> 2 files changed, 78 insertions(+), 4 deletions(-)
>>
>> diff --git a/arch/powerpc/include/asm/mmu_context.h b/arch/powerpc/include/asm/mmu_context.h
>> index 9d2cd0c..475d1be 100644
>> --- a/arch/powerpc/include/asm/mmu_context.h
>> +++ b/arch/powerpc/include/asm/mmu_context.h
>> @@ -18,6 +18,7 @@ extern void destroy_context(struct mm_struct *mm);
>> #ifdef CONFIG_SPAPR_TCE_IOMMU
>> struct mm_iommu_table_group_mem_t;
>>
>> +extern int isolate_lru_page(struct page *page); /* from internal.h */
>
> Small nit, can't we just add the "mm/internal.h" header here with the full path?
>
I did not think it was worth including the full header here for one declaration.
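For reference, isolate_lru_page() is only needed to pull the pinned
page off the LRU before it is handed to migrate_pages(). The driver
looks roughly like this (a sketch against the ~4.8 migrate API; the
function name and error handling are illustrative, not the exact hunk):

	/* Sketch only: migrate one gup-pinned page out of CMA. */
	static int move_page_out_of_cma(struct page *page)
	{
		LIST_HEAD(cma_migrate_pages);
		int ret;

		/* Huge pages are skipped for now, as noted above. */
		if (PageHuge(page) || PageTransHuge(page) ||
				PageCompound(page))
			return -EBUSY;

		lru_add_drain();
		ret = isolate_lru_page(page);	/* hence the extern above */
		if (ret)
			return ret;

		list_add(&page->lru, &cma_migrate_pages);
		put_page(page);			/* drop the gup reference */

		/* new_iommu_non_cma_page() supplies the replacement pages */
		ret = migrate_pages(&cma_migrate_pages, new_iommu_non_cma_page,
				    NULL, 0, MIGRATE_SYNC, MR_CMA);
		if (ret && !list_empty(&cma_migrate_pages))
			putback_movable_pages(&cma_migrate_pages);

		return ret;
	}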
>> extern bool mm_iommu_preregistered(void);
>> extern long mm_iommu_get(unsigned long ua, unsigned long entries,
>> struct mm_iommu_table_group_mem_t **pmem);
>> diff --git a/arch/powerpc/mm/mmu_context_iommu.c b/arch/powerpc/mm/mmu_context_iommu.c
>> index da6a216..e0f1c33 100644
>> --- a/arch/powerpc/mm/mmu_context_iommu.c
>> +++ b/arch/powerpc/mm/mmu_context_iommu.c
>> @@ -15,6 +15,9 @@
>> #include <linux/rculist.h>
>> #include <linux/vmalloc.h>
>> #include <linux/mutex.h>
>> +#include <linux/migrate.h>
>> +#include <linux/hugetlb.h>
>> +#include <linux/swap.h>
>> #include <asm/mmu_context.h>
>>
>> static DEFINE_MUTEX(mem_list_mutex);
>> @@ -72,6 +75,55 @@ bool mm_iommu_preregistered(void)
>> }
>> EXPORT_SYMBOL_GPL(mm_iommu_preregistered);
>>
>> +/*
>> + * Taken from alloc_migrate_target with changes to remove CMA allocations
>> + */
>> +struct page *new_iommu_non_cma_page(struct page *page, unsigned long private,
>> + int **resultp)
>> +{
>> + gfp_t gfp_mask = GFP_USER;
>> + struct page *new_page;
>> +
>> + if (PageHuge(page) || PageTransHuge(page) || PageCompound(page))
>> + return NULL;
>> +
>> + if (PageHighMem(page))
>> + gfp_mask |= __GFP_HIGHMEM;
>> +
>> + /*
>> + * We don't want the allocation to force an OOM if possible
>> + */
>> + new_page = alloc_page(gfp_mask | __GFP_NORETRY | __GFP_NOWARN);
>
> So what guarantees that the new page too won't come from a MIGRATE_CMA
> page block? Is the absence of the __GFP_MOVABLE flag enough?

I think so; that is what I am relying on and have checked for.
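To spell the reasoning out (a sketch against the ~4.8 allocator;
gfp_can_use_cma() is a hypothetical helper, not anything in the patch):

	#include <linux/gfp.h>

	/*
	 * gfpflags_to_migratetype() maps the gfp mask to a pageblock
	 * migratetype. Without __GFP_MOVABLE the request is treated as
	 * MIGRATE_UNMOVABLE, and __rmqueue() in mm/page_alloc.c only
	 * falls back to MIGRATE_CMA pageblocks for MIGRATE_MOVABLE
	 * requests.
	 */
	static bool gfp_can_use_cma(gfp_t gfp_mask)
	{
		return gfpflags_to_migratetype(gfp_mask) == MIGRATE_MOVABLE;
	}

So gfp_can_use_cma(GFP_USER) is false, while e.g.
gfp_can_use_cma(GFP_HIGHUSER_MOVABLE) would be true.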
> Also, should we not be checking that the migrate type of the newly
> allocated page is indeed not MIGRATE_CMA?
>
I don't think that is required; maybe I can add a VM_WARN_ON for debugging.
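Something along these lines, right after the alloc_page() call in
new_iommu_non_cma_page() (untested sketch; VM_WARN_ON() generates no
code without CONFIG_DEBUG_VM, and is_migrate_cma() is false without
CONFIG_CMA):

	new_page = alloc_page(gfp_mask | __GFP_NORETRY | __GFP_NOWARN);
	/* Debug aid only: warn if the replacement page still landed
	 * in a CMA pageblock despite the gfp mask above. */
	VM_WARN_ON(new_page &&
		   is_migrate_cma(get_pageblock_migratetype(new_page)));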
Balbir Singh.