[PATCH 2/2] mm/dax: Don't enable huge dax mapping by default
Aneesh Kumar K.V
aneesh.kumar at linux.ibm.com
Wed Mar 20 19:06:43 AEDT 2019
Dan Williams <dan.j.williams at intel.com> writes:
>
>> Now what page size will be used for mapping the vmemmap?
>
> That's up to the architecture's vmemmap_populate() implementation.
>
>> Architectures will possibly use a PMD_SIZE mapping for the vmemmap if
>> supported. Now, with the above example, a device-dax with struct page
>> in the device will have its pfn reserve area aligned only to PAGE_SIZE?
>> We can't map that using a PMD_SIZE page size?
>
> IIUC, that's a different alignment. Currently that's handled by
> padding the reservation area up to a section (128MB on x86) boundary,
> but I'm working on patches to allow sub-section sized ranges to be
> mapped.
I am missing something w.r.t. the code. The code below aligns that using
nd_pfn->align:
	if (nd_pfn->mode == PFN_MODE_PMEM) {
		unsigned long memmap_size;

		/*
		 * vmemmap_populate_hugepages() allocates the memmap array in
		 * HPAGE_SIZE chunks.
		 */
		memmap_size = ALIGN(64 * npfns, HPAGE_SIZE);
		offset = ALIGN(start + SZ_8K + memmap_size + dax_label_reserve,
			       nd_pfn->align) - start;
	}
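
To make the arithmetic concrete, here is a minimal userspace sketch of
that computation (the values for start, npfns, HPAGE_SIZE and
nd_pfn->align are assumed for illustration, and dax_label_reserve is
assumed to be 0):

	#include <stdio.h>

	#define ALIGN(x, a)	(((x) + (a) - 1) & ~((unsigned long)(a) - 1))

	int main(void)
	{
		unsigned long start = 0x100000000UL;	/* assumed device start */
		unsigned long npfns = 32768;		/* assumed: 128M of 4K pfns */
		unsigned long hpage_size = 2UL << 20;	/* assumed HPAGE_SIZE: 2M */
		unsigned long align = 2UL << 20;	/* assumed nd_pfn->align: 2M */
		unsigned long sz_8k = 8192;		/* SZ_8K info block area */

		/* 64 bytes of struct page per pfn, padded out to full
		 * HPAGE_SIZE chunks, as the kernel comment describes */
		unsigned long memmap_size = ALIGN(64 * npfns, hpage_size);

		/* data starts at the first align boundary past the memmap;
		 * dax_label_reserve assumed 0 here */
		unsigned long offset = ALIGN(start + sz_8k + memmap_size,
					     align) - start;

		printf("memmap_size=%lu offset=%lu\n", memmap_size, offset);
		return 0;
	}

With these assumed values, 64 * npfns is already an exact 2M multiple,
so memmap_size stays at 2M and offset comes out at 4M.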
IIUC, that kernel snippet is finding the offset at which to place the
start of the vmemmap. And that offset has to be aligned to the page size
with which we may end up mapping the vmemmap area, right?
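
To illustrate why that alignment matters, here is a toy sketch (not the
kernel's actual vmemmap_populate(), just the PMD-or-fallback pattern
that vmemmap_populate_hugepages()-style code follows), counting the
mappings needed for a range that is only PAGE_SIZE-aligned:

	#include <stdio.h>

	#define PAGE_SIZE	4096UL
	#define PMD_SIZE	(2UL << 20)
	#define IS_ALIGNED(x, a)	(((x) & ((a) - 1)) == 0)

	static unsigned long pmd_maps, pte_maps;

	/* Use a PMD whenever the address is PMD-aligned and a full PMD
	 * still fits; otherwise fall back to base pages. */
	static void populate(unsigned long start, unsigned long end)
	{
		unsigned long va = start;

		while (va < end) {
			if (IS_ALIGNED(va, PMD_SIZE) && end - va >= PMD_SIZE) {
				pmd_maps++;
				va += PMD_SIZE;
			} else {
				pte_maps++;
				va += PAGE_SIZE;
			}
		}
	}

	int main(void)
	{
		/* a 4M range starting one base page past a PMD boundary */
		populate(0x1000, 0x1000 + (4UL << 20));
		printf("pmd_maps=%lu pte_maps=%lu\n", pmd_maps, pte_maps);
		return 0;
	}

This prints pmd_maps=1 pte_maps=512; with a PMD_SIZE-aligned start, the
same 4M range would take 2 PMD mappings and no PTEs.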
Yes, we find npfns by aligning up using PAGES_PER_SECTION. But that is
only to compute how many pfns we should map for this pfn device, right?
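
That round-up is a separate step from the mapping alignment; a minimal
sketch of just the section round-up (assuming x86-style 128M sections,
so PAGES_PER_SECTION = 32768):

	#include <stdio.h>

	#define PAGE_SIZE		4096UL
	#define PAGES_PER_SECTION	32768UL	/* assumed: 128M sections */
	#define ALIGN(x, a)	(((x) + (a) - 1) & ~((unsigned long)(a) - 1))

	int main(void)
	{
		unsigned long size = 200UL << 20;	/* assumed 200M namespace */
		unsigned long npfns = size / PAGE_SIZE;

		/* Rounding to a full section only sizes the struct page
		 * reservation; it says nothing about the page size used
		 * to map that reservation. */
		npfns = ALIGN(npfns, PAGES_PER_SECTION);
		printf("npfns=%lu (%lu sections)\n",
		       npfns, npfns / PAGES_PER_SECTION);
		return 0;
	}

Here a 200M namespace rounds up to npfns=65536, i.e. two full sections.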
-aneesh