[PATCH v2 00/11] Remove device private pages from physical address space

Zi Yan ziy at nvidia.com
Thu Jan 8 07:21:33 AEDT 2026


On 7 Jan 2026, at 13:36, Matthew Brost wrote:

> On Wed, Jan 07, 2026 at 08:18:12PM +1100, Jordan Niethe wrote:
>> Today, when creating device private struct pages, the first step
>> is to use request_free_mem_region() to get a range of physical address
>> space large enough to represent the device's memory. This allocated
>> physical address range is then remapped as device private memory using
>> memremap_pages().
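>>
>> For illustration, the pre-series setup looks roughly like the sketch
>> below (condensed from what drivers such as lib/test_hmm.c do today;
>> error handling is omitted and "my_pagemap_ops" stands in for a
>> driver's real dev_pagemap_ops):
>>
>>         struct resource *res;
>>         void *addr;
>>
>>         /* Carve a range out of the physical address space ... */
>>         res = request_free_mem_region(&iomem_resource, size,
>>                                       "my-device-private");
>>
>>         /* ... and remap it as device private memory. */
>>         pgmap->type = MEMORY_DEVICE_PRIVATE;
>>         pgmap->range.start = res->start;
>>         pgmap->range.end = res->end;
>>         pgmap->nr_range = 1;
>>         pgmap->ops = &my_pagemap_ops;   /* .page_free, .migrate_to_ram */
>>         addr = memremap_pages(pgmap, numa_node_id());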
>>
>> Needing allocation of physical address space has some problems:
>>
>>   1) There may be insufficient physical address space to represent the
>>      device memory. KASLR reducing the usable physical address space and
>>      VM configurations with limited physical address space both increase
>>      the likelihood of hitting this, especially as device memory sizes
>>      grow. This has been observed to prevent device private memory from
>>      being initialized.
>>
>>   2) Attempting to add the device private pages to the linear map at
>>      addresses beyond the actual physical memory causes issues on
>>      architectures such as aarch64, meaning the feature does not work there [0].
>>
>> This series changes device private memory so that it does not require
>> allocation of physical address space and these problems are avoided.
>> Instead of using the physical address space, we introduce a "device
>> private address space" and allocate from there.
>>
>> A consequence of placing the device private pages outside of the
>> physical address space is that they no longer have a PFN. However, it is
>> still necessary to be able to look up a corresponding device private
>> page from a device private PTE entry, which means that we still require
>> some way to index into this device private address space. Instead of a
>> PFN, device private pages use an offset into this device private address
>> space to look up device private struct pages.
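>>
>> As a purely illustrative sketch (the real lookup lives in mm/memremap.c
>> in the last patch; the xarray here is an assumption, not the actual
>> implementation), the offset-based lookup amounts to:
>>
>>         /* Hypothetical: index device private pages by offset rather
>>          * than looking them up in the physical page map. */
>>         static DEFINE_XARRAY(device_private_pages);
>>
>>         static struct page *device_private_offset_to_page(unsigned long offset)
>>         {
>>                 return xa_load(&device_private_pages, offset);
>>         }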
>>
>> The problem that then needs to be addressed is how to avoid confusing
>> these device private offsets with PFNs. It is the inherently limited
>> usage of device private pages themselves which makes this possible. A
>> device private page is only used for userspace mappings, so we do not
>> need to be concerned with it being used more broadly within the mm. This
>> means that the only way the core kernel looks up these pages is via
>> the page table, where their PTE already indicates whether they refer to a
>> device private page via their swap type, e.g. SWP_DEVICE_WRITE. We can
>> use this information to determine if the PTE contains a PFN which should
>> be looked up in the page map, or a device private offset which should be
>> looked up elsewhere.
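>>
>> Using today's upstream helpers from <linux/swapops.h>, the distinction
>> can be sketched as follows (device_private_offset_to_page() is this
>> series' helper; the series only changes what the offset field means):
>>
>>         struct page *page;
>>         swp_entry_t entry = pte_to_swp_entry(pte);
>>
>>         if (is_device_private_entry(entry))
>>                 /* Offset indexes the device private address space. */
>>                 page = device_private_offset_to_page(swp_offset(entry));
>>         else if (is_migration_entry(entry))
>>                 /* Offset is a real PFN; look it up in the page map. */
>>                 page = pfn_swap_entry_to_page(entry);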
>>
>> This applies when we are creating PTE entries for device private pages -
>> because they have their own type, they must already be handled
>> separately, so it is a small step to convert them to a device private
>> PFN now too.
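>>
>> For reference, device private PTE entries are already created on their
>> own path today via helpers like these from <linux/swapops.h> (the new
>> helpers added by this series wrap this with a struct page based
>> interface):
>>
>>         swp_entry_t entry;
>>
>>         if (writable)
>>                 entry = make_writable_device_private_entry(pfn);
>>         else
>>                 entry = make_readable_device_private_entry(pfn);
>>         set_pte_at(mm, addr, ptep, swp_entry_to_pte(entry));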
>>
>> The first part of the series updates callers that might now encounter
>> device private offsets so that they track this extra state.
>>
>> The last patch contains the bulk of the work: it changes how we
>> convert between device private pages and device private offsets, and
>> introduces a new interface for allocating device private pages without
>> the need to reserve physical address space.
>>
>> By removing the device private pages from the physical address space,
>> this series also opens up the possibility of moving away from tracking
>> device private memory using struct pages in the future. This is
>> desirable because, on systems with large amounts of memory, these device
>> private struct pages use a significant amount of memory and take a
>> significant amount of time to initialize.
>>
>> *** Changes in v2 ***
>>
>> The most significant change in v2 is addressing code paths that are
>> common between MEMORY_DEVICE_PRIVATE and MEMORY_DEVICE_COHERENT devices.
>>
>> This had been overlooked in previous revisions.
>>
>> To do this we introduce a migrate_pfn_from_page() helper which will call
>> device_private_offset_to_page() and set the MIGRATE_PFN_DEVICE_PRIVATE
>> flag if required.
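>>
>> A hedged sketch of the helper's shape (the exact signature is defined
>> in the patch; page_to_device_private_offset() is a hypothetical name
>> for the inverse conversion):
>>
>>         static unsigned long migrate_pfn_from_page(struct page *page)
>>         {
>>                 /* Hypothetical body based on the description above. */
>>                 if (is_device_private_page(page))
>>                         return migrate_pfn(page_to_device_private_offset(page)) |
>>                                MIGRATE_PFN_DEVICE_PRIVATE;
>>
>>                 return migrate_pfn(page_to_pfn(page));
>>         }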
>>
>> In places where we could have either a device private offset
>> (MEMORY_DEVICE_PRIVATE) or a PFN (MEMORY_DEVICE_COHERENT), we now
>> use an mpfn to disambiguate. This includes some users in the drivers
>> and migrate_device_{pfns,range}().
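>>
>> For reference, the existing mpfn encoding in <linux/migrate.h> keeps
>> flags in the low bits with the index above MIGRATE_PFN_SHIFT, which is
>> what lets a flag such as MIGRATE_PFN_DEVICE_PRIVATE ride along with
>> either a PFN or a device private offset:
>>
>>         static inline unsigned long migrate_pfn(unsigned long pfn)
>>         {
>>                 return (pfn << MIGRATE_PFN_SHIFT) | MIGRATE_PFN_VALID;
>>         }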
>>
>> Seeking opinions on whether using mpfns like this is acceptable or
>> whether a new type would be preferred.
>>
>>   - mm/migrate_device: Introduce migrate_pfn_from_page() helper
>>     - New to series
>>
>>   - drm/amdkfd: Use migrate pfns internally
>>     - New to series
>>
>>   - mm/migrate_device: Make migrate_device_{pfns,range}() take mpfns
>>     - New to series
>>
>>   - mm/migrate_device: Add migrate PFN flag to track device private pages
>>     - Update for migrate_pfn_from_page()
>>     - Rename to MIGRATE_PFN_DEVICE_PRIVATE
>>     - drm/amd: Check adev->gmc.xgmi.connected_to_cpu
>>     - lib/test_hmm.c: Check chunk->pagemap.type == MEMORY_DEVICE_PRIVATE
>>
>>   - mm: Add helpers to create migration entries from struct pages
>>     - Add a flags param
>>
>>   - mm: Add a new swap type for migration entries of device private pages
>>     - Add softleaf_is_migration_device_private_read()
>>
>>   - mm: Add helpers to create device private entries from struct pages
>>     - Add a flags param
>>
>>   - mm: Remove device private pages from the physical address space
>>     - Make sure the last member of struct dev_pagemap remains
>>       DECLARE_FLEX_ARRAY(struct range, ranges);
>>
>> Testing:
>> - selftests/mm/hmm-tests on an amd64 VM
>>
>> * NOTE: I will need help in testing the driver changes *
>>
>
> Thanks for the series. For some reason Intel's CI couldn't apply this
> series to drm-tip to get results [1]. I'll manually apply it, run all
> our SVM tests, and get back to you with results, plus review the changes
> here. For future reference, if you want to use our CI system the series
> must apply to drm-tip; feel free to rebase this series and just send it
> to the intel-xe list if you want CI results.
>
> I was also wondering if Nvidia could help review one of our core MM
> patches [2], which is gating the enabling of 2M device pages too?

I will take a look. But next time, do you mind Cc'ing the MM maintainers and
reviewers listed in the MAINTAINERS file? Otherwise it is hard for people to
notice the series, as they cannot check every email on linux-mm.

Thanks.

>
> Matt
>
> [1] https://patchwork.freedesktop.org/series/159738/
> [2] https://patchwork.freedesktop.org/patch/694775/?series=159119&rev=1
>
>> Revisions:
>> - RFC: https://lore.kernel.org/all/20251128044146.80050-1-jniethe@nvidia.com/
>> - v1: https://lore.kernel.org/all/20251231043154.42931-1-jniethe@nvidia.com/
>>
>> [0] https://lore.kernel.org/lkml/CAMj1kXFZ=4hLL1w6iCV5O5uVoVLHAJbc0rr40j24ObenAjXe9w@mail.gmail.com/
>>
>> Jordan Niethe (11):
>>   mm/migrate_device: Introduce migrate_pfn_from_page() helper
>>   drm/amdkfd: Use migrate pfns internally
>>   mm/migrate_device: Make migrate_device_{pfns,range}() take mpfns
>>   mm/migrate_device: Add migrate PFN flag to track device private pages
>>   mm/page_vma_mapped: Add flags to page_vma_mapped_walk::pfn to track
>>     device private pages
>>   mm: Add helpers to create migration entries from struct pages
>>   mm: Add a new swap type for migration entries of device private pages
>>   mm: Add helpers to create device private entries from struct pages
>>   mm/util: Add flag to track device private pages in page snapshots
>>   mm/hmm: Add flag to track device private pages
>>   mm: Remove device private pages from the physical address space
>>
>>  Documentation/mm/hmm.rst                 |  11 +-
>>  arch/powerpc/kvm/book3s_hv_uvmem.c       |  43 ++---
>>  drivers/gpu/drm/amd/amdkfd/kfd_migrate.c |  45 +++---
>>  drivers/gpu/drm/amd/amdkfd/kfd_migrate.h |   2 +-
>>  drivers/gpu/drm/drm_pagemap.c            |  11 +-
>>  drivers/gpu/drm/nouveau/nouveau_dmem.c   |  45 ++----
>>  drivers/gpu/drm/xe/xe_svm.c              |  37 ++---
>>  fs/proc/page.c                           |   6 +-
>>  include/drm/drm_pagemap.h                |   8 +-
>>  include/linux/hmm.h                      |   7 +-
>>  include/linux/leafops.h                  | 116 ++++++++++++--
>>  include/linux/memremap.h                 |  64 +++++++-
>>  include/linux/migrate.h                  |  23 ++-
>>  include/linux/mm.h                       |   9 +-
>>  include/linux/rmap.h                     |  33 +++-
>>  include/linux/swap.h                     |   8 +-
>>  include/linux/swapops.h                  | 136 ++++++++++++++++
>>  lib/test_hmm.c                           |  86 ++++++----
>>  mm/debug.c                               |   9 +-
>>  mm/hmm.c                                 |   5 +-
>>  mm/huge_memory.c                         |  43 ++---
>>  mm/hugetlb.c                             |  15 +-
>>  mm/memory.c                              |   5 +-
>>  mm/memremap.c                            | 193 ++++++++++++++++++-----
>>  mm/migrate.c                             |   6 +-
>>  mm/migrate_device.c                      |  76 +++++----
>>  mm/mm_init.c                             |   8 +-
>>  mm/mprotect.c                            |  10 +-
>>  mm/page_vma_mapped.c                     |  32 +++-
>>  mm/rmap.c                                |  59 ++++---
>>  mm/util.c                                |   8 +-
>>  mm/vmscan.c                              |   2 +-
>>  32 files changed, 822 insertions(+), 339 deletions(-)
>>
>>
>> base-commit: f8f9c1f4d0c7a64600e2ca312dec824a0bc2f1da
>> -- 
>> 2.34.1
>>


Best Regards,
Yan, Zi

