[PATCH v1 14/16] mm: rename zap_page_range_single() to zap_vma_range()

David Hildenbrand (Arm) david at kernel.org
Mon Mar 2 19:22:31 AEDT 2026


On 2/28/26 13:44, Alice Ryhl wrote:
> On Fri, Feb 27, 2026 at 09:08:45PM +0100, David Hildenbrand (Arm) wrote:
>> diff --git a/drivers/android/binder/page_range.rs b/drivers/android/binder/page_range.rs
>> index fdd97112ef5c..2fddd4ed8d4c 100644
>> --- a/drivers/android/binder/page_range.rs
>> +++ b/drivers/android/binder/page_range.rs
>> @@ -130,7 +130,7 @@ pub(crate) struct ShrinkablePageRange {
>>      pid: Pid,
>>      /// The mm for the relevant process.
>>      mm: ARef<Mm>,
>> -    /// Used to synchronize calls to `vm_insert_page` and `zap_page_range_single`.
>> +    /// Used to synchronize calls to `vm_insert_page` and `zap_vma_range`.
>>      #[pin]
>>      mm_lock: Mutex<()>,
>>      /// Spinlock protecting changes to pages.
>> @@ -719,7 +719,7 @@ fn drop(self: Pin<&mut Self>) {
>>  
>>      if let Some(vma) = mmap_read.vma_lookup(vma_addr) {
>>          let user_page_addr = vma_addr + (page_index << PAGE_SHIFT);
>> -        vma.zap_page_range_single(user_page_addr, PAGE_SIZE);
>> +        vma.zap_vma_range(user_page_addr, PAGE_SIZE);
>>      }
> 
> LGTM. Be aware that this will have a merge conflict with patches
> currently in char-misc-linus that are scheduled to land in an -rc.

Thanks. @Andrew will likely run into that when rebasing; we can fix it up at that point.

> 
>> diff --git a/drivers/android/binder_alloc.c b/drivers/android/binder_alloc.c
>> index dd2046bd5cde..e4488ad86a65 100644
>> --- a/drivers/android/binder_alloc.c
>> +++ b/drivers/android/binder_alloc.c
>> @@ -1185,7 +1185,7 @@ enum lru_status binder_alloc_free_page(struct list_head *item,
>>  	if (vma) {
>>  		trace_binder_unmap_user_start(alloc, index);
>>  
>> -		zap_page_range_single(vma, page_addr, PAGE_SIZE);
>> +		zap_vma_range(vma, page_addr, PAGE_SIZE);
>>  
>>  		trace_binder_unmap_user_end(alloc, index);
> 
> LGTM.
> 
>> diff --git a/rust/kernel/mm/virt.rs b/rust/kernel/mm/virt.rs
>> index b8e59e4420f3..04b3cc925d67 100644
>> --- a/rust/kernel/mm/virt.rs
>> +++ b/rust/kernel/mm/virt.rs
>> @@ -113,7 +113,7 @@ pub fn end(&self) -> usize {
>>      /// kernel goes further in freeing unused page tables, but for the purposes of this operation
>>      /// we must only assume that the leaf level is cleared.
>>      #[inline]
>> -    pub fn zap_page_range_single(&self, address: usize, size: usize) {
>> +    pub fn zap_vma_range(&self, address: usize, size: usize) {
>>          let (end, did_overflow) = address.overflowing_add(size);
>>          if did_overflow || address < self.start() || self.end() < end {
>>              // TODO: call WARN_ONCE once Rust version of it is added
>> @@ -124,7 +124,7 @@ pub fn zap_page_range_single(&self, address: usize, size: usize) {
>>          // sufficient for this method call. This method has no requirements on the vma flags. The
>>          // address range is checked to be within the vma.
>>          unsafe {
>> -            bindings::zap_page_range_single(self.as_ptr(), address, size)
>> +            bindings::zap_vma_range(self.as_ptr(), address, size)
>>          };
>>      }
> 
> Same as previous patch: please run rustfmt. It will format on a single
> line, like this:
> 
>         unsafe { bindings::zap_vma_range(self.as_ptr(), address, size) };
> 

@Andrew, after squashing the fixup into patch #2, this hunk should look like this:

diff --git a/rust/kernel/mm/virt.rs b/rust/kernel/mm/virt.rs
index 6bfd91cfa1f4..63eb730b0b05 100644
--- a/rust/kernel/mm/virt.rs
+++ b/rust/kernel/mm/virt.rs
@@ -113,7 +113,7 @@ pub fn end(&self) -> usize {
     /// kernel goes further in freeing unused page tables, but for the purposes of this operation
     /// we must only assume that the leaf level is cleared.
     #[inline]
-    pub fn zap_page_range_single(&self, address: usize, size: usize) {
+    pub fn zap_vma_range(&self, address: usize, size: usize) {
         let (end, did_overflow) = address.overflowing_add(size);
         if did_overflow || address < self.start() || self.end() < end {
             // TODO: call WARN_ONCE once Rust version of it is added
@@ -123,7 +123,7 @@ pub fn zap_page_range_single(&self, address: usize, size: usize) {
         // SAFETY: By the type invariants, the caller has read access to this VMA, which is
         // sufficient for this method call. This method has no requirements on the vma flags. The
         // address range is checked to be within the vma.
-        unsafe { bindings::zap_page_range_single(self.as_ptr(), address, size) };
+        unsafe { bindings::zap_vma_range(self.as_ptr(), address, size) };
     }
 
     /// If the [`VM_MIXEDMAP`] flag is set, returns a [`VmaMixedMap`] to this VMA, otherwise

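For reference, the bounds check at the top of the wrapper can be sketched as a standalone function (illustrative only, not the kernel code; `vma_start`/`vma_end` stand in for `self.start()`/`self.end()` on the VMA):

```rust
// Sketch of the range validation done by zap_vma_range() on the Rust side:
// the request is accepted only if address + size does not overflow and the
// resulting range lies entirely within [vma_start, vma_end).
fn range_within_vma(vma_start: usize, vma_end: usize, address: usize, size: usize) -> bool {
    let (range_end, did_overflow) = address.overflowing_add(size);
    !did_overflow && address >= vma_start && range_end <= vma_end
}

fn main() {
    // A 4 KiB zap inside a VMA spanning [0x1000, 0x5000) is accepted.
    assert!(range_within_vma(0x1000, 0x5000, 0x2000, 0x1000));
    // An address + size that overflows usize is rejected.
    assert!(!range_within_vma(0x1000, 0x5000, usize::MAX, 0x1000));
    // A range starting below the VMA is rejected.
    assert!(!range_within_vma(0x1000, 0x5000, 0x0800, 0x1000));
}
```

In the actual method an out-of-range request silently returns (with a TODO to WARN_ONCE); the sketch just isolates the predicate being tested.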

> with the above change applied:
> 
> Acked-by: Alice Ryhl <aliceryhl at google.com> # Rust and Binder

Thanks!

-- 
Cheers,

David