These are the functions needed by Binder's shrinker.
Binder uses zap_page_range_single in the shrinker path to remove an
unused page from the mmap'd region. Note that pages are only removed
from the mmap'd region lazily, when the shrinker asks for it.
Binder uses list_lru_add/del to keep track of the shrinker lru list, and
it can't use the _obj variants because the list head is not stored inline in
the page actually being freed from the lru, so page_to_nid(virt_to_page(item))
on the list head computes the nid of the wrong page.
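For illustration only (not part of this patch; the struct and function names
below are made up), a minimal sketch of the difference, assuming the list head
lives in separately allocated bookkeeping rather than in the data page itself:

/* needs <linux/list_lru.h> and <linux/mm.h>; hypothetical bookkeeping struct */
struct page_entry {
	struct list_head lru;		/* lives in a kmalloc'd array */
	struct page *page;		/* the page the shrinker will free */
};

static void track_freeable_page(struct list_lru *freelist, struct page_entry *e)
{
	/*
	 * list_lru_add_obj() would derive the node from
	 * page_to_nid(virt_to_page(&e->lru)), i.e. from the page backing the
	 * bookkeeping array, not from e->page. So pass the nid of the data
	 * page explicitly instead.
	 */
	list_lru_add(freelist, &e->lru, page_to_nid(e->page), NULL);
}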
Signed-off-by: Alice Ryhl <aliceryhl@google.com>
---
mm/list_lru.c | 2 ++
mm/memory.c | 1 +
2 files changed, 3 insertions(+)
diff --git a/mm/list_lru.c b/mm/list_lru.c
index ec48b5dadf519a5296ac14cda035c067f9e448f8..bf95d73c9815548a19db6345f856cee9baad22e3 100644
--- a/mm/list_lru.c
+++ b/mm/list_lru.c
@@ -179,6 +179,7 @@ bool list_lru_add(struct list_lru *lru, struct list_head *item, int nid,
unlock_list_lru(l, false);
return false;
}
+EXPORT_SYMBOL_GPL(list_lru_add);
bool list_lru_add_obj(struct list_lru *lru, struct list_head *item)
{
@@ -216,6 +217,7 @@ bool list_lru_del(struct list_lru *lru, struct list_head *item, int nid,
unlock_list_lru(l, false);
return false;
}
+EXPORT_SYMBOL_GPL(list_lru_del);
bool list_lru_del_obj(struct list_lru *lru, struct list_head *item)
{
diff --git a/mm/memory.c b/mm/memory.c
index da360a6eb8a48e29293430d0c577fb4b6ec58099..64083ace239a2caf58e1645dd5d91a41d61492c4 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -2168,6 +2168,7 @@ void zap_page_range_single(struct vm_area_struct *vma, unsigned long address,
zap_page_range_single_batched(&tlb, vma, address, size, details);
tlb_finish_mmu(&tlb);
}
+EXPORT_SYMBOL(zap_page_range_single);
/**
* zap_vma_ptes - remove ptes mapping the vma
--
2.53.0.rc2.204.g2597b5adb4-goog
On Thu, Feb 05, 2026 at 10:51:28AM +0000, Alice Ryhl wrote:
> These are the functions needed by Binder's shrinker.
>
> Binder uses zap_page_range_single in the shrinker path to remove an
> unused page from the mmap'd region. Note that pages are only removed
> from the mmap'd region lazily when shrinker asks for it.
>
> Binder uses list_lru_add/del to keep track of the shrinker lru list, and
> it can't use _obj because the list head is not stored inline in the page
> actually being lru freed, so page_to_nid(virt_to_page(item)) on the list
> head computes the nid of the wrong page.
>
> Signed-off-by: Alice Ryhl <aliceryhl@google.com>
> ---
> mm/list_lru.c | 2 ++
> mm/memory.c | 1 +
> 2 files changed, 3 insertions(+)
>
> diff --git a/mm/list_lru.c b/mm/list_lru.c
> index ec48b5dadf519a5296ac14cda035c067f9e448f8..bf95d73c9815548a19db6345f856cee9baad22e3 100644
> --- a/mm/list_lru.c
> +++ b/mm/list_lru.c
> @@ -179,6 +179,7 @@ bool list_lru_add(struct list_lru *lru, struct list_head *item, int nid,
> unlock_list_lru(l, false);
> return false;
> }
> +EXPORT_SYMBOL_GPL(list_lru_add);
>
> bool list_lru_add_obj(struct list_lru *lru, struct list_head *item)
> {
> @@ -216,6 +217,7 @@ bool list_lru_del(struct list_lru *lru, struct list_head *item, int nid,
> unlock_list_lru(l, false);
> return false;
> }
> +EXPORT_SYMBOL_GPL(list_lru_del);
Same point as before about exporting symbols, but given the _obj variants are
exported already this one is more valid.
>
> bool list_lru_del_obj(struct list_lru *lru, struct list_head *item)
> {
> diff --git a/mm/memory.c b/mm/memory.c
> index da360a6eb8a48e29293430d0c577fb4b6ec58099..64083ace239a2caf58e1645dd5d91a41d61492c4 100644
> --- a/mm/memory.c
> +++ b/mm/memory.c
> @@ -2168,6 +2168,7 @@ void zap_page_range_single(struct vm_area_struct *vma, unsigned long address,
> zap_page_range_single_batched(&tlb, vma, address, size, details);
> tlb_finish_mmu(&tlb);
> }
> +EXPORT_SYMBOL(zap_page_range_single);
Sorry but I don't want this exported at all.
This is an internal implementation detail which allows fine-grained control of
behaviour via struct zap_details (which binder doesn't use, of course :)
We either need a wrapper that eliminates this parameter (but then we're adding a
wrapper to this behaviour that is literally for one driver that is _temporarily_
being modularised, which is weak justification), or use of a function that invokes
it that is currently exported.
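Purely to illustrate the first option (invented name, written against the
current four-argument signature, not a concrete proposal):

/* Hypothetical wrapper, name made up for illustration only. */
void zap_page_range_single_nodetails(struct vm_area_struct *vma,
				     unsigned long address, unsigned long size)
{
	zap_page_range_single(vma, address, size, NULL);
}
EXPORT_SYMBOL_GPL(zap_page_range_single_nodetails);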
Again the general policy with exports is that we really don't want to provide
them at all if we can help it, and if we do, only when it's really justified.
Drivers by nature generally abuse any interfaces provided; we reduce the surface
area of bugs in the kernel by minimising what's available (even via headers for
in-tree...)
>
> /**
> * zap_vma_ptes - remove ptes mapping the vma
>
> --
> 2.53.0.rc2.204.g2597b5adb4-goog
>
Cheers, Lorenzo
On Thu, Feb 05, 2026 at 11:29:04AM +0000, Lorenzo Stoakes wrote:
> We either need a wrapper that eliminates this parameter (but then we're adding a
> wrapper to this behaviour that is literally for one driver that is _temporarily_
> being modularised which is weak justifiction), or use of a function that invokes
> it that is currently exported.

I have not talked with distros about it, but quite a few of them enable
Binder because one or two applications want to use Binder to emulate
Android. I imagine that even if Android itself goes back to built-in,
distros would want it as a module so that you don't have to load it for
every user, rather than for the few users that want to use waydroid or
similar.

A few examples:
https://gitlab.archlinux.org/archlinux/packaging/packages/linux/-/blob/5711a17344ec7cfd90443374a30d5cd3e9a9439e/config#L10993
https://salsa.debian.org/kernel-team/linux/-/blob/debian/latest/debian/config/arm64/config?ref_type=heads#L106
https://gitlab.com/cki-project/kernel-ark/-/blob/os-build/redhat/configs/fedora/generic/x86/CONFIG_ANDROID_BINDER_IPC?ref_type=heads

Alice
On Thu, Feb 05, 2026 at 12:07:00PM +0000, Alice Ryhl wrote:
> On Thu, Feb 05, 2026 at 11:29:04AM +0000, Lorenzo Stoakes wrote:
> > We either need a wrapper that eliminates this parameter (but then we're adding a
> > wrapper to this behaviour that is literally for one driver that is _temporarily_
> > being modularised which is weak justifiction), or use of a function that invokes
> > it that is currently exported.
>
> I have not talked with distros about it, but quite a few of them enable
> Binder because one or two applications want to use Binder to emulate
> Android. I imagine that even if Android itself goes back to built-in,
> distros would want it as a module so that you don't have to load it for
> every user, rather than for the few users that want to use waydroid or
> similar.
>
> A few examples:
> https://gitlab.archlinux.org/archlinux/packaging/packages/linux/-/blob/5711a17344ec7cfd90443374a30d5cd3e9a9439e/config#L10993
> https://salsa.debian.org/kernel-team/linux/-/blob/debian/latest/debian/config/arm64/config?ref_type=heads#L106
> https://gitlab.com/cki-project/kernel-ark/-/blob/os-build/redhat/configs/fedora/generic/x86/CONFIG_ANDROID_BINDER_IPC?ref_type=heads

I mean you should update the cover letter to make this clear and drop the whole
reference to things being temporary, this is a lot more strident than the cover
letter is.

In any case, that has nothing to do with whether or not we export internal
implementation details to a module.

Something being in-tree compiled gets to use actually far too many internal
interfaces that really should not have been exposed, we've been far too lenient
about that, and that's something I want to address (mm has mm/*.h internal-only
headers, not sure how we'll deal with that with rust though).

Sadly even with in-tree, every interface you make available leads to driver
abuse. So something compiled in-tree using X, Y or Z interface doesn't mean that
it's correct or even wise, and modularising forces you to rethink that.

folio_mkclean() is a great example, we were about to be able to make that
mm-internal then 2 more filesystems started using it oops :)

>
> Alice

Cheers, Lorenzo
On 2/5/26 12:29, Lorenzo Stoakes wrote:
> On Thu, Feb 05, 2026 at 10:51:28AM +0000, Alice Ryhl wrote:
>> These are the functions needed by Binder's shrinker.
>>
>> Binder uses zap_page_range_single in the shrinker path to remove an
>> unused page from the mmap'd region. Note that pages are only removed
>> from the mmap'd region lazily when shrinker asks for it.
>>
>> Binder uses list_lru_add/del to keep track of the shrinker lru list, and
>> it can't use _obj because the list head is not stored inline in the page
>> actually being lru freed, so page_to_nid(virt_to_page(item)) on the list
>> head computes the nid of the wrong page.
>>
>> Signed-off-by: Alice Ryhl <aliceryhl@google.com>
>> ---
>> mm/list_lru.c | 2 ++
>> mm/memory.c | 1 +
>> 2 files changed, 3 insertions(+)
>>
>> diff --git a/mm/list_lru.c b/mm/list_lru.c
>> index ec48b5dadf519a5296ac14cda035c067f9e448f8..bf95d73c9815548a19db6345f856cee9baad22e3 100644
>> --- a/mm/list_lru.c
>> +++ b/mm/list_lru.c
>> @@ -179,6 +179,7 @@ bool list_lru_add(struct list_lru *lru, struct list_head *item, int nid,
>> unlock_list_lru(l, false);
>> return false;
>> }
>> +EXPORT_SYMBOL_GPL(list_lru_add);
>>
>> bool list_lru_add_obj(struct list_lru *lru, struct list_head *item)
>> {
>> @@ -216,6 +217,7 @@ bool list_lru_del(struct list_lru *lru, struct list_head *item, int nid,
>> unlock_list_lru(l, false);
>> return false;
>> }
>> +EXPORT_SYMBOL_GPL(list_lru_del);
>
> Same point as before about exporting symbols, but given the _obj variants are
> exported already this one is more valid.
>
>>
>> bool list_lru_del_obj(struct list_lru *lru, struct list_head *item)
>> {
>> diff --git a/mm/memory.c b/mm/memory.c
>> index da360a6eb8a48e29293430d0c577fb4b6ec58099..64083ace239a2caf58e1645dd5d91a41d61492c4 100644
>> --- a/mm/memory.c
>> +++ b/mm/memory.c
>> @@ -2168,6 +2168,7 @@ void zap_page_range_single(struct vm_area_struct *vma, unsigned long address,
>> zap_page_range_single_batched(&tlb, vma, address, size, details);
>> tlb_finish_mmu(&tlb);
>> }
>> +EXPORT_SYMBOL(zap_page_range_single);
>
> Sorry but I don't want this exported at all.
>
> This is an internal implementation detail which allows fine-grained control of
> behaviour via struct zap_details (which binder doesn't use, of course :)
I don't expect anybody to set zap_details, but yeah, it could be abused.
It could be abused right now from anywhere else in the kernel
where we don't build as a module :)
Apparently we export a similar function in rust where we just removed the last parameter.
I think zap_page_range_single() is only called with non-NULL from mm/memory.c.
So the following makes likely sense even outside of the context of this series:
From d2a2d20994456b9a66008b7fef12e379e76fc9f8 Mon Sep 17 00:00:00 2001
From: "David Hildenbrand (arm)" <david@kernel.org>
Date: Thu, 5 Feb 2026 12:42:09 +0100
Subject: [PATCH] tmp
Signed-off-by: David Hildenbrand (arm) <david@kernel.org>
---
arch/s390/mm/gmap_helpers.c | 2 +-
drivers/android/binder_alloc.c | 2 +-
include/linux/mm.h | 4 ++--
kernel/bpf/arena.c | 3 +--
kernel/events/core.c | 2 +-
mm/memory.c | 15 +++++++++------
net/ipv4/tcp.c | 5 ++---
rust/kernel/mm/virt.rs | 2 +-
8 files changed, 18 insertions(+), 17 deletions(-)
diff --git a/arch/s390/mm/gmap_helpers.c b/arch/s390/mm/gmap_helpers.c
index d41b19925a5a..859f5570c3dc 100644
--- a/arch/s390/mm/gmap_helpers.c
+++ b/arch/s390/mm/gmap_helpers.c
@@ -102,7 +102,7 @@ void gmap_helper_discard(struct mm_struct *mm, unsigned long vmaddr, unsigned lo
if (!vma)
return;
if (!is_vm_hugetlb_page(vma))
- zap_page_range_single(vma, vmaddr, min(end, vma->vm_end) - vmaddr, NULL);
+ zap_page_range_single(vma, vmaddr, min(end, vma->vm_end) - vmaddr);
vmaddr = vma->vm_end;
}
}
diff --git a/drivers/android/binder_alloc.c b/drivers/android/binder_alloc.c
index 979c96b74cad..b0201bc6893a 100644
--- a/drivers/android/binder_alloc.c
+++ b/drivers/android/binder_alloc.c
@@ -1186,7 +1186,7 @@ enum lru_status binder_alloc_free_page(struct list_head *item,
if (vma) {
trace_binder_unmap_user_start(alloc, index);
- zap_page_range_single(vma, page_addr, PAGE_SIZE, NULL);
+ zap_page_range_single(vma, page_addr, PAGE_SIZE);
trace_binder_unmap_user_end(alloc, index);
}
diff --git a/include/linux/mm.h b/include/linux/mm.h
index f0d5be9dc736..b7cc6ef49917 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -2621,11 +2621,11 @@ struct page *vm_normal_page_pud(struct vm_area_struct *vma, unsigned long addr,
void zap_vma_ptes(struct vm_area_struct *vma, unsigned long address,
unsigned long size);
void zap_page_range_single(struct vm_area_struct *vma, unsigned long address,
- unsigned long size, struct zap_details *details);
+ unsigned long size);
static inline void zap_vma_pages(struct vm_area_struct *vma)
{
zap_page_range_single(vma, vma->vm_start,
- vma->vm_end - vma->vm_start, NULL);
+ vma->vm_end - vma->vm_start);
}
void unmap_vmas(struct mmu_gather *tlb, struct ma_state *mas,
struct vm_area_struct *start_vma, unsigned long start,
diff --git a/kernel/bpf/arena.c b/kernel/bpf/arena.c
index 872dc0e41c65..242c931d3740 100644
--- a/kernel/bpf/arena.c
+++ b/kernel/bpf/arena.c
@@ -503,8 +503,7 @@ static void zap_pages(struct bpf_arena *arena, long uaddr, long page_cnt)
struct vma_list *vml;
list_for_each_entry(vml, &arena->vma_list, head)
- zap_page_range_single(vml->vma, uaddr,
- PAGE_SIZE * page_cnt, NULL);
+ zap_page_range_single(vml->vma, uaddr, PAGE_SIZE * page_cnt);
}
static void arena_free_pages(struct bpf_arena *arena, long uaddr, long page_cnt)
diff --git a/kernel/events/core.c b/kernel/events/core.c
index 8cca80094624..1dfb33c39c2f 100644
--- a/kernel/events/core.c
+++ b/kernel/events/core.c
@@ -6926,7 +6926,7 @@ static int map_range(struct perf_buffer *rb, struct vm_area_struct *vma)
#ifdef CONFIG_MMU
/* Clear any partial mappings on error. */
if (err)
- zap_page_range_single(vma, vma->vm_start, nr_pages * PAGE_SIZE, NULL);
+ zap_page_range_single(vma, vma->vm_start, nr_pages * PAGE_SIZE);
#endif
return err;
diff --git a/mm/memory.c b/mm/memory.c
index da360a6eb8a4..4f8dcdcd20f3 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -2155,17 +2155,16 @@ void zap_page_range_single_batched(struct mmu_gather *tlb,
* @vma: vm_area_struct holding the applicable pages
* @address: starting address of pages to zap
* @size: number of bytes to zap
- * @details: details of shared cache invalidation
*
* The range must fit into one VMA.
*/
void zap_page_range_single(struct vm_area_struct *vma, unsigned long address,
- unsigned long size, struct zap_details *details)
+ unsigned long size)
{
struct mmu_gather tlb;
tlb_gather_mmu(&tlb, vma->vm_mm);
- zap_page_range_single_batched(&tlb, vma, address, size, details);
+ zap_page_range_single_batched(&tlb, vma, address, size, NULL);
tlb_finish_mmu(&tlb);
}
@@ -2187,7 +2186,7 @@ void zap_vma_ptes(struct vm_area_struct *vma, unsigned long address,
!(vma->vm_flags & VM_PFNMAP))
return;
- zap_page_range_single(vma, address, size, NULL);
+ zap_page_range_single(vma, address, size);
}
EXPORT_SYMBOL_GPL(zap_vma_ptes);
@@ -2963,7 +2962,7 @@ static int remap_pfn_range_notrack(struct vm_area_struct *vma, unsigned long add
* maintain page reference counts, and callers may free
* pages due to the error. So zap it early.
*/
- zap_page_range_single(vma, addr, size, NULL);
+ zap_page_range_single(vma, addr, size);
return error;
}
@@ -4187,7 +4186,11 @@ static void unmap_mapping_range_vma(struct vm_area_struct *vma,
unsigned long start_addr, unsigned long end_addr,
struct zap_details *details)
{
- zap_page_range_single(vma, start_addr, end_addr - start_addr, details);
+ struct mmu_gather tlb;
+
+ tlb_gather_mmu(&tlb, vma->vm_mm);
+ zap_page_range_single_batched(&tlb, vma, address, size, details);
+ tlb_finish_mmu(&tlb);
}
static inline void unmap_mapping_range_tree(struct rb_root_cached *root,
diff --git a/net/ipv4/tcp.c b/net/ipv4/tcp.c
index d5319ebe2452..9e92c71389f3 100644
--- a/net/ipv4/tcp.c
+++ b/net/ipv4/tcp.c
@@ -2052,7 +2052,7 @@ static int tcp_zerocopy_vm_insert_batch_error(struct vm_area_struct *vma,
maybe_zap_len = total_bytes_to_map - /* All bytes to map */
*length + /* Mapped or pending */
(pages_remaining * PAGE_SIZE); /* Failed map. */
- zap_page_range_single(vma, *address, maybe_zap_len, NULL);
+ zap_page_range_single(vma, *address, maybe_zap_len);
err = 0;
}
@@ -2217,8 +2217,7 @@ static int tcp_zerocopy_receive(struct sock *sk,
total_bytes_to_map = avail_len & ~(PAGE_SIZE - 1);
if (total_bytes_to_map) {
if (!(zc->flags & TCP_RECEIVE_ZEROCOPY_FLAG_TLB_CLEAN_HINT))
- zap_page_range_single(vma, address, total_bytes_to_map,
- NULL);
+ zap_page_range_single(vma, address, total_bytes_to_map);
zc->length = total_bytes_to_map;
zc->recv_skip_hint = 0;
} else {
diff --git a/rust/kernel/mm/virt.rs b/rust/kernel/mm/virt.rs
index da21d65ccd20..b8e59e4420f3 100644
--- a/rust/kernel/mm/virt.rs
+++ b/rust/kernel/mm/virt.rs
@@ -124,7 +124,7 @@ pub fn zap_page_range_single(&self, address: usize, size: usize) {
// sufficient for this method call. This method has no requirements on the vma flags. The
// address range is checked to be within the vma.
unsafe {
- bindings::zap_page_range_single(self.as_ptr(), address, size, core::ptr::null_mut())
+ bindings::zap_page_range_single(self.as_ptr(), address, size)
};
}
--
2.43.0
--
Cheers,
David
On Thu, Feb 05, 2026 at 12:43:03PM +0100, David Hildenbrand (arm) wrote:
> On 2/5/26 12:29, Lorenzo Stoakes wrote:
> > On Thu, Feb 05, 2026 at 10:51:28AM +0000, Alice Ryhl wrote:
> > > bool list_lru_del_obj(struct list_lru *lru, struct list_head *item)
> > > {
> > > diff --git a/mm/memory.c b/mm/memory.c
> > > index da360a6eb8a48e29293430d0c577fb4b6ec58099..64083ace239a2caf58e1645dd5d91a41d61492c4 100644
> > > --- a/mm/memory.c
> > > +++ b/mm/memory.c
> > > @@ -2168,6 +2168,7 @@ void zap_page_range_single(struct vm_area_struct *vma, unsigned long address,
> > > zap_page_range_single_batched(&tlb, vma, address, size, details);
> > > tlb_finish_mmu(&tlb);
> > > }
> > > +EXPORT_SYMBOL(zap_page_range_single);
> >
> > Sorry but I don't want this exported at all.
> >
> > This is an internal implementation detail which allows fine-grained control of
> > behaviour via struct zap_details (which binder doesn't use, of course :)
>
> I don't expect anybody to set zap_details, but yeah, it could be abused.
> It could be abused right now from anywhere else in the kernel
> where we don't build as a module :)
>
> Apparently we export a similar function in rust where we just removed the last parameter.
To clarify, said Rust function gets inlined into Rust Binder, so Rust
Binder calls the zap_page_range_single() symbol directly.
Alice
On Thu, Feb 05, 2026 at 11:58:00AM +0000, Alice Ryhl wrote:
> On Thu, Feb 05, 2026 at 12:43:03PM +0100, David Hildenbrand (arm) wrote:
> > On 2/5/26 12:29, Lorenzo Stoakes wrote:
> > > On Thu, Feb 05, 2026 at 10:51:28AM +0000, Alice Ryhl wrote:
> > > > bool list_lru_del_obj(struct list_lru *lru, struct list_head *item)
> > > > {
> > > > diff --git a/mm/memory.c b/mm/memory.c
> > > > index da360a6eb8a48e29293430d0c577fb4b6ec58099..64083ace239a2caf58e1645dd5d91a41d61492c4 100644
> > > > --- a/mm/memory.c
> > > > +++ b/mm/memory.c
> > > > @@ -2168,6 +2168,7 @@ void zap_page_range_single(struct vm_area_struct *vma, unsigned long address,
> > > > zap_page_range_single_batched(&tlb, vma, address, size, details);
> > > > tlb_finish_mmu(&tlb);
> > > > }
> > > > +EXPORT_SYMBOL(zap_page_range_single);
> > >
> > > Sorry but I don't want this exported at all.
> > >
> > > This is an internal implementation detail which allows fine-grained control of
> > > behaviour via struct zap_details (which binder doesn't use, of course :)
> >
> > I don't expect anybody to set zap_details, but yeah, it could be abused.
> > It could be abused right now from anywhere else in the kernel
> > where we don't build as a module :)
> >
> > Apparently we export a similar function in rust where we just removed the last parameter.
>
> To clarify, said Rust function gets inlined into Rust Binder, so Rust
> Binder calls the zap_page_range_single() symbol directly.
Presumably only for things compiled into the kernel right?
On Thu, Feb 05, 2026 at 12:10:38PM +0000, Lorenzo Stoakes wrote:
> On Thu, Feb 05, 2026 at 11:58:00AM +0000, Alice Ryhl wrote:
> > On Thu, Feb 05, 2026 at 12:43:03PM +0100, David Hildenbrand (arm) wrote:
> > > On 2/5/26 12:29, Lorenzo Stoakes wrote:
> > > > On Thu, Feb 05, 2026 at 10:51:28AM +0000, Alice Ryhl wrote:
> > > > > bool list_lru_del_obj(struct list_lru *lru, struct list_head *item)
> > > > > {
> > > > > diff --git a/mm/memory.c b/mm/memory.c
> > > > > index da360a6eb8a48e29293430d0c577fb4b6ec58099..64083ace239a2caf58e1645dd5d91a41d61492c4 100644
> > > > > --- a/mm/memory.c
> > > > > +++ b/mm/memory.c
> > > > > @@ -2168,6 +2168,7 @@ void zap_page_range_single(struct vm_area_struct *vma, unsigned long address,
> > > > > zap_page_range_single_batched(&tlb, vma, address, size, details);
> > > > > tlb_finish_mmu(&tlb);
> > > > > }
> > > > > +EXPORT_SYMBOL(zap_page_range_single);
> > > >
> > > > Sorry but I don't want this exported at all.
> > > >
> > > > This is an internal implementation detail which allows fine-grained control of
> > > > behaviour via struct zap_details (which binder doesn't use, of course :)
> > >
> > > I don't expect anybody to set zap_details, but yeah, it could be abused.
> > > It could be abused right now from anywhere else in the kernel
> > > where we don't build as a module :)
> > >
> > > Apparently we export a similar function in rust where we just removed the last parameter.
> >
> > To clarify, said Rust function gets inlined into Rust Binder, so Rust
> > Binder calls the zap_page_range_single() symbol directly.
>
> Presumably only for things compiled into the kernel right?
No, building Rust Binder with =m triggers this error for me:
ERROR: modpost: "zap_page_range_single" [drivers/android/binder/rust_binder.ko] undefined!
Alice
On 2/5/26 13:10, Lorenzo Stoakes wrote:
> On Thu, Feb 05, 2026 at 11:58:00AM +0000, Alice Ryhl wrote:
>> On Thu, Feb 05, 2026 at 12:43:03PM +0100, David Hildenbrand (arm) wrote:
>>>
>>> I don't expect anybody to set zap_details, but yeah, it could be abused.
>>> It could be abused right now from anywhere else in the kernel
>>> where we don't build as a module :)
>>>
>>> Apparently we export a similar function in rust where we just removed the last parameter.
>>
>> To clarify, said Rust function gets inlined into Rust Binder, so Rust
>> Binder calls the zap_page_range_single() symbol directly.
>
> Presumably only for things compiled into the kernel right?

Could Rust just use zap_vma_ptes() or does it want to zap things in VMAs
that are not VM_PFNMAP?

--
Cheers,

David
On Thu, Feb 05, 2026 at 01:13:57PM +0100, David Hildenbrand (arm) wrote:
> On 2/5/26 13:10, Lorenzo Stoakes wrote:
> > On Thu, Feb 05, 2026 at 11:58:00AM +0000, Alice Ryhl wrote:
> > > On Thu, Feb 05, 2026 at 12:43:03PM +0100, David Hildenbrand (arm) wrote:
> > > >
> > > > I don't expect anybody to set zap_details, but yeah, it could be abused.
> > > > It could be abused right now from anywhere else in the kernel
> > > > where we don't build as a module :)
> > > >
> > > > Apparently we export a similar function in rust where we just removed the last parameter.
> > >
> > > To clarify, said Rust function gets inlined into Rust Binder, so Rust
> > > Binder calls the zap_page_range_single() symbol directly.
> >
> > Presumably only for things compiled into the kernel right?
>
> Could Rust just use zap_vma_ptes() or does it want to zap things in VMAs
> that are not VM_PFNMAP?

The VMA is VM_MIXEDMAP, not VM_PFNMAP.

Alice
On Thu, Feb 05, 2026 at 12:19:22PM +0000, Alice Ryhl wrote:
> On Thu, Feb 05, 2026 at 01:13:57PM +0100, David Hildenbrand (arm) wrote:
> > On 2/5/26 13:10, Lorenzo Stoakes wrote:
> > > On Thu, Feb 05, 2026 at 11:58:00AM +0000, Alice Ryhl wrote:
> > > > On Thu, Feb 05, 2026 at 12:43:03PM +0100, David Hildenbrand (arm) wrote:
> > > > >
> > > > > I don't expect anybody to set zap_details, but yeah, it could be abused.
> > > > > It could be abused right now from anywhere else in the kernel
> > > > > where we don't build as a module :)
> > > > >
> > > > > Apparently we export a similar function in rust where we just removed the last parameter.
> > > >
> > > > To clarify, said Rust function gets inlined into Rust Binder, so Rust
> > > > Binder calls the zap_page_range_single() symbol directly.
> > >
> > > Presumably only for things compiled into the kernel right?
> >
> > Could Rust just use zap_vma_ptes() or does it want to zap things in VMAs
> > that are not VM_PFNMAP?
>
> The VMA is VM_MIXEDMAP, not VM_PFNMAP.

OK this smells like David's cleanup could extend it to allow for
VM_MIXEDMAP :) then we solve the export problem.

The two of these cause endless issues, it's really a mess...

>
> Alice

Cheers, Lorenzo
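For reference, a rough sketch of that suggestion against the zap_vma_ptes()
visible in David's patch context (illustrative and untested; only the flag
check is widened to accept VM_MIXEDMAP as well):

void zap_vma_ptes(struct vm_area_struct *vma, unsigned long address,
		unsigned long size)
{
	/* Sketch only: also allow VM_MIXEDMAP mappings such as binder's. */
	if (address < vma->vm_start || address + size > vma->vm_end ||
	    !(vma->vm_flags & (VM_PFNMAP | VM_MIXEDMAP)))
		return;

	zap_page_range_single(vma, address, size);
}
EXPORT_SYMBOL_GPL(zap_vma_ptes);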
On 2/5/26 13:24, Lorenzo Stoakes wrote:
> On Thu, Feb 05, 2026 at 12:19:22PM +0000, Alice Ryhl wrote:
>> On Thu, Feb 05, 2026 at 01:13:57PM +0100, David Hildenbrand (arm) wrote:
>>>
>>> Could Rust just use zap_vma_ptes() or does it want to zap things in VMAs
>>> that are not VM_PFNMAP?
>>
>> The VMA is VM_MIXEDMAP, not VM_PFNMAP.
>
> OK this smells like David's cleanup could extend it to allow for
> VM_MIXEDMAP :) then we solve the export problem.

My thinking ... and while at it, gonna remove these functions to make them
a bit more ... consistent in naming.

--
Cheers,

David
+cc Christoph for his input on exports here.
On Thu, Feb 05, 2026 at 12:43:03PM +0100, David Hildenbrand (arm) wrote:
> On 2/5/26 12:29, Lorenzo Stoakes wrote:
> > On Thu, Feb 05, 2026 at 10:51:28AM +0000, Alice Ryhl wrote:
> > > These are the functions needed by Binder's shrinker.
> > >
> > > Binder uses zap_page_range_single in the shrinker path to remove an
> > > unused page from the mmap'd region. Note that pages are only removed
> > > from the mmap'd region lazily when shrinker asks for it.
> > >
> > > Binder uses list_lru_add/del to keep track of the shrinker lru list, and
> > > it can't use _obj because the list head is not stored inline in the page
> > > actually being lru freed, so page_to_nid(virt_to_page(item)) on the list
> > > head computes the nid of the wrong page.
> > >
> > > Signed-off-by: Alice Ryhl <aliceryhl@google.com>
> > > ---
> > > mm/list_lru.c | 2 ++
> > > mm/memory.c | 1 +
> > > 2 files changed, 3 insertions(+)
> > >
> > > diff --git a/mm/list_lru.c b/mm/list_lru.c
> > > index ec48b5dadf519a5296ac14cda035c067f9e448f8..bf95d73c9815548a19db6345f856cee9baad22e3 100644
> > > --- a/mm/list_lru.c
> > > +++ b/mm/list_lru.c
> > > @@ -179,6 +179,7 @@ bool list_lru_add(struct list_lru *lru, struct list_head *item, int nid,
> > > unlock_list_lru(l, false);
> > > return false;
> > > }
> > > +EXPORT_SYMBOL_GPL(list_lru_add);
> > >
> > > bool list_lru_add_obj(struct list_lru *lru, struct list_head *item)
> > > {
> > > @@ -216,6 +217,7 @@ bool list_lru_del(struct list_lru *lru, struct list_head *item, int nid,
> > > unlock_list_lru(l, false);
> > > return false;
> > > }
> > > +EXPORT_SYMBOL_GPL(list_lru_del);
> >
> > Same point as before about exporting symbols, but given the _obj variants are
> > exported already this one is more valid.
> >
> > >
> > > bool list_lru_del_obj(struct list_lru *lru, struct list_head *item)
> > > {
> > > diff --git a/mm/memory.c b/mm/memory.c
> > > index da360a6eb8a48e29293430d0c577fb4b6ec58099..64083ace239a2caf58e1645dd5d91a41d61492c4 100644
> > > --- a/mm/memory.c
> > > +++ b/mm/memory.c
> > > @@ -2168,6 +2168,7 @@ void zap_page_range_single(struct vm_area_struct *vma, unsigned long address,
> > > zap_page_range_single_batched(&tlb, vma, address, size, details);
> > > tlb_finish_mmu(&tlb);
> > > }
> > > +EXPORT_SYMBOL(zap_page_range_single);
> >
> > Sorry but I don't want this exported at all.
> >
> > This is an internal implementation detail which allows fine-grained control of
> > behaviour via struct zap_details (which binder doesn't use, of course :)
>
> I don't expect anybody to set zap_details, but yeah, it could be abused.
> It could be abused right now from anywhere else in the kernel
> where we don't build as a module :)
>
> Apparently we export a similar function in rust where we just removed the last parameter.
What??
Alice - can you confirm rust isn't exporting stuff that isn't explicitly marked
EXPORT_SYMBOL*() for use by other rust modules?
It's important we keep this in sync, otherwise rust is overriding kernel policy.
>
> I think zap_page_range_single() is only called with non-NULL from mm/memory.c.
>
> So the following makes likely sense even outside of the context of this series:
>
Yeah this looks good so feel free to add an R-b tag from me when you send it
BUT...
I'm still _very_ uncomfortable with exporting this just for binder which seems
to be doing effectively mm tasks itself in a way that makes me think it needs a
rework to not be doing that and to update core mm to add functionality if it's
needed.
In any case, if we _do_ export this I think I'm going to insist on this being
EXPORT_SYMBOL_FOR_MODULES() _only_ for the binder in-tree module.
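Roughly like the below, assuming the EXPORT_SYMBOL_GPL_FOR_MODULES() flavour of
that macro and a placeholder module-name list (the exact spelling would depend
on what the binder modules end up being called):

/* Illustrative only; the module name is a placeholder. */
EXPORT_SYMBOL_GPL_FOR_MODULES(zap_page_range_single, "rust_binder");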
Thanks, Lorenzo
> From d2a2d20994456b9a66008b7fef12e379e76fc9f8 Mon Sep 17 00:00:00 2001
> From: "David Hildenbrand (arm)" <david@kernel.org>
> Date: Thu, 5 Feb 2026 12:42:09 +0100
> Subject: [PATCH] tmp
>
> Signed-off-by: David Hildenbrand (arm) <david@kernel.org>
> ---
> arch/s390/mm/gmap_helpers.c | 2 +-
> drivers/android/binder_alloc.c | 2 +-
> include/linux/mm.h | 4 ++--
> kernel/bpf/arena.c | 3 +--
> kernel/events/core.c | 2 +-
> mm/memory.c | 15 +++++++++------
> net/ipv4/tcp.c | 5 ++---
> rust/kernel/mm/virt.rs | 2 +-
> 8 files changed, 18 insertions(+), 17 deletions(-)
>
> diff --git a/arch/s390/mm/gmap_helpers.c b/arch/s390/mm/gmap_helpers.c
> index d41b19925a5a..859f5570c3dc 100644
> --- a/arch/s390/mm/gmap_helpers.c
> +++ b/arch/s390/mm/gmap_helpers.c
> @@ -102,7 +102,7 @@ void gmap_helper_discard(struct mm_struct *mm, unsigned long vmaddr, unsigned lo
> if (!vma)
> return;
> if (!is_vm_hugetlb_page(vma))
> - zap_page_range_single(vma, vmaddr, min(end, vma->vm_end) - vmaddr, NULL);
> + zap_page_range_single(vma, vmaddr, min(end, vma->vm_end) - vmaddr);
> vmaddr = vma->vm_end;
> }
> }
> diff --git a/drivers/android/binder_alloc.c b/drivers/android/binder_alloc.c
> index 979c96b74cad..b0201bc6893a 100644
> --- a/drivers/android/binder_alloc.c
> +++ b/drivers/android/binder_alloc.c
> @@ -1186,7 +1186,7 @@ enum lru_status binder_alloc_free_page(struct list_head *item,
> if (vma) {
> trace_binder_unmap_user_start(alloc, index);
> - zap_page_range_single(vma, page_addr, PAGE_SIZE, NULL);
> + zap_page_range_single(vma, page_addr, PAGE_SIZE);
> trace_binder_unmap_user_end(alloc, index);
> }
> diff --git a/include/linux/mm.h b/include/linux/mm.h
> index f0d5be9dc736..b7cc6ef49917 100644
> --- a/include/linux/mm.h
> +++ b/include/linux/mm.h
> @@ -2621,11 +2621,11 @@ struct page *vm_normal_page_pud(struct vm_area_struct *vma, unsigned long addr,
> void zap_vma_ptes(struct vm_area_struct *vma, unsigned long address,
> unsigned long size);
> void zap_page_range_single(struct vm_area_struct *vma, unsigned long address,
> - unsigned long size, struct zap_details *details);
> + unsigned long size);
> static inline void zap_vma_pages(struct vm_area_struct *vma)
> {
> zap_page_range_single(vma, vma->vm_start,
> - vma->vm_end - vma->vm_start, NULL);
> + vma->vm_end - vma->vm_start);
> }
> void unmap_vmas(struct mmu_gather *tlb, struct ma_state *mas,
> struct vm_area_struct *start_vma, unsigned long start,
> diff --git a/kernel/bpf/arena.c b/kernel/bpf/arena.c
> index 872dc0e41c65..242c931d3740 100644
> --- a/kernel/bpf/arena.c
> +++ b/kernel/bpf/arena.c
> @@ -503,8 +503,7 @@ static void zap_pages(struct bpf_arena *arena, long uaddr, long page_cnt)
> struct vma_list *vml;
> list_for_each_entry(vml, &arena->vma_list, head)
> - zap_page_range_single(vml->vma, uaddr,
> - PAGE_SIZE * page_cnt, NULL);
> + zap_page_range_single(vml->vma, uaddr, PAGE_SIZE * page_cnt);
> }
> static void arena_free_pages(struct bpf_arena *arena, long uaddr, long page_cnt)
> diff --git a/kernel/events/core.c b/kernel/events/core.c
> index 8cca80094624..1dfb33c39c2f 100644
> --- a/kernel/events/core.c
> +++ b/kernel/events/core.c
> @@ -6926,7 +6926,7 @@ static int map_range(struct perf_buffer *rb, struct vm_area_struct *vma)
> #ifdef CONFIG_MMU
> /* Clear any partial mappings on error. */
> if (err)
> - zap_page_range_single(vma, vma->vm_start, nr_pages * PAGE_SIZE, NULL);
> + zap_page_range_single(vma, vma->vm_start, nr_pages * PAGE_SIZE);
> #endif
> return err;
> diff --git a/mm/memory.c b/mm/memory.c
> index da360a6eb8a4..4f8dcdcd20f3 100644
> --- a/mm/memory.c
> +++ b/mm/memory.c
> @@ -2155,17 +2155,16 @@ void zap_page_range_single_batched(struct mmu_gather *tlb,
> * @vma: vm_area_struct holding the applicable pages
> * @address: starting address of pages to zap
> * @size: number of bytes to zap
> - * @details: details of shared cache invalidation
> *
> * The range must fit into one VMA.
> */
> void zap_page_range_single(struct vm_area_struct *vma, unsigned long address,
> - unsigned long size, struct zap_details *details)
> + unsigned long size)
> {
> struct mmu_gather tlb;
> tlb_gather_mmu(&tlb, vma->vm_mm);
> - zap_page_range_single_batched(&tlb, vma, address, size, details);
> + zap_page_range_single_batched(&tlb, vma, address, size, NULL);
> tlb_finish_mmu(&tlb);
> }
> @@ -2187,7 +2186,7 @@ void zap_vma_ptes(struct vm_area_struct *vma, unsigned long address,
> !(vma->vm_flags & VM_PFNMAP))
> return;
> - zap_page_range_single(vma, address, size, NULL);
> + zap_page_range_single(vma, address, size);
> }
> EXPORT_SYMBOL_GPL(zap_vma_ptes);
> @@ -2963,7 +2962,7 @@ static int remap_pfn_range_notrack(struct vm_area_struct *vma, unsigned long add
> * maintain page reference counts, and callers may free
> * pages due to the error. So zap it early.
> */
> - zap_page_range_single(vma, addr, size, NULL);
> + zap_page_range_single(vma, addr, size);
> return error;
> }
> @@ -4187,7 +4186,11 @@ static void unmap_mapping_range_vma(struct vm_area_struct *vma,
> unsigned long start_addr, unsigned long end_addr,
> struct zap_details *details)
> {
> - zap_page_range_single(vma, start_addr, end_addr - start_addr, details);
> + struct mmu_gather tlb;
> +
> + tlb_gather_mmu(&tlb, vma->vm_mm);
> + zap_page_range_single_batched(&tlb, vma, address, size, details);
> + tlb_finish_mmu(&tlb);
> }
> static inline void unmap_mapping_range_tree(struct rb_root_cached *root,
> diff --git a/net/ipv4/tcp.c b/net/ipv4/tcp.c
> index d5319ebe2452..9e92c71389f3 100644
> --- a/net/ipv4/tcp.c
> +++ b/net/ipv4/tcp.c
> @@ -2052,7 +2052,7 @@ static int tcp_zerocopy_vm_insert_batch_error(struct vm_area_struct *vma,
> maybe_zap_len = total_bytes_to_map - /* All bytes to map */
> *length + /* Mapped or pending */
> (pages_remaining * PAGE_SIZE); /* Failed map. */
> - zap_page_range_single(vma, *address, maybe_zap_len, NULL);
> + zap_page_range_single(vma, *address, maybe_zap_len);
> err = 0;
> }
> @@ -2217,8 +2217,7 @@ static int tcp_zerocopy_receive(struct sock *sk,
> total_bytes_to_map = avail_len & ~(PAGE_SIZE - 1);
> if (total_bytes_to_map) {
> if (!(zc->flags & TCP_RECEIVE_ZEROCOPY_FLAG_TLB_CLEAN_HINT))
> - zap_page_range_single(vma, address, total_bytes_to_map,
> - NULL);
> + zap_page_range_single(vma, address, total_bytes_to_map);
> zc->length = total_bytes_to_map;
> zc->recv_skip_hint = 0;
> } else {
> diff --git a/rust/kernel/mm/virt.rs b/rust/kernel/mm/virt.rs
> index da21d65ccd20..b8e59e4420f3 100644
> --- a/rust/kernel/mm/virt.rs
> +++ b/rust/kernel/mm/virt.rs
> @@ -124,7 +124,7 @@ pub fn zap_page_range_single(&self, address: usize, size: usize) {
> // sufficient for this method call. This method has no requirements on the vma flags. The
> // address range is checked to be within the vma.
> unsafe {
> - bindings::zap_page_range_single(self.as_ptr(), address, size, core::ptr::null_mut())
> + bindings::zap_page_range_single(self.as_ptr(), address, size)
> };
> }
> --
> 2.43.0
>
>
> --
> Cheers,
>
> David
On Thu, Feb 5, 2026 at 12:58 PM Lorenzo Stoakes <lorenzo.stoakes@oracle.com> wrote:
>
> What??
>
> Alice - can you confirm rust isn't exporting stuff that isn't explicitly marked
> EXPORT_SYMBOL*() for use by other rust modules?
>
> It's important we keep this in sync, otherwise rust is overriding kernel policy.

Currently, Rust GPL-exports every mangled symbol from the `kernel`
crate. To call something you would need to be a Rust caller (not C --
that is not supported at all, even if technically you could hack
something up) and the Rust API would then need to give you access to
it (i.e. you need to be able to pass the Rust language rules, e.g.
being public etc.).

In this case if we are talking about the `VmaRef` type, someone that
can get a reference to a value of that type could then call the
`zap_page_range_single` method. That in turn would try to call the C
one, but that one is not exported, right? So it should be fine.

In the future, for Rust, we may specify whether a particular crate
exports or not (and perhaps even allow to export non-GPL, but
originally it was decided to only export GPL stuff to be on the safe
side; and perhaps in certain namespaces etc.).

Cheers,
Miguel
On Thu, Feb 05, 2026 at 01:24:09PM +0100, Miguel Ojeda wrote:
> On Thu, Feb 5, 2026 at 12:58 PM Lorenzo Stoakes
> <lorenzo.stoakes@oracle.com> wrote:
> >
> > What??
> >
> > Alice - can you confirm rust isn't exporting stuff that isn't explicitly marked
> > EXPORT_SYMBOL*() for use by other rust modules?
> >
> > It's important we keep this in sync, otherwise rust is overriding kernel policy.
>
> Currently, Rust GPL-exports every mangled symbol from the `kernel`
> crate. To call something you would need to be a Rust caller (not C --
> that is not supported at all, even if technically you could hack
> something up) and the Rust API would then need to give you access to
> it (i.e. you need to be able to pass the Rust language rules, e.g.
> being public etc.).
>
> In this case if we are talking about the `VmaRef` type, someone that
> can get a reference to a value of that type could then call the
> `zap_page_range_single` method. That in turns would try to call the C
> one, but that one is not exported, right? So it should be fine.
>

OK cool. Thanks for the explanation.

> In the future, for Rust, we may specify whether a particular crate
> exports or not (and perhaps even allow to export non-GPL, but
> originally it was decided to only export GPL stuff to be on the safe
> side; and perhaps in certain namespaces etc.).

We'd definitely need to keep this in sync with C exports, maybe we can find a
nice way of doing/checking this.

Generally we're _very_ conservative about exports, and actually it'd be nice
if maybe rust only provided GPL ones. The GPL is a good thing :)

But I guess we can figure that out later.

>
> Cheers,
> Miguel

Cheers, Lorenzo
On 2/5/26 12:57, Lorenzo Stoakes wrote:
> +cc Christoph for his input on exports here.
>
> On Thu, Feb 05, 2026 at 12:43:03PM +0100, David Hildenbrand (arm) wrote:
>> On 2/5/26 12:29, Lorenzo Stoakes wrote:
>>>
>>> Same point as before about exporting symbols, but given the _obj variants are
>>> exported already this one is more valid.
>>>
>>>
>>> Sorry but I don't want this exported at all.
>>>
>>> This is an internal implementation detail which allows fine-grained control of
>>> behaviour via struct zap_details (which binder doesn't use, of course :)
>>
>> I don't expect anybody to set zap_details, but yeah, it could be abused.
>> It could be abused right now from anywhere else in the kernel
>> where we don't build as a module :)
>>
>> Apparently we export a similar function in rust where we just removed the last parameter.
>
> What??
>
> Alice - can you confirm rust isn't exporting stuff that isn't explicitly marked
> EXPORT_SYMBOL*() for use by other rust modules?
>
> It's important we keep this in sync, otherwise rust is overriding kernel policy.
>
>>
>> I think zap_page_range_single() is only called with non-NULL from mm/memory.c.
>>
>> So the following makes likely sense even outside of the context of this series:
>>
>
> Yeah this looks good so feel free to add a R-b from me tag when you send it
> BUT...
>
> I'm still _very_ uncomfortable with exporting this just for binder which seems
> to be doing effectively mm tasks itself in a way that makes me think it needs a
> rework to not be doing that and to update core mm to add functionality if it's
> needed.
>
> In any case, if we _do_ export this I think I'm going to insist on this being
> EXPORT_SYMBOL_FOR_MODULES() _only_ for the binder in-tree module.

Works for me.

Staring at it again, I think I landed in cleanup land.

zap_vma_ptes() is exported and does the same thing as
zap_page_range_single(), just with some additional safety checks.

Fun.

Let me cleanup. Good finger exercise after one month of almost-not coding :)

--
Cheers,

David
On Thu, Feb 05, 2026 at 01:03:35PM +0100, David Hildenbrand (arm) wrote:
> On 2/5/26 12:57, Lorenzo Stoakes wrote:
> > +cc Christoph for his input on exports here.
> >
> > On Thu, Feb 05, 2026 at 12:43:03PM +0100, David Hildenbrand (arm) wrote:
> > > On 2/5/26 12:29, Lorenzo Stoakes wrote:
> > > >
> > > > Same point as before about exporting symbols, but given the _obj variants are
> > > > exported already this one is more valid.
> > > >
> > > >
> > > > Sorry but I don't want this exported at all.
> > > >
> > > > This is an internal implementation detail which allows fine-grained control of
> > > > behaviour via struct zap_details (which binder doesn't use, of course :)
> > >
> > > I don't expect anybody to set zap_details, but yeah, it could be abused.
> > > It could be abused right now from anywhere else in the kernel
> > > where we don't build as a module :)
> > >
> > > Apparently we export a similar function in rust where we just removed the last parameter.
> >
> > What??
> >
> > Alice - can you confirm rust isn't exporting stuff that isn't explicitly marked
> > EXPORT_SYMBOL*() for use by other rust modules?
> >
> > It's important we keep this in sync, otherwise rust is overriding kernel policy.
> > >
> > > I think zap_page_range_single() is only called with non-NULL from mm/memory.c.
> > >
> > > So the following makes likely sense even outside of the context of this series:
> > >
> >
> > Yeah this looks good so feel free to add a R-b from me tag when you send it
> > BUT...
> >
> > I'm still _very_ uncomfortable with exporting this just for binder which seems
> > to be doing effectively mm tasks itself in a way that makes me think it needs a
> > rework to not be doing that and to update core mm to add functionality if it's
> > needed.
> >
> > In any case, if we _do_ export this I think I'm going to insist on this being
> > EXPORT_SYMBOL_FOR_MODULES() _only_ for the binder in-tree module.
>
> Works for me.

:)

>
> Staring at it again, I think I landed in cleanup land.
>
> zap_vma_ptes() is exported and does the same thing as
> zap_page_range_single(), just with some additional safety checks.

Yeah saw that, except it insists only on VM_PFN VMAs which makes me question
our making this more generally available to OOT drivers.

>
> Fun.
>
> Let me cleanup. Good finger exercise after one month of almost-not coding :)

:)

I am less interested in cleanups at this stage at least for a while so feel
free to fixup glaringly horrible things so I can vicariously enjoy it at
least...

>
> --
> Cheers,
>
> David

Cheers, Lorenzo
On 2/5/26 12:43, David Hildenbrand (arm) wrote:
> On 2/5/26 12:29, Lorenzo Stoakes wrote:
>> On Thu, Feb 05, 2026 at 10:51:28AM +0000, Alice Ryhl wrote:
>>> These are the functions needed by Binder's shrinker.
>>>
>>> Binder uses zap_page_range_single in the shrinker path to remove an
>>> unused page from the mmap'd region. Note that pages are only removed
>>> from the mmap'd region lazily when shrinker asks for it.
>>>
>>> Binder uses list_lru_add/del to keep track of the shrinker lru list, and
>>> it can't use _obj because the list head is not stored inline in the page
>>> actually being lru freed, so page_to_nid(virt_to_page(item)) on the list
>>> head computes the nid of the wrong page.
>>>
>>> Signed-off-by: Alice Ryhl <aliceryhl@google.com>
>>> ---
>>> mm/list_lru.c | 2 ++
>>> mm/memory.c | 1 +
>>> 2 files changed, 3 insertions(+)
>>>
>>> diff --git a/mm/list_lru.c b/mm/list_lru.c
>>> index
>>> ec48b5dadf519a5296ac14cda035c067f9e448f8..bf95d73c9815548a19db6345f856cee9baad22e3 100644
>>> --- a/mm/list_lru.c
>>> +++ b/mm/list_lru.c
>>> @@ -179,6 +179,7 @@ bool list_lru_add(struct list_lru *lru, struct
>>> list_head *item, int nid,
>>> unlock_list_lru(l, false);
>>> return false;
>>> }
>>> +EXPORT_SYMBOL_GPL(list_lru_add);
>>>
>>> bool list_lru_add_obj(struct list_lru *lru, struct list_head *item)
>>> {
>>> @@ -216,6 +217,7 @@ bool list_lru_del(struct list_lru *lru, struct
>>> list_head *item, int nid,
>>> unlock_list_lru(l, false);
>>> return false;
>>> }
>>> +EXPORT_SYMBOL_GPL(list_lru_del);
>>
>> Same point as before about exporting symbols, but given the _obj
>> variants are
>> exported already this one is more valid.
>>
>>>
>>> bool list_lru_del_obj(struct list_lru *lru, struct list_head *item)
>>> {
>>> diff --git a/mm/memory.c b/mm/memory.c
>>> index
>>> da360a6eb8a48e29293430d0c577fb4b6ec58099..64083ace239a2caf58e1645dd5d91a41d61492c4 100644
>>> --- a/mm/memory.c
>>> +++ b/mm/memory.c
>>> @@ -2168,6 +2168,7 @@ void zap_page_range_single(struct
>>> vm_area_struct *vma, unsigned long address,
>>> zap_page_range_single_batched(&tlb, vma, address, size, details);
>>> tlb_finish_mmu(&tlb);
>>> }
>>> +EXPORT_SYMBOL(zap_page_range_single);
>>
>> Sorry but I don't want this exported at all.
>>
>> This is an internal implementation detail which allows fine-grained
>> control of
>> behaviour via struct zap_details (which binder doesn't use, of course :)
>
> I don't expect anybody to set zap_details, but yeah, it could be abused.
> It could be abused right now from anywhere else in the kernel
> where we don't build as a module :)
>
> Apparently we export a similar function in rust where we just removed
> the last parameter.
>
> I think zap_page_range_single() is only called with non-NULL from mm/
> memory.c.
>
> So the following makes likely sense even outside of the context of this
> series:
The following should compile :)
From b1c35afb1b819a42f4ec1119564b3b37cceb9968 Mon Sep 17 00:00:00 2001
From: "David Hildenbrand (arm)" <david@kernel.org>
Date: Thu, 5 Feb 2026 12:42:09 +0100
Subject: [PATCH] mm/memory: remove "zap_details" parameter from
zap_page_range_single()
Nobody except memory.c should really set that parameter to non-NULL. So
let's just drop it and make unmap_mapping_range_vma() use
zap_page_range_single_batched() instead.
Signed-off-by: David Hildenbrand (arm) <david@kernel.org>
---
arch/s390/mm/gmap_helpers.c | 2 +-
drivers/android/binder_alloc.c | 2 +-
include/linux/mm.h | 5 ++---
kernel/bpf/arena.c | 3 +--
kernel/events/core.c | 2 +-
mm/madvise.c | 3 +--
mm/memory.c | 16 ++++++++++------
net/ipv4/tcp.c | 5 ++---
rust/kernel/mm/virt.rs | 2 +-
9 files changed, 20 insertions(+), 20 deletions(-)
diff --git a/arch/s390/mm/gmap_helpers.c b/arch/s390/mm/gmap_helpers.c
index d41b19925a5a..859f5570c3dc 100644
--- a/arch/s390/mm/gmap_helpers.c
+++ b/arch/s390/mm/gmap_helpers.c
@@ -102,7 +102,7 @@ void gmap_helper_discard(struct mm_struct *mm, unsigned long vmaddr, unsigned lo
if (!vma)
return;
if (!is_vm_hugetlb_page(vma))
- zap_page_range_single(vma, vmaddr, min(end, vma->vm_end) - vmaddr, NULL);
+ zap_page_range_single(vma, vmaddr, min(end, vma->vm_end) - vmaddr);
vmaddr = vma->vm_end;
}
}
diff --git a/drivers/android/binder_alloc.c b/drivers/android/binder_alloc.c
index 979c96b74cad..b0201bc6893a 100644
--- a/drivers/android/binder_alloc.c
+++ b/drivers/android/binder_alloc.c
@@ -1186,7 +1186,7 @@ enum lru_status binder_alloc_free_page(struct list_head *item,
if (vma) {
trace_binder_unmap_user_start(alloc, index);
- zap_page_range_single(vma, page_addr, PAGE_SIZE, NULL);
+ zap_page_range_single(vma, page_addr, PAGE_SIZE);
trace_binder_unmap_user_end(alloc, index);
}
diff --git a/include/linux/mm.h b/include/linux/mm.h
index f0d5be9dc736..5764991546bb 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -2621,11 +2621,10 @@ struct page *vm_normal_page_pud(struct vm_area_struct *vma, unsigned long addr,
void zap_vma_ptes(struct vm_area_struct *vma, unsigned long address,
unsigned long size);
void zap_page_range_single(struct vm_area_struct *vma, unsigned long address,
- unsigned long size, struct zap_details *details);
+ unsigned long size);
static inline void zap_vma_pages(struct vm_area_struct *vma)
{
- zap_page_range_single(vma, vma->vm_start,
- vma->vm_end - vma->vm_start, NULL);
+ zap_page_range_single(vma, vma->vm_start, vma->vm_end - vma->vm_start);
}
void unmap_vmas(struct mmu_gather *tlb, struct ma_state *mas,
struct vm_area_struct *start_vma, unsigned long start,
diff --git a/kernel/bpf/arena.c b/kernel/bpf/arena.c
index 872dc0e41c65..242c931d3740 100644
--- a/kernel/bpf/arena.c
+++ b/kernel/bpf/arena.c
@@ -503,8 +503,7 @@ static void zap_pages(struct bpf_arena *arena, long uaddr, long page_cnt)
struct vma_list *vml;
list_for_each_entry(vml, &arena->vma_list, head)
- zap_page_range_single(vml->vma, uaddr,
- PAGE_SIZE * page_cnt, NULL);
+ zap_page_range_single(vml->vma, uaddr, PAGE_SIZE * page_cnt);
}
static void arena_free_pages(struct bpf_arena *arena, long uaddr, long page_cnt)
diff --git a/kernel/events/core.c b/kernel/events/core.c
index 8cca80094624..1dfb33c39c2f 100644
--- a/kernel/events/core.c
+++ b/kernel/events/core.c
@@ -6926,7 +6926,7 @@ static int map_range(struct perf_buffer *rb, struct vm_area_struct *vma)
#ifdef CONFIG_MMU
/* Clear any partial mappings on error. */
if (err)
- zap_page_range_single(vma, vma->vm_start, nr_pages * PAGE_SIZE, NULL);
+ zap_page_range_single(vma, vma->vm_start, nr_pages * PAGE_SIZE);
#endif
return err;
diff --git a/mm/madvise.c b/mm/madvise.c
index b617b1be0f53..abcbfd1f0662 100644
--- a/mm/madvise.c
+++ b/mm/madvise.c
@@ -1200,8 +1200,7 @@ static long madvise_guard_install(struct madvise_behavior *madv_behavior)
* OK some of the range have non-guard pages mapped, zap
* them. This leaves existing guard pages in place.
*/
- zap_page_range_single(vma, range->start,
- range->end - range->start, NULL);
+ zap_page_range_single(vma, range->start, range->end - range->start);
}
/*
diff --git a/mm/memory.c b/mm/memory.c
index da360a6eb8a4..82985da5f7e6 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -2155,17 +2155,16 @@ void zap_page_range_single_batched(struct mmu_gather *tlb,
* @vma: vm_area_struct holding the applicable pages
* @address: starting address of pages to zap
* @size: number of bytes to zap
- * @details: details of shared cache invalidation
*
* The range must fit into one VMA.
*/
void zap_page_range_single(struct vm_area_struct *vma, unsigned long address,
- unsigned long size, struct zap_details *details)
+ unsigned long size)
{
struct mmu_gather tlb;
tlb_gather_mmu(&tlb, vma->vm_mm);
- zap_page_range_single_batched(&tlb, vma, address, size, details);
+ zap_page_range_single_batched(&tlb, vma, address, size, NULL);
tlb_finish_mmu(&tlb);
}
@@ -2187,7 +2186,7 @@ void zap_vma_ptes(struct vm_area_struct *vma, unsigned long address,
!(vma->vm_flags & VM_PFNMAP))
return;
- zap_page_range_single(vma, address, size, NULL);
+ zap_page_range_single(vma, address, size);
}
EXPORT_SYMBOL_GPL(zap_vma_ptes);
@@ -2963,7 +2962,7 @@ static int remap_pfn_range_notrack(struct vm_area_struct *vma, unsigned long add
* maintain page reference counts, and callers may free
* pages due to the error. So zap it early.
*/
- zap_page_range_single(vma, addr, size, NULL);
+ zap_page_range_single(vma, addr, size);
return error;
}
@@ -4187,7 +4186,12 @@ static void unmap_mapping_range_vma(struct vm_area_struct *vma,
unsigned long start_addr, unsigned long end_addr,
struct zap_details *details)
{
- zap_page_range_single(vma, start_addr, end_addr - start_addr, details);
+ struct mmu_gather tlb;
+
+ tlb_gather_mmu(&tlb, vma->vm_mm);
+ zap_page_range_single_batched(&tlb, vma, start_addr,
+ end_addr - start_addr, details);
+ tlb_finish_mmu(&tlb);
}
static inline void unmap_mapping_range_tree(struct rb_root_cached *root,
diff --git a/net/ipv4/tcp.c b/net/ipv4/tcp.c
index d5319ebe2452..9e92c71389f3 100644
--- a/net/ipv4/tcp.c
+++ b/net/ipv4/tcp.c
@@ -2052,7 +2052,7 @@ static int tcp_zerocopy_vm_insert_batch_error(struct vm_area_struct *vma,
maybe_zap_len = total_bytes_to_map - /* All bytes to map */
*length + /* Mapped or pending */
(pages_remaining * PAGE_SIZE); /* Failed map. */
- zap_page_range_single(vma, *address, maybe_zap_len, NULL);
+ zap_page_range_single(vma, *address, maybe_zap_len);
err = 0;
}
@@ -2217,8 +2217,7 @@ static int tcp_zerocopy_receive(struct sock *sk,
total_bytes_to_map = avail_len & ~(PAGE_SIZE - 1);
if (total_bytes_to_map) {
if (!(zc->flags & TCP_RECEIVE_ZEROCOPY_FLAG_TLB_CLEAN_HINT))
- zap_page_range_single(vma, address, total_bytes_to_map,
- NULL);
+ zap_page_range_single(vma, address, total_bytes_to_map);
zc->length = total_bytes_to_map;
zc->recv_skip_hint = 0;
} else {
diff --git a/rust/kernel/mm/virt.rs b/rust/kernel/mm/virt.rs
index da21d65ccd20..b8e59e4420f3 100644
--- a/rust/kernel/mm/virt.rs
+++ b/rust/kernel/mm/virt.rs
@@ -124,7 +124,7 @@ pub fn zap_page_range_single(&self, address: usize, size: usize) {
// sufficient for this method call. This method has no requirements on the vma flags. The
// address range is checked to be within the vma.
unsafe {
- bindings::zap_page_range_single(self.as_ptr(), address, size, core::ptr::null_mut())
+ bindings::zap_page_range_single(self.as_ptr(), address, size)
};
}
--
2.43.0
--
Cheers,
David
On Thu, Feb 05, 2026 at 12:57:04PM +0100, David Hildenbrand (arm) wrote:
Dude, you have to capitalise that 'a' in Arm, it's driving me crazy ;)
Then again should it be ARM? OK this is tricky
[snip]
>
> The following should compile :)
Err... yeah. OK my R-b obviously depends on the code compiling + working
:P But still feel free to add it when you break this out + _test_ it ;)
Cheers, Lorenzo
>
> From b1c35afb1b819a42f4ec1119564b3b37cceb9968 Mon Sep 17 00:00:00 2001
> From: "David Hildenbrand (arm)" <david@kernel.org>
> Date: Thu, 5 Feb 2026 12:42:09 +0100
> Subject: [PATCH] mm/memory: remove "zap_details" parameter from
> zap_page_range_single()
>
> Nobody except memory.c should really set that parameter to non-NULL. So
> let's just drop it and make unmap_mapping_range_vma() use
> zap_page_range_single_batched() instead.
>
> Signed-off-by: David Hildenbrand (arm) <david@kernel.org>
> ---
> arch/s390/mm/gmap_helpers.c | 2 +-
> drivers/android/binder_alloc.c | 2 +-
> include/linux/mm.h | 5 ++---
> kernel/bpf/arena.c | 3 +--
> kernel/events/core.c | 2 +-
> mm/madvise.c | 3 +--
> mm/memory.c | 16 ++++++++++------
> net/ipv4/tcp.c | 5 ++---
> rust/kernel/mm/virt.rs | 2 +-
> 9 files changed, 20 insertions(+), 20 deletions(-)
>
> diff --git a/arch/s390/mm/gmap_helpers.c b/arch/s390/mm/gmap_helpers.c
> index d41b19925a5a..859f5570c3dc 100644
> --- a/arch/s390/mm/gmap_helpers.c
> +++ b/arch/s390/mm/gmap_helpers.c
> @@ -102,7 +102,7 @@ void gmap_helper_discard(struct mm_struct *mm, unsigned long vmaddr, unsigned lo
> if (!vma)
> return;
> if (!is_vm_hugetlb_page(vma))
> - zap_page_range_single(vma, vmaddr, min(end, vma->vm_end) - vmaddr, NULL);
> + zap_page_range_single(vma, vmaddr, min(end, vma->vm_end) - vmaddr);
> vmaddr = vma->vm_end;
> }
> }
> diff --git a/drivers/android/binder_alloc.c b/drivers/android/binder_alloc.c
> index 979c96b74cad..b0201bc6893a 100644
> --- a/drivers/android/binder_alloc.c
> +++ b/drivers/android/binder_alloc.c
> @@ -1186,7 +1186,7 @@ enum lru_status binder_alloc_free_page(struct list_head *item,
> if (vma) {
> trace_binder_unmap_user_start(alloc, index);
> - zap_page_range_single(vma, page_addr, PAGE_SIZE, NULL);
> + zap_page_range_single(vma, page_addr, PAGE_SIZE);
> trace_binder_unmap_user_end(alloc, index);
> }
> diff --git a/include/linux/mm.h b/include/linux/mm.h
> index f0d5be9dc736..5764991546bb 100644
> --- a/include/linux/mm.h
> +++ b/include/linux/mm.h
> @@ -2621,11 +2621,10 @@ struct page *vm_normal_page_pud(struct vm_area_struct *vma, unsigned long addr,
> void zap_vma_ptes(struct vm_area_struct *vma, unsigned long address,
> unsigned long size);
> void zap_page_range_single(struct vm_area_struct *vma, unsigned long address,
> - unsigned long size, struct zap_details *details);
> + unsigned long size);
> static inline void zap_vma_pages(struct vm_area_struct *vma)
> {
> - zap_page_range_single(vma, vma->vm_start,
> - vma->vm_end - vma->vm_start, NULL);
> + zap_page_range_single(vma, vma->vm_start, vma->vm_end - vma->vm_start);
> }
> void unmap_vmas(struct mmu_gather *tlb, struct ma_state *mas,
> struct vm_area_struct *start_vma, unsigned long start,
> diff --git a/kernel/bpf/arena.c b/kernel/bpf/arena.c
> index 872dc0e41c65..242c931d3740 100644
> --- a/kernel/bpf/arena.c
> +++ b/kernel/bpf/arena.c
> @@ -503,8 +503,7 @@ static void zap_pages(struct bpf_arena *arena, long uaddr, long page_cnt)
> struct vma_list *vml;
> list_for_each_entry(vml, &arena->vma_list, head)
> - zap_page_range_single(vml->vma, uaddr,
> - PAGE_SIZE * page_cnt, NULL);
> + zap_page_range_single(vml->vma, uaddr, PAGE_SIZE * page_cnt);
> }
> static void arena_free_pages(struct bpf_arena *arena, long uaddr, long page_cnt)
> diff --git a/kernel/events/core.c b/kernel/events/core.c
> index 8cca80094624..1dfb33c39c2f 100644
> --- a/kernel/events/core.c
> +++ b/kernel/events/core.c
> @@ -6926,7 +6926,7 @@ static int map_range(struct perf_buffer *rb, struct vm_area_struct *vma)
> #ifdef CONFIG_MMU
> /* Clear any partial mappings on error. */
> if (err)
> - zap_page_range_single(vma, vma->vm_start, nr_pages * PAGE_SIZE, NULL);
> + zap_page_range_single(vma, vma->vm_start, nr_pages * PAGE_SIZE);
> #endif
> return err;
> diff --git a/mm/madvise.c b/mm/madvise.c
> index b617b1be0f53..abcbfd1f0662 100644
> --- a/mm/madvise.c
> +++ b/mm/madvise.c
> @@ -1200,8 +1200,7 @@ static long madvise_guard_install(struct madvise_behavior *madv_behavior)
> * OK some of the range have non-guard pages mapped, zap
> * them. This leaves existing guard pages in place.
> */
> - zap_page_range_single(vma, range->start,
> - range->end - range->start, NULL);
> + zap_page_range_single(vma, range->start, range->end - range->start);
> }
> /*
> diff --git a/mm/memory.c b/mm/memory.c
> index da360a6eb8a4..82985da5f7e6 100644
> --- a/mm/memory.c
> +++ b/mm/memory.c
> @@ -2155,17 +2155,16 @@ void zap_page_range_single_batched(struct mmu_gather *tlb,
> * @vma: vm_area_struct holding the applicable pages
> * @address: starting address of pages to zap
> * @size: number of bytes to zap
> - * @details: details of shared cache invalidation
> *
> * The range must fit into one VMA.
> */
> void zap_page_range_single(struct vm_area_struct *vma, unsigned long address,
> - unsigned long size, struct zap_details *details)
> + unsigned long size)
> {
> struct mmu_gather tlb;
> tlb_gather_mmu(&tlb, vma->vm_mm);
> - zap_page_range_single_batched(&tlb, vma, address, size, details);
> + zap_page_range_single_batched(&tlb, vma, address, size, NULL);
> tlb_finish_mmu(&tlb);
> }
> @@ -2187,7 +2186,7 @@ void zap_vma_ptes(struct vm_area_struct *vma, unsigned long address,
> !(vma->vm_flags & VM_PFNMAP))
> return;
> - zap_page_range_single(vma, address, size, NULL);
> + zap_page_range_single(vma, address, size);
> }
> EXPORT_SYMBOL_GPL(zap_vma_ptes);
> @@ -2963,7 +2962,7 @@ static int remap_pfn_range_notrack(struct vm_area_struct *vma, unsigned long add
> * maintain page reference counts, and callers may free
> * pages due to the error. So zap it early.
> */
> - zap_page_range_single(vma, addr, size, NULL);
> + zap_page_range_single(vma, addr, size);
> return error;
> }
> @@ -4187,7 +4186,12 @@ static void unmap_mapping_range_vma(struct vm_area_struct *vma,
> unsigned long start_addr, unsigned long end_addr,
> struct zap_details *details)
> {
> - zap_page_range_single(vma, start_addr, end_addr - start_addr, details);
> + struct mmu_gather tlb;
> +
> + tlb_gather_mmu(&tlb, vma->vm_mm);
> + zap_page_range_single_batched(&tlb, vma, start_addr,
> + end_addr - start_addr, details);
> + tlb_finish_mmu(&tlb);
> }
> static inline void unmap_mapping_range_tree(struct rb_root_cached *root,
> diff --git a/net/ipv4/tcp.c b/net/ipv4/tcp.c
> index d5319ebe2452..9e92c71389f3 100644
> --- a/net/ipv4/tcp.c
> +++ b/net/ipv4/tcp.c
> @@ -2052,7 +2052,7 @@ static int tcp_zerocopy_vm_insert_batch_error(struct vm_area_struct *vma,
> maybe_zap_len = total_bytes_to_map - /* All bytes to map */
> *length + /* Mapped or pending */
> (pages_remaining * PAGE_SIZE); /* Failed map. */
> - zap_page_range_single(vma, *address, maybe_zap_len, NULL);
> + zap_page_range_single(vma, *address, maybe_zap_len);
> err = 0;
> }
> @@ -2217,8 +2217,7 @@ static int tcp_zerocopy_receive(struct sock *sk,
> total_bytes_to_map = avail_len & ~(PAGE_SIZE - 1);
> if (total_bytes_to_map) {
> if (!(zc->flags & TCP_RECEIVE_ZEROCOPY_FLAG_TLB_CLEAN_HINT))
> - zap_page_range_single(vma, address, total_bytes_to_map,
> - NULL);
> + zap_page_range_single(vma, address, total_bytes_to_map);
> zc->length = total_bytes_to_map;
> zc->recv_skip_hint = 0;
> } else {
> diff --git a/rust/kernel/mm/virt.rs b/rust/kernel/mm/virt.rs
> index da21d65ccd20..b8e59e4420f3 100644
> --- a/rust/kernel/mm/virt.rs
> +++ b/rust/kernel/mm/virt.rs
> @@ -124,7 +124,7 @@ pub fn zap_page_range_single(&self, address: usize, size: usize) {
> // sufficient for this method call. This method has no requirements on the vma flags. The
> // address range is checked to be within the vma.
> unsafe {
> - bindings::zap_page_range_single(self.as_ptr(), address, size, core::ptr::null_mut())
> + bindings::zap_page_range_single(self.as_ptr(), address, size)
> };
> }
> --
> 2.43.0
>
>
> --
> Cheers,
>
> David
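For illustration only (not part of the patch above): with this change, a caller that does not need zap_details keeps using the short form, while a caller that still does batches explicitly via zap_page_range_single_batched(). This is a minimal sketch assuming the signatures shown in the diff; the example_* names are hypothetical.

#include <linux/mm.h>
#include <asm/tlb.h>

/* Common case: no zap_details needed, use the simplified helper. */
static void example_zap_whole_vma(struct vm_area_struct *vma)
{
	zap_page_range_single(vma, vma->vm_start,
			      vma->vm_end - vma->vm_start);
}

/* mm-internal case: zap_details still needed, so batch explicitly. */
static void example_zap_with_details(struct vm_area_struct *vma,
				     unsigned long addr, unsigned long size,
				     struct zap_details *details)
{
	struct mmu_gather tlb;

	tlb_gather_mmu(&tlb, vma->vm_mm);
	zap_page_range_single_batched(&tlb, vma, addr, size, details);
	tlb_finish_mmu(&tlb);
}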
On 2/5/26 13:01, Lorenzo Stoakes wrote:
> On Thu, Feb 05, 2026 at 12:57:04PM +0100, David Hildenbrand (arm) wrote:
>
> Dude you have to capitalise that 'a' in Arm it's driving me crazy ;)
>
> Then again should it be ARM? OK this is tricky

Yeah, I should likely adjust that at some point. For the time being, I enjoy
driving you crazy :P

> [snip]
>
> >
> > The following should compile :)
>
> Err... yeah. OK my R-b obviously depends on the code being compiling + working
> :P But still feel free to add when you break it out + _test_ it ;)

Better to add your R-b when I send it out officially :)

--
Cheers,

David
On Thu, Feb 05, 2026 at 01:06:10PM +0100, David Hildenbrand (arm) wrote:
> On 2/5/26 13:01, Lorenzo Stoakes wrote:
> > On Thu, Feb 05, 2026 at 12:57:04PM +0100, David Hildenbrand (arm) wrote:
> >
> > Dude you have to capitalise that 'a' in Arm it's driving me crazy ;)
> >
> > Then again should it be ARM? OK this is tricky
>
> Yeah, I should likely adjust that at some point. For the time being, I enjoy
> driving you crazy :P

:D

>
> >
> > [snip]
> >
> > >
> > > The following should compile :)
> >
> > Err... yeah. OK my R-b obviously depends on the code being compiling + working
> > :P But still feel free to add when you break it out + _test_ it ;)
>
> Better to add your R-b when I send it out officially :)

Haha yeah maybe...

>
> --
> Cheers,
>
> David
On 2/5/26 11:51, Alice Ryhl wrote:
> These are the functions needed by Binder's shrinker.
>
> Binder uses zap_page_range_single in the shrinker path to remove an
> unused page from the mmap'd region. Note that pages are only removed
> from the mmap'd region lazily when shrinker asks for it.
>
> Binder uses list_lru_add/del to keep track of the shrinker lru list, and
> it can't use _obj because the list head is not stored inline in the page
> actually being lru freed, so page_to_nid(virt_to_page(item)) on the list
> head computes the nid of the wrong page.
>
> Signed-off-by: Alice Ryhl <aliceryhl@google.com>
> ---
> mm/list_lru.c | 2 ++
> mm/memory.c | 1 +
> 2 files changed, 3 insertions(+)
>
> diff --git a/mm/list_lru.c b/mm/list_lru.c
> index ec48b5dadf519a5296ac14cda035c067f9e448f8..bf95d73c9815548a19db6345f856cee9baad22e3 100644
> --- a/mm/list_lru.c
> +++ b/mm/list_lru.c
> @@ -179,6 +179,7 @@ bool list_lru_add(struct list_lru *lru, struct list_head *item, int nid,
> unlock_list_lru(l, false);
> return false;
> }
> +EXPORT_SYMBOL_GPL(list_lru_add);
>
> bool list_lru_add_obj(struct list_lru *lru, struct list_head *item)
> {
> @@ -216,6 +217,7 @@ bool list_lru_del(struct list_lru *lru, struct list_head *item, int nid,
> unlock_list_lru(l, false);
> return false;
> }
> +EXPORT_SYMBOL_GPL(list_lru_del);
>
> bool list_lru_del_obj(struct list_lru *lru, struct list_head *item)
> {
> diff --git a/mm/memory.c b/mm/memory.c
> index da360a6eb8a48e29293430d0c577fb4b6ec58099..64083ace239a2caf58e1645dd5d91a41d61492c4 100644
> --- a/mm/memory.c
> +++ b/mm/memory.c
> @@ -2168,6 +2168,7 @@ void zap_page_range_single(struct vm_area_struct *vma, unsigned long address,
> zap_page_range_single_batched(&tlb, vma, address, size, details);
> tlb_finish_mmu(&tlb);
> }
> +EXPORT_SYMBOL(zap_page_range_single);
Why not EXPORT_SYMBOL_GPL?
--
Cheers,
David
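As an aside on the list_lru point in the quoted commit message: a minimal, hypothetical sketch (not Binder's actual code) of why the _obj variants don't fit when the list_head is not embedded in the page being tracked, so the nid has to be passed explicitly.

#include <linux/list_lru.h>
#include <linux/mm.h>

/* Hypothetical metadata struct; the list_head tracks a separate page. */
struct example_lru_entry {
	struct list_head lru;	/* lives in this metadata allocation ... */
	struct page *page;	/* ... not inside the page being LRU-tracked */
};

static void example_track_page(struct list_lru *lru,
			       struct example_lru_entry *e)
{
	/*
	 * list_lru_add_obj() would derive the nid via
	 * page_to_nid(virt_to_page(&e->lru)), i.e. from the metadata
	 * allocation rather than from e->page. Pass the nid of the page
	 * actually being tracked instead.
	 */
	list_lru_add(lru, &e->lru, page_to_nid(e->page), NULL);
}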
On Thu, Feb 05, 2026 at 11:59:47AM +0100, David Hildenbrand (arm) wrote:
> On 2/5/26 11:51, Alice Ryhl wrote:
> > These are the functions needed by Binder's shrinker.
> >
> > Binder uses zap_page_range_single in the shrinker path to remove an
> > unused page from the mmap'd region. Note that pages are only removed
> > from the mmap'd region lazily when shrinker asks for it.
> >
> > Binder uses list_lru_add/del to keep track of the shrinker lru list, and
> > it can't use _obj because the list head is not stored inline in the page
> > actually being lru freed, so page_to_nid(virt_to_page(item)) on the list
> > head computes the nid of the wrong page.
> >
> > Signed-off-by: Alice Ryhl <aliceryhl@google.com>
> > ---
> > mm/list_lru.c | 2 ++
> > mm/memory.c | 1 +
> > 2 files changed, 3 insertions(+)
> >
> > diff --git a/mm/list_lru.c b/mm/list_lru.c
> > index ec48b5dadf519a5296ac14cda035c067f9e448f8..bf95d73c9815548a19db6345f856cee9baad22e3 100644
> > --- a/mm/list_lru.c
> > +++ b/mm/list_lru.c
> > @@ -179,6 +179,7 @@ bool list_lru_add(struct list_lru *lru, struct list_head *item, int nid,
> > unlock_list_lru(l, false);
> > return false;
> > }
> > +EXPORT_SYMBOL_GPL(list_lru_add);
> > bool list_lru_add_obj(struct list_lru *lru, struct list_head *item)
> > {
> > @@ -216,6 +217,7 @@ bool list_lru_del(struct list_lru *lru, struct list_head *item, int nid,
> > unlock_list_lru(l, false);
> > return false;
> > }
> > +EXPORT_SYMBOL_GPL(list_lru_del);
> > bool list_lru_del_obj(struct list_lru *lru, struct list_head *item)
> > {
> > diff --git a/mm/memory.c b/mm/memory.c
> > index da360a6eb8a48e29293430d0c577fb4b6ec58099..64083ace239a2caf58e1645dd5d91a41d61492c4 100644
> > --- a/mm/memory.c
> > +++ b/mm/memory.c
> > @@ -2168,6 +2168,7 @@ void zap_page_range_single(struct vm_area_struct *vma, unsigned long address,
> > zap_page_range_single_batched(&tlb, vma, address, size, details);
> > tlb_finish_mmu(&tlb);
> > }
> > +EXPORT_SYMBOL(zap_page_range_single);
>
> Why not EXPORT_SYMBOL_GPL?
I just tried to match other symbols in the same file.
Alice
On 2/5/26 12:04, Alice Ryhl wrote:
> On Thu, Feb 05, 2026 at 11:59:47AM +0100, David Hildenbrand (arm) wrote:
>> On 2/5/26 11:51, Alice Ryhl wrote:
>>> These are the functions needed by Binder's shrinker.
>>>
>>> Binder uses zap_page_range_single in the shrinker path to remove an
>>> unused page from the mmap'd region. Note that pages are only removed
>>> from the mmap'd region lazily when shrinker asks for it.
>>>
>>> Binder uses list_lru_add/del to keep track of the shrinker lru list, and
>>> it can't use _obj because the list head is not stored inline in the page
>>> actually being lru freed, so page_to_nid(virt_to_page(item)) on the list
>>> head computes the nid of the wrong page.
>>>
>>> Signed-off-by: Alice Ryhl <aliceryhl@google.com>
>>> ---
>>> mm/list_lru.c | 2 ++
>>> mm/memory.c | 1 +
>>> 2 files changed, 3 insertions(+)
>>>
>>> diff --git a/mm/list_lru.c b/mm/list_lru.c
>>> index ec48b5dadf519a5296ac14cda035c067f9e448f8..bf95d73c9815548a19db6345f856cee9baad22e3 100644
>>> --- a/mm/list_lru.c
>>> +++ b/mm/list_lru.c
>>> @@ -179,6 +179,7 @@ bool list_lru_add(struct list_lru *lru, struct list_head *item, int nid,
>>> unlock_list_lru(l, false);
>>> return false;
>>> }
>>> +EXPORT_SYMBOL_GPL(list_lru_add);
>>> bool list_lru_add_obj(struct list_lru *lru, struct list_head *item)
>>> {
>>> @@ -216,6 +217,7 @@ bool list_lru_del(struct list_lru *lru, struct list_head *item, int nid,
>>> unlock_list_lru(l, false);
>>> return false;
>>> }
>>> +EXPORT_SYMBOL_GPL(list_lru_del);
>>> bool list_lru_del_obj(struct list_lru *lru, struct list_head *item)
>>> {
>>> diff --git a/mm/memory.c b/mm/memory.c
>>> index da360a6eb8a48e29293430d0c577fb4b6ec58099..64083ace239a2caf58e1645dd5d91a41d61492c4 100644
>>> --- a/mm/memory.c
>>> +++ b/mm/memory.c
>>> @@ -2168,6 +2168,7 @@ void zap_page_range_single(struct vm_area_struct *vma, unsigned long address,
>>> zap_page_range_single_batched(&tlb, vma, address, size, details);
>>> tlb_finish_mmu(&tlb);
>>> }
>>> +EXPORT_SYMBOL(zap_page_range_single);
>>
>> Why not EXPORT_SYMBOL_GPL?
>
> I just tried to match other symbols in the same file.
We were probably a bit too sloppy with some of these in the past. But:
davhil01@e142025:~/git/linux$ grep -c "EXPORT_SYMBOL(" mm/memory.c
12
davhil01@e142025:~/git/linux$ grep -c "EXPORT_SYMBOL_GPL(" mm/memory.c
10
So just go with EXPORT_SYMBOL_GPL unless there is a good reason why not.
--
Cheers,
David
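For readers unfamiliar with the distinction: a symbol exported with EXPORT_SYMBOL_GPL() can only be resolved by modules that declare a GPL-compatible license, whereas a plain EXPORT_SYMBOL() is usable by any module. A minimal, hypothetical consumer sketch (not from this thread):

#include <linux/module.h>
#include <linux/mm.h>

/* Required to link against EXPORT_SYMBOL_GPL() exports. */
MODULE_LICENSE("GPL");
MODULE_DESCRIPTION("Hypothetical consumer of a GPL-only export");

static int __init example_init(void)
{
	/*
	 * With EXPORT_SYMBOL_GPL(zap_page_range_single), this module may
	 * call it (given a suitable vma); a non-GPL module would fail to
	 * resolve the symbol at load time.
	 */
	return 0;
}
module_init(example_init);

static void __exit example_exit(void)
{
}
module_exit(example_exit);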
On Thu, Feb 05, 2026 at 12:12:23PM +0100, David Hildenbrand (arm) wrote:
> On 2/5/26 12:04, Alice Ryhl wrote:
> > On Thu, Feb 05, 2026 at 11:59:47AM +0100, David Hildenbrand (arm) wrote:
> > > On 2/5/26 11:51, Alice Ryhl wrote:
> > > > diff --git a/mm/list_lru.c b/mm/list_lru.c
> > > > index ec48b5dadf519a5296ac14cda035c067f9e448f8..bf95d73c9815548a19db6345f856cee9baad22e3 100644
> > > > --- a/mm/list_lru.c
> > > > +++ b/mm/list_lru.c
> > > > @@ -179,6 +179,7 @@ bool list_lru_add(struct list_lru *lru, struct list_head *item, int nid,
> > > > unlock_list_lru(l, false);
> > > > return false;
> > > > }
> > > > +EXPORT_SYMBOL_GPL(list_lru_add);
> > > > bool list_lru_add_obj(struct list_lru *lru, struct list_head *item)
> > > > {
> > > > @@ -216,6 +217,7 @@ bool list_lru_del(struct list_lru *lru, struct list_head *item, int nid,
> > > > unlock_list_lru(l, false);
> > > > return false;
> > > > }
> > > > +EXPORT_SYMBOL_GPL(list_lru_del);
> > > > bool list_lru_del_obj(struct list_lru *lru, struct list_head *item)
> > > > {
> > > > diff --git a/mm/memory.c b/mm/memory.c
> > > > index da360a6eb8a48e29293430d0c577fb4b6ec58099..64083ace239a2caf58e1645dd5d91a41d61492c4 100644
> > > > --- a/mm/memory.c
> > > > +++ b/mm/memory.c
> > > > @@ -2168,6 +2168,7 @@ void zap_page_range_single(struct vm_area_struct *vma, unsigned long address,
> > > > zap_page_range_single_batched(&tlb, vma, address, size, details);
> > > > tlb_finish_mmu(&tlb);
> > > > }
> > > > +EXPORT_SYMBOL(zap_page_range_single);
> > >
> > > Why not EXPORT_SYMBOL_GPL?
> >
> > I just tried to match other symbols in the same file.
>
> We were probably a bit too sloppy with some of these in the past. But:
>
> davhil01@e142025:~/git/linux$ grep -c "EXPORT_SYMBOL(" mm/memory.c
> 12
> davhil01@e142025:~/git/linux$ grep -c "EXPORT_SYMBOL_GPL(" mm/memory.c
> 10
>
> So just go with EXPORT_SYMBOL_GPL unless there is a good reason why not.
Sounds good, I'll do that in the next version.
Alice
On 2/5/26 12:18, Alice Ryhl wrote:
> On Thu, Feb 05, 2026 at 12:12:23PM +0100, David Hildenbrand (arm) wrote:
>> On 2/5/26 12:04, Alice Ryhl wrote:
>>>
>>> I just tried to match other symbols in the same file.
>>
>> We were probably a bit too sloppy with some of these in the past. But:
>>
>> davhil01@e142025:~/git/linux$ grep -c "EXPORT_SYMBOL(" mm/memory.c
>> 12
>> davhil01@e142025:~/git/linux$ grep -c "EXPORT_SYMBOL_GPL(" mm/memory.c
>> 10
>>
>> So just go with EXPORT_SYMBOL_GPL unless there is a good reason why not.
>
> Sounds good, I'll do that in the next version.
Feel free to add my
Acked-by: David Hildenbrand (arm) <david@kernel.org>
--
Cheers,
David