The mmap_miss counter in do_sync_mmap_readahead() tracks whether
readahead is useful for mmap'd file access. It is incremented by 1 on
every page cache miss in do_sync_mmap_readahead(), and decremented in
two places:

- filemap_map_pages(): decremented once for each page successfully
  mapped via fault-around (pages found already in the cache are
  evidence that readahead was useful). Only pages not in the
  workingset count as hits.

- do_async_mmap_readahead(): decremented by 1 when a page with
  PG_readahead is found in cache.

When the counter exceeds MMAP_LOTSAMISS (100), all readahead is
disabled, including the targeted VM_EXEC readahead [1] that requests
arch-preferred folio orders for contpte mapping.
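
For reference, the gating in question looks roughly like this in
do_sync_mmap_readahead() (simplified sketch, not the literal code; see
the diff context below):

	if (!(vm_flags & VM_SEQ_READ)) {
		/* Avoid banging the cache line if not needed */
		mmap_miss = READ_ONCE(ra->mmap_miss);
		if (mmap_miss < MMAP_LOTSAMISS * 10)
			WRITE_ONCE(ra->mmap_miss, ++mmap_miss);

		/*
		 * Too many misses: give up on readahead for this file.
		 * Today this bail-out also skips the VM_EXEC block
		 * further down.
		 */
		if (mmap_miss > MMAP_LOTSAMISS)
			return fpin;
	}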

On arm64 with 64K base pages, both decrement paths are inactive:

1. filemap_map_pages() is never called because fault_around_pages
   (65536 >> PAGE_SHIFT = 1) disables should_fault_around(), which
   requires fault_around_pages > 1. With only 1 page in the
   fault-around window, there is nothing "around" to map.

2. do_async_mmap_readahead() never fires for exec mappings because
   exec readahead sets async_size = 0, so no PG_readahead markers
   are placed.

With no decrements, mmap_miss monotonically increases past
MMAP_LOTSAMISS after 100 page faults, disabling all subsequent
exec readahead.
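
To spell out the arithmetic behind point 1 (paraphrasing the default
fault-around window and the should_fault_around() condition, not the
literal code):

	/* 64K base pages mean PAGE_SHIFT == 16, so the default
	 * fault-around window is a single page: */
	fault_around_pages = 65536 >> PAGE_SHIFT;	/* 65536 >> 16 == 1 */

	/* should_fault_around() requires a window larger than one page,
	 * so fault-around, and with it the filemap_map_pages() path
	 * that decrements mmap_miss, never runs: */
	return fault_around_pages > 1;			/* always false here */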

Fix this by moving the VM_EXEC readahead block above the mmap_miss
check. The exec readahead path is targeted: it reads a single folio at
the fault location with async_size = 0, not a speculative prefetch, so
the mmap_miss heuristic, which exists to throttle wasteful speculative
readahead, should not gate it. The page would need to be faulted in
regardless; the only question is at what order.

[1] https://lore.kernel.org/all/20250430145920.3748738-6-ryan.roberts@arm.com/

Signed-off-by: Usama Arif <usama.arif@linux.dev>
---
 mm/filemap.c | 72 ++++++++++++++++++++++++++++------------------------
 1 file changed, 39 insertions(+), 33 deletions(-)

diff --git a/mm/filemap.c b/mm/filemap.c
index 6cd7974d4adab..c064f31ecec5a 100644
--- a/mm/filemap.c
+++ b/mm/filemap.c
@@ -3331,6 +3331,37 @@ static struct file *do_sync_mmap_readahead(struct vm_fault *vmf)
}
}

+ if (vm_flags & VM_EXEC) {
+ /*
+ * Allow arch to request a preferred minimum folio order for
+ * executable memory. This can often be beneficial to
+ * performance if (e.g.) arm64 can contpte-map the folio.
+ * Executable memory rarely benefits from readahead, due to its
+ * random access nature, so set async_size to 0.
+ *
+ * Limit to the boundaries of the VMA to avoid reading in any
+ * pad that might exist between sections, which would be a waste
+ * of memory.
+ *
+ * This is targeted readahead (one folio at the fault location),
+ * not speculative prefetch, so bypass the mmap_miss heuristic
+ * which would otherwise disable it after MMAP_LOTSAMISS faults.
+ */
+ struct vm_area_struct *vma = vmf->vma;
+ unsigned long start = vma->vm_pgoff;
+ unsigned long end = start + vma_pages(vma);
+ unsigned long ra_end;
+
+ ra->order = exec_folio_order();
+ ra->start = round_down(vmf->pgoff, 1UL << ra->order);
+ ra->start = max(ra->start, start);
+ ra_end = round_up(ra->start + ra->ra_pages, 1UL << ra->order);
+ ra_end = min(ra_end, end);
+ ra->size = ra_end - ra->start;
+ ra->async_size = 0;
+ goto do_readahead;
+ }
+
if (!(vm_flags & VM_SEQ_READ)) {
/* Avoid banging the cache line if not needed */
mmap_miss = READ_ONCE(ra->mmap_miss);
@@ -3361,40 +3392,15 @@ static struct file *do_sync_mmap_readahead(struct vm_fault *vmf)
return fpin;
}

- if (vm_flags & VM_EXEC) {
- /*
- * Allow arch to request a preferred minimum folio order for
- * executable memory. This can often be beneficial to
- * performance if (e.g.) arm64 can contpte-map the folio.
- * Executable memory rarely benefits from readahead, due to its
- * random access nature, so set async_size to 0.
- *
- * Limit to the boundaries of the VMA to avoid reading in any
- * pad that might exist between sections, which would be a waste
- * of memory.
- */
- struct vm_area_struct *vma = vmf->vma;
- unsigned long start = vma->vm_pgoff;
- unsigned long end = start + vma_pages(vma);
- unsigned long ra_end;
-
- ra->order = exec_folio_order();
- ra->start = round_down(vmf->pgoff, 1UL << ra->order);
- ra->start = max(ra->start, start);
- ra_end = round_up(ra->start + ra->ra_pages, 1UL << ra->order);
- ra_end = min(ra_end, end);
- ra->size = ra_end - ra->start;
- ra->async_size = 0;
- } else {
- /*
- * mmap read-around
- */
- ra->start = max_t(long, 0, vmf->pgoff - ra->ra_pages / 2);
- ra->size = ra->ra_pages;
- ra->async_size = ra->ra_pages / 4;
- ra->order = 0;
- }
+ /*
+ * mmap read-around
+ */
+ ra->start = max_t(long, 0, vmf->pgoff - ra->ra_pages / 2);
+ ra->size = ra->ra_pages;
+ ra->async_size = ra->ra_pages / 4;
+ ra->order = 0;

+do_readahead:
fpin = maybe_unlock_mmap_for_io(vmf, fpin);
ractl._index = ra->start;
page_cache_ra_order(&ractl, ra);
--
2.47.3

On Tue 10-03-26 07:51:15, Usama Arif wrote:
> The mmap_miss counter in do_sync_mmap_readahead() tracks whether
> readahead is useful for mmap'd file access. It is incremented by 1 on
> every page cache miss in do_sync_mmap_readahead(), and decremented in
> two places:
>
> - filemap_map_pages(): decremented by N for each of N pages
> successfully mapped via fault-around (pages found already in cache,
> evidence readahead was useful). Only pages not in the workingset
> count as hits.
>
> - do_async_mmap_readahead(): decremented by 1 when a page with
> PG_readahead is found in cache.
>
> When the counter exceeds MMAP_LOTSAMISS (100), all readahead is
> disabled, including the targeted VM_EXEC readahead [1] that requests
> arch-preferred folio orders for contpte mapping.
>
> On arm64 with 64K base pages, both decrement paths are inactive:
>
> 1. filemap_map_pages() is never called because fault_around_pages
> (65536 >> PAGE_SHIFT = 1) disables should_fault_around(), which
> requires fault_around_pages > 1. With only 1 page in the
> fault-around window, there is nothing "around" to map.
>
> 2. do_async_mmap_readahead() never fires for exec mappings because
> exec readahead sets async_size = 0, so no PG_readahead markers
> are placed.
>
> With no decrements, mmap_miss monotonically increases past
> MMAP_LOTSAMISS after 100 page faults, disabling all subsequent
> exec readahead.
>
> Fix this by moving the VM_EXEC readahead block above the mmap_miss
> check. The exec readahead path is targeted. It reads a single folio at
> the fault location with async_size=0, not speculative prefetch, so the
> mmap_miss heuristic designed to throttle wasteful speculative readahead
> should not gate it. The page would need to be faulted in regardless,
> the only question is at what order.
>
> [1] https://lore.kernel.org/all/20250430145920.3748738-6-ryan.roberts@arm.com/
>
> Signed-off-by: Usama Arif <usama.arif@linux.dev>

I can see the problem but I'm not sure what you propose is the right fix.
If you move the VM_EXEC logic earlier, you'll effectively disable
VM_HUGEPAGE handling for VM_EXEC vmas, which I don't think we want. So
shouldn't we rather disable the mmap_miss logic for VM_EXEC vmas, like:

	if (!(vm_flags & (VM_SEQ_READ | VM_EXEC))) {
		...
	}
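
I.e. keep the VM_EXEC block where it is and just widen the existing
check, something like (completely untested, just to sketch the idea):

	if (!(vm_flags & (VM_SEQ_READ | VM_EXEC))) {
		/* Avoid banging the cache line if not needed */
		mmap_miss = READ_ONCE(ra->mmap_miss);
		if (mmap_miss < MMAP_LOTSAMISS * 10)
			WRITE_ONCE(ra->mmap_miss, ++mmap_miss);

		/*
		 * Too many misses: stop bothering with readahead. VM_EXEC
		 * vmas no longer take this bail-out, so the exec block
		 * further down keeps running for them.
		 */
		if (mmap_miss > MMAP_LOTSAMISS)
			return fpin;
	}

That way exec mappings neither bump mmap_miss nor get throttled by it,
while they still flow through the rest of the function unchanged.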
Honza
> ---
> mm/filemap.c | 72 ++++++++++++++++++++++++++++------------------------
> 1 file changed, 39 insertions(+), 33 deletions(-)
>
> diff --git a/mm/filemap.c b/mm/filemap.c
> index 6cd7974d4adab..c064f31ecec5a 100644
> --- a/mm/filemap.c
> +++ b/mm/filemap.c
> @@ -3331,6 +3331,37 @@ static struct file *do_sync_mmap_readahead(struct vm_fault *vmf)
> }
> }
>
> + if (vm_flags & VM_EXEC) {
> + /*
> + * Allow arch to request a preferred minimum folio order for
> + * executable memory. This can often be beneficial to
> + * performance if (e.g.) arm64 can contpte-map the folio.
> + * Executable memory rarely benefits from readahead, due to its
> + * random access nature, so set async_size to 0.
> + *
> + * Limit to the boundaries of the VMA to avoid reading in any
> + * pad that might exist between sections, which would be a waste
> + * of memory.
> + *
> + * This is targeted readahead (one folio at the fault location),
> + * not speculative prefetch, so bypass the mmap_miss heuristic
> + * which would otherwise disable it after MMAP_LOTSAMISS faults.
> + */
> + struct vm_area_struct *vma = vmf->vma;
> + unsigned long start = vma->vm_pgoff;
> + unsigned long end = start + vma_pages(vma);
> + unsigned long ra_end;
> +
> + ra->order = exec_folio_order();
> + ra->start = round_down(vmf->pgoff, 1UL << ra->order);
> + ra->start = max(ra->start, start);
> + ra_end = round_up(ra->start + ra->ra_pages, 1UL << ra->order);
> + ra_end = min(ra_end, end);
> + ra->size = ra_end - ra->start;
> + ra->async_size = 0;
> + goto do_readahead;
> + }
> +
> if (!(vm_flags & VM_SEQ_READ)) {
> /* Avoid banging the cache line if not needed */
> mmap_miss = READ_ONCE(ra->mmap_miss);
> @@ -3361,40 +3392,15 @@ static struct file *do_sync_mmap_readahead(struct vm_fault *vmf)
> return fpin;
> }
>
> - if (vm_flags & VM_EXEC) {
> - /*
> - * Allow arch to request a preferred minimum folio order for
> - * executable memory. This can often be beneficial to
> - * performance if (e.g.) arm64 can contpte-map the folio.
> - * Executable memory rarely benefits from readahead, due to its
> - * random access nature, so set async_size to 0.
> - *
> - * Limit to the boundaries of the VMA to avoid reading in any
> - * pad that might exist between sections, which would be a waste
> - * of memory.
> - */
> - struct vm_area_struct *vma = vmf->vma;
> - unsigned long start = vma->vm_pgoff;
> - unsigned long end = start + vma_pages(vma);
> - unsigned long ra_end;
> -
> - ra->order = exec_folio_order();
> - ra->start = round_down(vmf->pgoff, 1UL << ra->order);
> - ra->start = max(ra->start, start);
> - ra_end = round_up(ra->start + ra->ra_pages, 1UL << ra->order);
> - ra_end = min(ra_end, end);
> - ra->size = ra_end - ra->start;
> - ra->async_size = 0;
> - } else {
> - /*
> - * mmap read-around
> - */
> - ra->start = max_t(long, 0, vmf->pgoff - ra->ra_pages / 2);
> - ra->size = ra->ra_pages;
> - ra->async_size = ra->ra_pages / 4;
> - ra->order = 0;
> - }
> + /*
> + * mmap read-around
> + */
> + ra->start = max_t(long, 0, vmf->pgoff - ra->ra_pages / 2);
> + ra->size = ra->ra_pages;
> + ra->async_size = ra->ra_pages / 4;
> + ra->order = 0;
>
> +do_readahead:
> fpin = maybe_unlock_mmap_for_io(vmf, fpin);
> ractl._index = ra->start;
> page_cache_ra_order(&ractl, ra);
> --
> 2.47.3
>
--
Jan Kara <jack@suse.com>
SUSE Labs, CR

On 3/18/26 17:43, Jan Kara wrote:
> On Tue 10-03-26 07:51:15, Usama Arif wrote:
>> The mmap_miss counter in do_sync_mmap_readahead() tracks whether
>> readahead is useful for mmap'd file access. It is incremented by 1 on
>> every page cache miss in do_sync_mmap_readahead(), and decremented in
>> two places:
>>
>> - filemap_map_pages(): decremented by N for each of N pages
>> successfully mapped via fault-around (pages found already in cache,
>> evidence readahead was useful). Only pages not in the workingset
>> count as hits.
>>
>> - do_async_mmap_readahead(): decremented by 1 when a page with
>> PG_readahead is found in cache.
>>
>> When the counter exceeds MMAP_LOTSAMISS (100), all readahead is
>> disabled, including the targeted VM_EXEC readahead [1] that requests
>> arch-preferred folio orders for contpte mapping.
>>
>> On arm64 with 64K base pages, both decrement paths are inactive:
>>
>> 1. filemap_map_pages() is never called because fault_around_pages
>> (65536 >> PAGE_SHIFT = 1) disables should_fault_around(), which
>> requires fault_around_pages > 1. With only 1 page in the
>> fault-around window, there is nothing "around" to map.
>>
>> 2. do_async_mmap_readahead() never fires for exec mappings because
>> exec readahead sets async_size = 0, so no PG_readahead markers
>> are placed.
>>
>> With no decrements, mmap_miss monotonically increases past
>> MMAP_LOTSAMISS after 100 page faults, disabling all subsequent
>> exec readahead.
>>
>> Fix this by moving the VM_EXEC readahead block above the mmap_miss
>> check. The exec readahead path is targeted. It reads a single folio at
>> the fault location with async_size=0, not speculative prefetch, so the
>> mmap_miss heuristic designed to throttle wasteful speculative readahead
>> should not gate it. The page would need to be faulted in regardless,
>> the only question is at what order.
>>
>> [1] https://lore.kernel.org/all/20250430145920.3748738-6-ryan.roberts@arm.com/
>>
>> Signed-off-by: Usama Arif <usama.arif@linux.dev>
>
> I can see the problem but I'm not sure what you propose is the right fix.
> If you move the VM_EXEC logic earlier, you'll effectively disable
> VM_HUGEPAGE handling for VM_EXEC vmas which I don't think we want. So
> shouldn't we rather disable mmap_miss logic for VM_EXEC vmas like:
>
> if (!(vm_flags & (VM_SEQ_READ | VM_EXEC))) {
> ...
> }
>
That sounds reasonable to me.
--
Cheers,
David