From: Kiryl Shutsemau <kas@kernel.org>

The finish_fault() function uses per-page faults for file folios. This
only occurs for file folios smaller than PMD_SIZE.

The comment suggests that this approach prevents RSS inflation. However,
it only prevents the RSS accounting from being inflated: the folio is
still mapped into the process, and the fact that it is mapped by a
single PTE does not affect memory pressure. Additionally, the kernel's
willingness to map large folios with a PMD when they are large enough
does not support this argument either.

When possible, map large folios in one shot. This reduces the number of
minor page faults and allows for TLB coalescing.

Mapping a large folio at once will also allow the rmap code to mlock it
on add, as it will recognize that the folio is fully mapped and mlocking
it is safe.

Signed-off-by: Kiryl Shutsemau <kas@kernel.org>
---
mm/memory.c | 9 ++-------
1 file changed, 2 insertions(+), 7 deletions(-)
diff --git a/mm/memory.c b/mm/memory.c
index 0ba4f6b71847..812a7d9f6531 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -5386,13 +5386,8 @@ vm_fault_t finish_fault(struct vm_fault *vmf)
nr_pages = folio_nr_pages(folio);
- /*
- * Using per-page fault to maintain the uffd semantics, and same
- * approach also applies to non shmem/tmpfs faults to avoid
- * inflating the RSS of the process.
- */
- if (!vma_is_shmem(vma) || unlikely(userfaultfd_armed(vma)) ||
- unlikely(needs_fallback)) {
+ /* Using per-page fault to maintain the uffd semantics */
+ if (unlikely(userfaultfd_armed(vma)) || unlikely(needs_fallback)) {
nr_pages = 1;
} else if (nr_pages > 1) {
pgoff_t idx = folio_page_idx(folio, page);
--
2.50.1
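For readers less familiar with finish_fault(), the bounds check that gates
one-shot mapping in the remaining "else if (nr_pages > 1)" branch can be
sketched standalone. The snippet below is a simplified illustration only,
not the kernel code itself: the helper name can_map_whole_folio and the
example values are hypothetical, and it assumes 4KiB base pages with an
x86-64 PTRS_PER_PTE of 512.

/*
 * Simplified, self-contained sketch of the check that decides whether a
 * large folio can be mapped in one shot. Hypothetical names and values;
 * the real logic lives in mm/memory.c:finish_fault().
 */
#include <stdbool.h>
#include <stdio.h>

#define PAGE_SHIFT	12
#define PTRS_PER_PTE	512	/* PTE entries per page table on x86-64 */

/*
 * A fault at @addr hit the @idx-th page of a folio with @nr_pages pages.
 * Can the whole folio be mapped with one set_pte_range()-style call?
 */
static bool can_map_whole_folio(unsigned long addr, unsigned long vma_start,
				unsigned long vma_end, unsigned long idx,
				unsigned long nr_pages)
{
	/* Address that the folio's first page would be mapped at. */
	unsigned long start = addr - (idx << PAGE_SHIFT);
	unsigned long end = start + (nr_pages << PAGE_SHIFT);
	/* Index of @addr's PTE within its page table. */
	unsigned long pte_off = (addr >> PAGE_SHIFT) % PTRS_PER_PTE;

	/* The folio must not spill outside the VMA... */
	if (start < vma_start || end > vma_end)
		return false;
	/* ...or outside the single PTE page table covering @addr. */
	if (pte_off < idx || pte_off + (nr_pages - idx) > PTRS_PER_PTE)
		return false;
	return true;
}

int main(void)
{
	/* Hypothetical 64KiB (16-page) folio faulted on its 4th page. */
	unsigned long vma_start = 0x7f0000000000UL;
	unsigned long vma_end   = 0x7f0000100000UL;
	unsigned long addr      = 0x7f0000003000UL;

	printf("map whole folio: %d\n",
	       can_map_whole_folio(addr, vma_start, vma_end, 3, 16));
	return 0;
}

If either check fails, the fault falls back to mapping a single page, which
is what the patch otherwise avoids for large file folios.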
On 18.09.25 13:21, kirill@shutemov.name wrote:
> From: Kiryl Shutsemau <kas@kernel.org>
>
> [...]

I could have sworn that we recently discussed that.

Ah yes, there it is

https://lkml.kernel.org/r/a1c9ba0f-544d-4204-ad3b-60fe1be2ab32@linux.alibaba.com

CCing Baolin as he wanted to look into this.

--
Cheers

David / dhildenb
On Thu, Sep 18, 2025 at 01:30:32PM +0200, David Hildenbrand wrote:
> On 18.09.25 13:21, kirill@shutemov.name wrote:
> > From: Kiryl Shutsemau <kas@kernel.org>
> >
> > [...]
>
> I could have sworn that we recently discussed that.
>
> Ah yes, there it is
>
> https://lkml.kernel.org/r/a1c9ba0f-544d-4204-ad3b-60fe1be2ab32@linux.alibaba.com
>
> CCing Baolin as he wanted to look into this.

Yeah Baolin already did work here [0] so let's get his input first I think! :)

[0]: https://lore.kernel.org/linux-mm/440940e78aeb7430c5cc8b6d2088ae98265b9809.1751599072.git.baolin.wang@linux.alibaba.com/
On 2025/9/18 21:13, Lorenzo Stoakes wrote:
> On Thu, Sep 18, 2025 at 01:30:32PM +0200, David Hildenbrand wrote:
>> On 18.09.25 13:21, kirill@shutemov.name wrote:
>>> From: Kiryl Shutsemau <kas@kernel.org>
>>>
>>> [...]
>>
>> I could have sworn that we recently discussed that.
>>
>> Ah yes, there it is
>>
>> https://lkml.kernel.org/r/a1c9ba0f-544d-4204-ad3b-60fe1be2ab32@linux.alibaba.com
>>
>> CCing Baolin as he wanted to look into this.
>
> Yeah Baolin already did work here [0] so let's get his input first I think! :)
>
> [0]: https://lore.kernel.org/linux-mm/440940e78aeb7430c5cc8b6d2088ae98265b9809.1751599072.git.baolin.wang@linux.alibaba.com/

Thanks for CCing me. Also CCing Hugh.

Hugh previously suggested adding restrictions to the mapping of file folios
(using fault_around_bytes). However, personally, I am not inclined to use
fault_around_bytes to control this, because:

1. This doesn't cause serious write amplification issues.

2. It will inflate the RSS of the process, but does that matter? It seems
   not very important.

3. The default configuration for 'fault_around_bytes' is 65536 (16 pages),
   which is too small for mapping large file folios.

4. We could try adjusting 'fault_around_bytes' to a larger value, but we've
   found in real customer environments that 'fault_around_bytes' can lead
   to more aggressive readahead, impacting performance. So if
   'fault_around_bytes' controls more, it will bring more intersecting
   factors into play.

Therefore, I personally prefer Kiryl's patch (it's what I intended to do,
but I haven't had the time :().
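To make point 3 above concrete: with 4KiB base pages, the default
fault_around_bytes of 65536 covers only 16 pages, while a PMD-sized (2MiB)
folio on x86-64 spans 512. A tiny, purely illustrative calculation (not
kernel code; the 4KiB page size is an assumption):

#include <stdio.h>

#define PAGE_SHIFT 12	/* assumes 4KiB base pages */

int main(void)
{
	unsigned long fault_around_bytes = 65536;	/* current default */

	/* Pages covered by one fault-around window vs. a 2MiB folio. */
	printf("fault-around window: %lu pages\n",
	       fault_around_bytes >> PAGE_SHIFT);
	printf("PMD-sized folio:     %lu pages\n",
	       (2UL << 20) >> PAGE_SHIFT);
	return 0;
}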