[PATCH v4 11/15] userfaultfd: mfill_atomic(): remove retry logic

Mike Rapoport posted 15 patches 1 day, 21 hours ago
Posted by Mike Rapoport 1 day, 21 hours ago
From: "Mike Rapoport (Microsoft)" <rppt@kernel.org>

Since __mfill_atomic_pte() handles the retry for both anonymous and shmem,
there is no need to retry copying the data from userspace in the loop
in mfill_atomic().

Drop the retry logic from mfill_atomic().

Signed-off-by: Mike Rapoport (Microsoft) <rppt@kernel.org>
---
 mm/userfaultfd.c | 24 ------------------------
 1 file changed, 24 deletions(-)

diff --git a/mm/userfaultfd.c b/mm/userfaultfd.c
index e672a9e45d0c..935a3f6ebeed 100644
--- a/mm/userfaultfd.c
+++ b/mm/userfaultfd.c
@@ -29,7 +29,6 @@ struct mfill_state {
 	struct vm_area_struct *vma;
 	unsigned long src_addr;
 	unsigned long dst_addr;
-	struct folio *folio;
 	pmd_t *pmd;
 };
 
@@ -899,7 +898,6 @@ static __always_inline ssize_t mfill_atomic(struct userfaultfd_ctx *ctx,
 	VM_WARN_ON_ONCE(src_start + len <= src_start);
 	VM_WARN_ON_ONCE(dst_start + len <= dst_start);
 
-retry:
 	err = mfill_get_vma(&state);
 	if (err)
 		goto out;
@@ -926,26 +924,6 @@ static __always_inline ssize_t mfill_atomic(struct userfaultfd_ctx *ctx,
 		err = mfill_atomic_pte(&state);
 		cond_resched();
 
-		if (unlikely(err == -ENOENT)) {
-			void *kaddr;
-
-			mfill_put_vma(&state);
-			VM_WARN_ON_ONCE(!state.folio);
-
-			kaddr = kmap_local_folio(state.folio, 0);
-			err = copy_from_user(kaddr,
-					     (const void __user *)state.src_addr,
-					     PAGE_SIZE);
-			kunmap_local(kaddr);
-			if (unlikely(err)) {
-				err = -EFAULT;
-				goto out;
-			}
-			flush_dcache_folio(state.folio);
-			goto retry;
-		} else
-			VM_WARN_ON_ONCE(state.folio);
-
 		if (!err) {
 			state.dst_addr += PAGE_SIZE;
 			state.src_addr += PAGE_SIZE;
@@ -960,8 +938,6 @@ static __always_inline ssize_t mfill_atomic(struct userfaultfd_ctx *ctx,
 
 	mfill_put_vma(&state);
 out:
-	if (state.folio)
-		folio_put(state.folio);
 	VM_WARN_ON_ONCE(copied < 0);
 	VM_WARN_ON_ONCE(err > 0);
 	VM_WARN_ON_ONCE(!copied && !err);
-- 
2.53.0
Re: [PATCH v4 11/15] userfaultfd: mfill_atomic(): remove retry logic
Posted by Mike Rapoport 1 day, 12 hours ago
On Thu, Apr 02, 2026 at 07:11:52AM +0300, Mike Rapoport wrote:
> From: "Mike Rapoport (Microsoft)" <rppt@kernel.org>
> 
> Since __mfill_atomic_pte() handles the retry for both anonymous and shmem,
> there is no need to retry copying the data from userspace in the loop
> in mfill_atomic().
> 
> Drop the retry logic from mfill_atomic().
> 
> Signed-off-by: Mike Rapoport (Microsoft) <rppt@kernel.org>
> ---
>  mm/userfaultfd.c | 24 ------------------------
>  1 file changed, 24 deletions(-)

After discussion with David Carlier about a potential replacement of the VMA
in mfill_copy_folio_retry(), I looked again at the code and realized that
after all the rebases I didn't remove the bit that temporarily prevented
returning ENOENT from __mfill_atomic_pte().

Andrew, can you please fold this into "userfaultfd: mfill_atomic(): remove
retry logic"?

For a change it applies cleanly :)

commit 5173c8f4fd32f314907b3804217ef57d4e3a2220
Author: Mike Rapoport (Microsoft) <rppt@kernel.org>
Date:   Thu Apr 2 16:38:39 2026 +0300

    userfaultfd: remove safety mesaure of not returning ENOENT from _copy
    
    Signed-off-by: Mike Rapoport (Microsoft) <rppt@kernel.org>

diff --git a/mm/userfaultfd.c b/mm/userfaultfd.c
index 935a3f6ebeed..dfd7094b1c40 100644
--- a/mm/userfaultfd.c
+++ b/mm/userfaultfd.c
@@ -530,9 +530,6 @@ static int __mfill_atomic_pte(struct mfill_state *state,
 		ops->filemap_remove(folio, state->vma);
 err_folio_put:
 	folio_put(folio);
-	/* Don't return -ENOENT so that our caller won't retry */
-	if (ret == -ENOENT)
-		ret = -EFAULT;
 	return ret;
 }
 
 

-- 
Sincerely yours,
Mike.
Re: [PATCH v4 11/15] userfaultfd: mfill_atomic(): remove retry logic
Posted by Andrew Morton 1 day, 7 hours ago
On Thu, 2 Apr 2026 16:47:28 +0300 Mike Rapoport <rppt@kernel.org> wrote:

> > Drop the retry logic from mfill_atomic().
> > 
> > Signed-off-by: Mike Rapoport (Microsoft) <rppt@kernel.org>
> > ---
> >  mm/userfaultfd.c | 24 ------------------------
> >  1 file changed, 24 deletions(-)
> 
> After discussion with David Carlier about potential replacement of VMA in
> mfill_copy_folio_retry(), I looked again in the code and realized that
> after all the rebases I didn't remove the bit that temporarily prevented
> returning ENOENT from __mfill_atomic_pte().
> 
> Andrew, can you please fold this into "userfaultfd: mfill_atomic(): remove
> retry logic"?
> 
> For a change it applies cleanly :)

done.

> commit 5173c8f4fd32f314907b3804217ef57d4e3a2220
> Author: Mike Rapoport (Microsoft) <rppt@kernel.org>
> Date:   Thu Apr 2 16:38:39 2026 +0300
> 
>     userfaultfd: remove safety mesaure of not returning ENOENT from _copy

s/mesaure/measure/

Was "_copy" intended?

>     
>     Signed-off-by: Mike Rapoport (Microsoft) <rppt@kernel.org>

I instafolded this.  End result:


From: "Mike Rapoport (Microsoft)" <rppt@kernel.org>
Subject: userfaultfd: mfill_atomic(): remove retry logic
Date: Thu, 2 Apr 2026 07:11:52 +0300

Since __mfill_atomic_pte() handles the retry for both anonymous and shmem,
there is no need to retry copying the data from userspace in the loop
in mfill_atomic().

Drop the retry logic from mfill_atomic().

[rppt@kernel.org: remove safety measure of not returning ENOENT from _copy]
  Link: https://lkml.kernel.org/r/ac5zcDUY8CFHr6Lw@kernel.org
Link: https://lkml.kernel.org/r/20260402041156.1377214-12-rppt@kernel.org
Signed-off-by: Mike Rapoport (Microsoft) <rppt@kernel.org>
Cc: Andrea Arcangeli <aarcange@redhat.com>
Cc: Andrei Vagin <avagin@google.com>
Cc: Axel Rasmussen <axelrasmussen@google.com>
Cc: Baolin Wang <baolin.wang@linux.alibaba.com>
Cc: David Hildenbrand (Arm) <david@kernel.org>
Cc: Harry Yoo <harry.yoo@oracle.com>
Cc: Harry Yoo (Oracle) <harry@kernel.org>
Cc: Hugh Dickins <hughd@google.com>
Cc: James Houghton <jthoughton@google.com>
Cc: Liam Howlett <liam.howlett@oracle.com>
Cc: Lorenzo Stoakes (Oracle) <ljs@kernel.org>
Cc: Matthew Wilcox (Oracle) <willy@infradead.org>
Cc: Michal Hocko <mhocko@suse.com>
Cc: Muchun Song <muchun.song@linux.dev>
Cc: Nikita Kalyazin <kalyazin@amazon.com>
Cc: Oscar Salvador <osalvador@suse.de>
Cc: Paolo Bonzini <pbonzini@redhat.com>
Cc: Peter Xu <peterx@redhat.com>
Cc: Sean Christopherson <seanjc@google.com>
Cc: Shuah Khan <shuah@kernel.org>
Cc: Suren Baghdasaryan <surenb@google.com>
Cc: Vlastimil Babka <vbabka@suse.cz>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
---

 mm/userfaultfd.c |   27 ---------------------------
 1 file changed, 27 deletions(-)

--- a/mm/userfaultfd.c~userfaultfd-mfill_atomic-remove-retry-logic
+++ a/mm/userfaultfd.c
@@ -29,7 +29,6 @@ struct mfill_state {
 	struct vm_area_struct *vma;
 	unsigned long src_addr;
 	unsigned long dst_addr;
-	struct folio *folio;
 	pmd_t *pmd;
 };
 
@@ -531,9 +530,6 @@ err_filemap_remove:
 		ops->filemap_remove(folio, state->vma);
 err_folio_put:
 	folio_put(folio);
-	/* Don't return -ENOENT so that our caller won't retry */
-	if (ret == -ENOENT)
-		ret = -EFAULT;
 	return ret;
 }
 
@@ -899,7 +895,6 @@ static __always_inline ssize_t mfill_ato
 	VM_WARN_ON_ONCE(src_start + len <= src_start);
 	VM_WARN_ON_ONCE(dst_start + len <= dst_start);
 
-retry:
 	err = mfill_get_vma(&state);
 	if (err)
 		goto out;
@@ -926,26 +921,6 @@ retry:
 		err = mfill_atomic_pte(&state);
 		cond_resched();
 
-		if (unlikely(err == -ENOENT)) {
-			void *kaddr;
-
-			mfill_put_vma(&state);
-			VM_WARN_ON_ONCE(!state.folio);
-
-			kaddr = kmap_local_folio(state.folio, 0);
-			err = copy_from_user(kaddr,
-					     (const void __user *)state.src_addr,
-					     PAGE_SIZE);
-			kunmap_local(kaddr);
-			if (unlikely(err)) {
-				err = -EFAULT;
-				goto out;
-			}
-			flush_dcache_folio(state.folio);
-			goto retry;
-		} else
-			VM_WARN_ON_ONCE(state.folio);
-
 		if (!err) {
 			state.dst_addr += PAGE_SIZE;
 			state.src_addr += PAGE_SIZE;
@@ -960,8 +935,6 @@ retry:
 
 	mfill_put_vma(&state);
 out:
-	if (state.folio)
-		folio_put(state.folio);
 	VM_WARN_ON_ONCE(copied < 0);
 	VM_WARN_ON_ONCE(err > 0);
 	VM_WARN_ON_ONCE(!copied && !err);
_