From: Jinliang Zheng <alexjlzheng@tencent.com>
Prefaulting the write source buffer incurs an extra userspace access
in the common fast path. Make iomap_write_iter() consistent with
generic_perform_write(): only touch userspace an extra time when
copy_folio_from_iter_atomic() has failed to make progress.
Signed-off-by: Jinliang Zheng <alexjlzheng@tencent.com>
---
Changelog:
v2: update commit message and comment
v1: https://lore.kernel.org/linux-xfs/20250726090955.647131-2-alexjlzheng@tencent.com/
This patch follows commit faa794dd2e17 ("fuse: Move prefaulting out of
hot write path") and commit 665575cff098 ("filemap: move prefaulting out
of hot write path").
---
fs/iomap/buffered-io.c | 31 ++++++++++++++++---------------
1 file changed, 16 insertions(+), 15 deletions(-)
diff --git a/fs/iomap/buffered-io.c b/fs/iomap/buffered-io.c
index fd827398afd2..54e0fa86ea16 100644
--- a/fs/iomap/buffered-io.c
+++ b/fs/iomap/buffered-io.c
@@ -967,21 +967,6 @@ static int iomap_write_iter(struct iomap_iter *iter, struct iov_iter *i,
if (bytes > iomap_length(iter))
bytes = iomap_length(iter);
- /*
- * Bring in the user page that we'll copy from _first_.
- * Otherwise there's a nasty deadlock on copying from the
- * same page as we're writing to, without it being marked
- * up-to-date.
- *
- * For async buffered writes the assumption is that the user
- * page has already been faulted in. This can be optimized by
- * faulting the user page.
- */
- if (unlikely(fault_in_iov_iter_readable(i, bytes) == bytes)) {
- status = -EFAULT;
- break;
- }
-
status = iomap_write_begin(iter, write_ops, &folio, &offset,
&bytes);
if (unlikely(status)) {
@@ -996,6 +981,12 @@ static int iomap_write_iter(struct iomap_iter *iter, struct iov_iter *i,
if (mapping_writably_mapped(mapping))
flush_dcache_folio(folio);
+ /*
+ * Faults here on mmap()s can recurse into arbitrary
+ * filesystem code. Lots of locks are held that can
+ * deadlock. Use an atomic copy to avoid deadlocking
+ * in page fault handling.
+ */
copied = copy_folio_from_iter_atomic(folio, offset, bytes, i);
written = iomap_write_end(iter, bytes, copied, folio) ?
copied : 0;
@@ -1034,6 +1025,16 @@ static int iomap_write_iter(struct iomap_iter *iter, struct iov_iter *i,
bytes = copied;
goto retry;
}
+
+ /*
+ * 'folio' is now unlocked and faults on it can be
+ * handled. Ensure forward progress by trying to
+ * fault it in now.
+ */
+ if (fault_in_iov_iter_readable(i, bytes) == bytes) {
+ status = -EFAULT;
+ break;
+ }
} else {
total_written += written;
iomap_iter_advance(iter, &written);
--
2.49.0
On Thu, Jul 31, 2025 at 12:44:09AM +0800, alexjlzheng@gmail.com wrote:
> From: Jinliang Zheng <alexjlzheng@tencent.com>
>
> Prefaulting the write source buffer incurs an extra userspace access
> in the common fast path. Make iomap_write_iter() consistent with
> generic_perform_write(): only touch userspace an extra time when
> copy_folio_from_iter_atomic() has failed to make progress.

This is probably a good thing to have, but I'm curious if you did see
it making a difference for workloads?

> +		/*
> +		 * Faults here on mmap()s can recurse into arbitrary
> +		 * filesystem code. Lots of locks are held that can
> +		 * deadlock. Use an atomic copy to avoid deadlocking
> +		 * in page fault handling.

We can and should use all 80 characters in a line for comments.

> +		/*
> +		 * 'folio' is now unlocked and faults on it can be
> +		 * handled. Ensure forward progress by trying to
> +		 * fault it in now.
> +		 */

Same here.

I really wish we could find a way to share the core write loop between
at least iomap and generic_perform_write and maybe also the other copy
and pasters. But that's for another time..
On Thu, 31 Jul 2025 07:21:57 -0700, Christoph Hellwig wrote:
> On Thu, Jul 31, 2025 at 12:44:09AM +0800, alexjlzheng@gmail.com wrote:
> > From: Jinliang Zheng <alexjlzheng@tencent.com>
> >
> > Prefaulting the write source buffer incurs an extra userspace access
> > in the common fast path. Make iomap_write_iter() consistent with
> > generic_perform_write(): only touch userspace an extra time when
> > copy_folio_from_iter_atomic() has failed to make progress.
>
> This is probably a good thing to have, but I'm curious if you did see
> it making a difference for workloads?

Yes, there is some improvement. However, I tested it only a few times,
so I can't rule out jitter. Either way, from a design pattern
perspective, this patch is a good thing.

> > +		/*
> > +		 * Faults here on mmap()s can recurse into arbitrary
> > +		 * filesystem code. Lots of locks are held that can
> > +		 * deadlock. Use an atomic copy to avoid deadlocking
> > +		 * in page fault handling.
>
> We can and should use all 80 characters in a line for comments.

I agree. :)

thanks,
Jinliang Zheng

> > +		/*
> > +		 * 'folio' is now unlocked and faults on it can be
> > +		 * handled. Ensure forward progress by trying to
> > +		 * fault it in now.
> > +		 */
>
> Same here.
>
> I really wish we could find a way to share the core write loop between
> at least iomap and generic_perform_write and maybe also the other copy
> and pasters. But that's for another time..