[PATCHSET v8 0/12] Uncached buffered IO
Posted by Jens Axboe 11 months, 4 weeks ago
Hi,

5 years ago I posted patches adding support for RWF_UNCACHED, as a way
to do buffered IO that isn't page cache persistent. The approach back
then was to have private pages for IO, and then get rid of them once IO
was done. But that then runs into all the issues that O_DIRECT has, in
terms of synchronizing with the page cache.

So here's a new approach to the same concept, but using the page cache
as synchronization. Due to excessive bike shedding on the naming, this
is now named RWF_DONTCACHE, and is less special in that it's just page
cache IO, except it prunes the ranges once IO is completed.

Why do this, you may ask? The tldr is that device speeds are only
getting faster, while reclaim is not. Doing normal buffered IO can be
very unpredictable, and suck up a lot of resources on the reclaim side.
This leads people to use O_DIRECT as a work-around, which has its own
set of restrictions in terms of size, offset, and length of IO. It's
also inherently synchronous, and now you need async IO as well. While
the latter isn't necessarily a big problem as we have good options
available there, it also should not be a requirement when all you want
to do is read or write some data without caching.

Even on desktop type systems, a normal NVMe device can fill the entire
page cache in seconds. On the big system I used for testing, there's a
lot more RAM, but also a lot more devices. As can be seen in some of the
results in the following patches, you can still fill RAM in seconds even
when there's 1TB of it. Hence this problem isn't solely a "big
hyperscaler system" issue, it's common across the board.

Common to both reads and writes with RWF_DONTCACHE is that they use the
page cache for IO. Reads work just like a normal buffered read would,
with the only exception being that the touched ranges will get pruned
after data has been copied. For writes, the ranges will get writeback
kicked off before the syscall returns, and then writeback completion
will prune the range. Hence writes aren't synchronous, and it's easy to
pipeline writes using RWF_DONTCACHE. Folios that aren't instantiated by
RWF_DONTCACHE IO are left untouched. This means that uncached IO
will take advantage of the page cache for uptodate data, but not leave
anything it instantiated/created in cache.
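
As a rough illustration of the read side semantics, here's a small
userspace sketch (not part of this series; RWF_DONTCACHE is defined
locally in case the installed headers don't carry it yet, and a 4K page
size is assumed). On a previously cold file, the residency check should
come back (close to) empty after the uncached read:

#define _GNU_SOURCE
#include <fcntl.h>
#include <stdio.h>
#include <sys/mman.h>
#include <sys/uio.h>
#include <unistd.h>

#ifndef RWF_DONTCACHE
#define RWF_DONTCACHE	0x00000080	/* matches the uapi addition */
#endif

int main(int argc, char *argv[])
{
	static char buf[64 << 10];
	struct iovec iov = { .iov_base = buf, .iov_len = sizeof(buf) };
	unsigned char vec[sizeof(buf) / 4096];
	size_t resident = 0;
	ssize_t ret;
	int fd;

	if (argc < 2 || (fd = open(argv[1], O_RDONLY)) < 0)
		return 1;

	/* Buffered read; folios it instantiates are pruned after the copy */
	ret = preadv2(fd, &iov, 1, 0, RWF_DONTCACHE);
	if (ret < 0) {
		perror("preadv2");
		return 1;
	}

	/* Check page cache residency via an unpopulated shared mapping */
	void *map = mmap(NULL, sizeof(buf), PROT_READ, MAP_SHARED, fd, 0);
	if (map == MAP_FAILED)
		return 1;
	mincore(map, sizeof(buf), vec);
	for (size_t i = 0; i < sizeof(vec); i++)
		resident += vec[i] & 1;

	printf("read %zd bytes, %zu of %zu pages resident\n",
	       ret, resident, sizeof(vec));
	return 0;
}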

File systems need to support this. This patchset adds support for the
generic read path, which covers file systems like ext4. Patches exist to
add support for iomap/XFS and btrfs as well, which sit on top of this
series. If RWF_DONTCACHE IO is attempted on a file system that doesn't
support it, -EOPNOTSUPP is returned. Hence the user can rely on it
either working as designed, or flagging an error if that's not the
case. The intent here is to give the application a sensible fallback
path - eg, it may fall back to O_DIRECT if appropriate, or just live
with the fact that uncached IO isn't available and do normal buffered
IO.
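
As a sketch of what that fallback can look like (RWF_DONTCACHE defined
locally for older headers; error handling trimmed):

#define _GNU_SOURCE
#include <errno.h>
#include <sys/types.h>
#include <sys/uio.h>

#ifndef RWF_DONTCACHE
#define RWF_DONTCACHE	0x00000080
#endif

static ssize_t write_dontcache(int fd, const void *buf, size_t len, off_t off)
{
	struct iovec iov = { .iov_base = (void *)buf, .iov_len = len };
	ssize_t ret;

	ret = pwritev2(fd, &iov, 1, off, RWF_DONTCACHE);
	if (ret < 0 && errno == EOPNOTSUPP) {
		/*
		 * No DONTCACHE support on this file system (or kernel),
		 * fall back to a plain buffered write.
		 */
		ret = pwritev2(fd, &iov, 1, off, 0);
	}
	return ret;
}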

Adding "support" to other file systems should be trivial, most of the
time just a one-liner adding FOP_DONTCACHE to the fop_flags in the
file_operations struct, if the file system is using either iomap or
the generic filemap helpers for reading and writing.
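
Purely as an illustration of what that one-liner looks like (the names
below are placeholders, not taken from an actual file system):

/* Opting a file system into uncached IO via its file_operations */
const struct file_operations example_file_operations = {
	.read_iter	= example_file_read_iter,
	.write_iter	= example_file_write_iter,
	/* ... other methods ... */
	.fop_flags	= FOP_DONTCACHE,	/* OR'ed into any existing flags */
};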

Performance results for reads are in patch 8, and you can find the write
side results in the XFS patch adding support for DONTCACHE writes:

https://git.kernel.dk/cgit/linux/commit/?h=buffered-uncached-fs.10&id=257e92de795fdff7d7e256501e024fac6da6a7f4

with the tldr being that I see about a 65% improvement in performance
for both, with fully predictable IO times. CPU reduction is substantial
as well, with no kswapd activity at all for reclaim when using
uncached IO.

Using it from applications is trivial - just set RWF_DONTCACHE for the
read or write, using pwritev2(2) or preadv2(2). For io_uring, same
thing, just set RWF_DONTCACHE in sqe->rw_flags for a buffered read/write
operation. And that's it.
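
For example (a sketch, with RWF_DONTCACHE defined locally in case the
headers are older, and liburing assumed for the io_uring side):

#define _GNU_SOURCE
#include <liburing.h>
#include <sys/types.h>
#include <sys/uio.h>

#ifndef RWF_DONTCACHE
#define RWF_DONTCACHE	0x00000080
#endif

/* Syscall path: buffered read that won't persist new folios in cache */
static ssize_t read_dontcache(int fd, void *buf, size_t len, off_t off)
{
	struct iovec iov = { .iov_base = buf, .iov_len = len };

	return preadv2(fd, &iov, 1, off, RWF_DONTCACHE);
}

/* io_uring path: same flag, set in sqe->rw_flags on a normal read */
static void queue_read_dontcache(struct io_uring *ring, int fd, void *buf,
				 unsigned len, off_t off)
{
	struct io_uring_sqe *sqe = io_uring_get_sqe(ring);

	io_uring_prep_read(sqe, fd, buf, len, off);
	sqe->rw_flags = RWF_DONTCACHE;
	/* io_uring_submit() and completion handling as usual */
}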

Patches 1..7 are just prep patches, and should have no functional
changes at all. Patch 8 adds support for the filemap path for
RWF_DONTCACHE reads, and patches 9..12 are prep patches for supporting
uncached writes. In the branch mentioned below, there are then patches
to adopt uncached reads and writes for xfs, btrfs, and ext4. The latter
currently relies on a bit of a hack for passing whether this is an
uncached write or not through ->write_begin(), which can hopefully go
away once ext4 adopts iomap for buffered writes. I say this is a hack as
it's not the prettiest way to do it, but it is fully solid and will work
just fine.

Passes full xfstests and fsx overnight runs with no issues observed. That
includes the VM running the tests itself using RWF_DONTCACHE on the
host. I'll post fsstress and fsx patches for RWF_DONTCACHE separately.
As far as I'm concerned, no further work needs doing here.

And git tree for the patches is here:

https://git.kernel.dk/cgit/linux/log/?h=buffered-uncached.10

with the file system patches on top adding support for xfs/btrfs/ext4
here:

https://git.kernel.dk/cgit/linux/log/?h=buffered-uncached-fs.10

 include/linux/fs.h             |  21 ++++++-
 include/linux/page-flags.h     |   5 ++
 include/linux/pagemap.h        |   3 +
 include/trace/events/mmflags.h |   3 +-
 include/uapi/linux/fs.h        |   6 +-
 mm/filemap.c                   | 102 ++++++++++++++++++++++++++++-----
 mm/internal.h                  |   2 +
 mm/readahead.c                 |  22 +++++--
 mm/swap.c                      |   2 +
 mm/truncate.c                  |  53 +++++++++--------
 10 files changed, 173 insertions(+), 46 deletions(-)

Since v7
- Rename filemap_uncached_read() to filemap_end_dropbehind_read()
- Rename folio_end_dropbehind() to folio_end_dropbehind_write()
- Make the "mm: add FGP_DONTCACHE folio creation flag" patch part of
  the base patches series, to avoid dependencies with btrfs/xfs/iomap
- Remove now dead IOMAP_F_DONTCACHE define and setting on xfs/iomap
- Re-instate mistakenly deleted VM_BUG_ON_FOLIO() in
  invalidate_inode_pages2_range()
- Add reviewed-by's
- Add separate fs patch branch that sits on top of the core branch
- Rebase on current -git master

-- 
Jens Axboe
Re: [PATCHSET v8 0/12] Uncached buffered IO
Posted by Andrew Morton 11 months, 1 week ago
On Fri, 20 Dec 2024 08:47:38 -0700 Jens Axboe <axboe@kernel.dk> wrote:

> So here's a new approach to the same concept, but using the page cache
> as synchronization. Due to excessive bike shedding on the naming, this
> is now named RWF_DONTCACHE, and is less special in that it's just page
> cache IO, except it prunes the ranges once IO is completed.
> 
> Why do this, you may ask? The tldr is that device speeds are only
> getting faster, while reclaim is not. Doing normal buffered IO can be
> very unpredictable, and suck up a lot of resources on the reclaim side.
> This leads people to use O_DIRECT as a work-around, which has its own
> set of restrictions in terms of size, offset, and length of IO. It's
> also inherently synchronous, and now you need async IO as well. While
> the latter isn't necessarily a big problem as we have good options
> available there, it also should not be a requirement when all you want
> to do is read or write some data without caching.

Of course, we're doing something here which userspace could itself do:
drop the pagecache after reading it (with appropriate chunk sizing) and
for writes, sync the written area then invalidate it.  Possible
added benefits from using separate threads for this.

I suggest that diligence requires that we at least justify an in-kernel
approach at this time, please.

And there's a possible middle-ground implementation where the kernel
itself kicks off threads to do the drop-behind just before the read or
write syscall returns, which will probably be simpler.  Can we please
describe why this also isn't acceptable?


Also, it seems wrong for a read(RWF_DONTCACHE) to drop cache if it was
already present.  Because it was presumably present for a reason.  Does
this implementation already take care of this?  To make an application
which does read(/etc/passwd, RWF_DONTCACHE) less annoying?


Also, consuming a new page flag isn't a minor thing.  It would be nice
to see some justification around this, and some description of how many
we have left.
Re: [PATCHSET v8 0/12] Uncached buffered IO
Posted by Jens Axboe 11 months ago
(sorry missed this reply!)

On 1/7/25 8:35 PM, Andrew Morton wrote:
> On Fri, 20 Dec 2024 08:47:38 -0700 Jens Axboe <axboe@kernel.dk> wrote:
> 
>> So here's a new approach to the same concept, but using the page cache
>> as synchronization. Due to excessive bike shedding on the naming, this
>> is now named RWF_DONTCACHE, and is less special in that it's just page
>> cache IO, except it prunes the ranges once IO is completed.
>>
>> Why do this, you may ask? The tldr is that device speeds are only
>> getting faster, while reclaim is not. Doing normal buffered IO can be
>> very unpredictable, and suck up a lot of resources on the reclaim side.
>> This leads people to use O_DIRECT as a work-around, which has its own
>> set of restrictions in terms of size, offset, and length of IO. It's
>> also inherently synchronous, and now you need async IO as well. While
>> the latter isn't necessarily a big problem as we have good options
>> available there, it also should not be a requirement when all you want
>> to do is read or write some data without caching.
> 
> Of course, we're doing something here which userspace could itself do:
> drop the pagecache after reading it (with appropriate chunk sizing) and
> for writes, sync the written area then invalidate it.  Possible
> added benefits from using separate threads for this.
> 
> I suggest that diligence requires that we at least justify an in-kernel
> approach at this time, please.

Conceptually yes. But you'd end up doing extra work to do it. Some of
that not so expensive, like system calls, and others more so, like LRU
manipulation. Outside of that, I do think it makes sense to expose as a
generic thing, rather than require applications to kick
writeback manually, reclaim manually, etc.
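
For reference, the do-it-yourself variant being discussed is roughly the
below (a sketch; chunk sizing and flags are up to the application). Note
that the range has to be written back and clean before the fadvise can
actually drop it:

#define _GNU_SOURCE
#include <fcntl.h>
#include <unistd.h>

static ssize_t write_and_drop(int fd, const void *buf, size_t len, off_t off)
{
	ssize_t ret = pwrite(fd, buf, len, off);

	if (ret > 0) {
		/* Kick writeback for the range and wait for it to finish */
		sync_file_range(fd, off, ret, SYNC_FILE_RANGE_WAIT_BEFORE |
				SYNC_FILE_RANGE_WRITE |
				SYNC_FILE_RANGE_WAIT_AFTER);
		/* Range is now clean, so this can actually invalidate it */
		posix_fadvise(fd, off, ret, POSIX_FADV_DONTNEED);
	}
	return ret;
}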

> And there's a possible middle-ground implementation where the kernel
> itself kicks off threads to do the drop-behind just before the read or
> write syscall returns, which will probably be simpler.  Can we please
> describe why this also isn't acceptable?

That's more of an implementation detail. I didn't test anything like
that, though we surely could. If it's better, there's no reason why it
can't just be changed to do that. My gut tells me you want the task/CPU
that just did the page cache additions to do the pruning too, as that should
be more efficient than having a kworker or similar do it.

> Also, it seems wrong for a read(RWF_DONTCACHE) to drop cache if it was
> already present.  Because it was presumably present for a reason.  Does
> this implementation already take care of this?  To make an application
> which does read(/etc/passwd, RWF_DONTCACHE) less annoying?

The implementation doesn't drop pages that were already present, only
pages that got created/added to the page cache for the operation. So
that part should already work as you expect.

> Also, consuming a new page flag isn't a minor thing.  It would be nice
> to see some justification around this, and some description of how many
> we have left.

For sure, though various discussions on this already occurred and Kirill
posted patches for unifying some of this already. It's not something I
wanted to tackle, as I think that should be left to people more familiar
with the page/folio flags and their (sometimes odd) interactions.

-- 
Jens Axboe
Re: [PATCHSET v8 0/12] Uncached buffered IO
Posted by Andrew Morton 11 months ago
On Mon, 13 Jan 2025 08:34:18 -0700 Jens Axboe <axboe@kernel.dk> wrote:

> > 
>
> ...
>
> > Of course, we're doing something here which userspace could itself do:
> > drop the pagecache after reading it (with appropriate chunk sizing) and
> > for writes, sync the written area then invalidate it.  Possible
> > added benefits from using separate threads for this.
> > 
> > I suggest that diligence requires that we at least justify an in-kernel
> > approach at this time, please.
> 
> Conceptually yes. But you'd end up doing extra work to do it. Some of
> that not so expensive, like system calls, and others more so, like LRU
> manipulation. Outside of that, I do think it makes sense to expose as a
> generic thing, rather than require applications to kick
> writeback manually, reclaim manually, etc.
> 
> > And there's a possible middle-ground implementation where the kernel
> > itself kicks off threads to do the drop-behind just before the read or
> > write syscall returns, which will probably be simpler.  Can we please
> > describe why this also isn't acceptable?
> 
> That's more of an implementation detail. I didn't test anything like
> that, though we surely could. If it's better, there's no reason why it
> can't just be changed to do that. My gut tells me you want the task/CPU
> that just did the page cache additions to do the pruning too, as that should
> be more efficient than having a kworker or similar do it.

Well, gut might be wrong ;)

There may be benefit in using different CPUs to perform the dropbehind,
rather than making the read() caller do this synchronously.

If I understand correctly, the write() dropbehind is performed at
interrupt (write completion) time so that's already async.

> > Also, it seems wrong for a read(RWF_DONTCACHE) to drop cache if it was
> > already present.  Because it was presumably present for a reason.  Does
> > this implementation already take care of this?  To make an application
> > which does read(/etc/passwd, RWF_DONTCACHE) less annoying?
> 
> The implementation doesn't drop pages that were already present, only
> pages that got created/added to the page cache for the operation. So
> that part should already work as you expect.
> 
> > Also, consuming a new page flag isn't a minor thing.  It would be nice
> > to see some justification around this, and some description of how many
> > we have left.
> 
> For sure, though various discussions on this already occurred and Kirill
> posted patches for unifying some of this already. It's not something I
> wanted to tackle, as I think that should be left to people more familiar
> with the page/folio flags and their (sometimes odd) interactions.

Matthew & Kirill: are you OK with merging this as-is and then
revisiting the page-flag consumption at a later time?
Re: [PATCHSET v8 0/12] Uncached buffered IO
Posted by Kirill A. Shutemov 11 months ago
On Mon, Jan 13, 2025 at 04:46:50PM -0800, Andrew Morton wrote:
> > > Also, consuming a new page flag isn't a minor thing.  It would be nice
> > > to see some justification around this, and some description of how many
> > > we have left.
> > 
> > For sure, though various discussions on this already occurred and Kirill
> > posted patches for unifying some of this already. It's not something I
> > wanted to tackle, as I think that should be left to people more familiar
> > with the page/folio flags and their (sometimes odd) interactions.
> 
> Matthew & Kirill: are you OK with merging this as-is and then
> revisiting the page-flag consumption at a later time?

I have tried to find a way to avoid adding a new flag bit, but I have not
found one. I am okay with merging it as it is.

-- 
  Kiryl Shutsemau / Kirill A. Shutemov
Re: [PATCHSET v8 0/12] Uncached buffered IO
Posted by Jens Axboe 11 months ago
On 1/13/25 5:46 PM, Andrew Morton wrote:
> On Mon, 13 Jan 2025 08:34:18 -0700 Jens Axboe <axboe@kernel.dk> wrote:
> 
>>>
>>
>> ...
>>
>>> Of course, we're doing something here which userspace could itself do:
>>> drop the pagecache after reading it (with appropriate chunk sizing) and
>>> for writes, sync the written area then invalidate it.  Possible
>>> added benefits from using separate threads for this.
>>>
>>> I suggest that diligence requires that we at least justify an in-kernel
>>> approach at this time, please.
>>
>> Conceptually yes. But you'd end up doing extra work to do it. Some of
>> that not so expensive, like system calls, and others more so, like LRU
>> manipulation. Outside of that, I do think it makes sense to expose as a
>> generic thing, rather than require applications to kick
>> writeback manually, reclaim manually, etc.
>>
>>> And there's a possible middle-ground implementation where the kernel
>>> itself kicks off threads to do the drop-behind just before the read or
>>> write syscall returns, which will probably be simpler.  Can we please
>>> describe why this also isn't acceptable?
>>
>> That's more of an implementation detail. I didn't test anything like
>> that, though we surely could. If it's better, there's no reason why it
>> can't just be changed to do that. My gut tells me you want the task/CPU
>> that just did the page cache additions to do the pruning too, as that should
>> be more efficient than having a kworker or similar do it.
> 
> Well, gut might be wrong ;)

A gut this big is rarely wrong ;-)

> There may be benefit in using different CPUs to perform the dropbehind,
> rather than making the read() caller do this synchronously.
> 
> If I understand correctly, the write() dropbehind is performed at
> interrupt (write completion) time so that's already async.

It is, but we could actually get rid of that, at least when called via
io_uring. From the testing I've done, doing it inline is definitely
superior. Though it will depend on whether you care about overall
efficiency or just the sheer speed/overhead of the read/write itself.

-- 
Jens Axboe