When reading immutable, non-compressed files with large folios enabled,
I was able to reproduce readahead hangs while reading sparse files with
holes and heavily fragmented files. The problems were caused by a few
corner cases in the large-folio read loop:
- f2fs_folio_state could be observed with an uninitialized
  read_pages_pending field.
- subpage accounting could become inconsistent with BIO completion,
  leading to folios being prematurely unlocked/marked uptodate.
- NULL_ADDR/NEW_ADDR blocks can carry F2FS_MAP_MAPPED, causing the
  large-folio read path to treat hole blocks as mapped and to account
  them in read_pages_pending.
- in readahead, a folio that never had any subpage queued to a BIO
  would not be seen by f2fs_finish_read_bio(), leaving it locked.
- the zeroing path did not advance index/offset before continuing.
This patch series fixes the above issues in f2fs_read_data_large_folio()
introduced by commit 05e65c14ea59 ("f2fs: support large folio for
immutable non-compressed case").
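
For illustration, a minimal sketch of the hole-block check that the series
adds as f2fs_block_needs_zeroing(); the signature and body shown here are
only a sketch and may differ from the actual patch:

	/*
	 * Sketch only (assumes fs/f2fs/f2fs.h context): a hole (NULL_ADDR) or a
	 * preallocated-but-unwritten block (NEW_ADDR) carries no on-disk data,
	 * so the corresponding folio range must be zeroed instead of being
	 * queued to a read bio and counted in read_pages_pending.
	 */
	static bool f2fs_block_needs_zeroing(struct f2fs_map_blocks *map,
					     block_t blkaddr)
	{
		if (!(map->m_flags & F2FS_MAP_MAPPED))
			return true;
		return blkaddr == NULL_ADDR || blkaddr == NEW_ADDR;
	}

Call sites in the large-folio read loop can then zero the range and skip the
read_pages_pending accounting whenever this returns true.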
Testing
-------
All patches pass scripts/checkpatch.pl.
I tested the basic large-folio immutable read case described in the
original thread (create a large file, set immutable, drop caches to
reload the inode, then read it), and additionally verified:
- sparse file
- heavily fragmented file
In all cases, reads completed without hangs and data was verified against
the expected contents.
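
For reference, a rough standalone reproducer for the basic case above; this
is a sketch rather than the exact program used, the mount point and sizes are
illustrative, and it needs root for the immutable flag and drop_caches:

	#include <fcntl.h>
	#include <stdio.h>
	#include <stdlib.h>
	#include <string.h>
	#include <unistd.h>
	#include <sys/ioctl.h>
	#include <linux/fs.h>	/* FS_IOC_GETFLAGS/SETFLAGS, FS_IMMUTABLE_FL */

	static void die(const char *msg) { perror(msg); exit(1); }

	int main(void)
	{
		const char *path = "/mnt/f2fs/bigfile";	/* assumed f2fs mount point */
		char buf[4096];
		int fd, attr, n, i;

		/* 1. create a 64MB file filled with a known pattern */
		fd = open(path, O_CREAT | O_TRUNC | O_WRONLY, 0644);
		if (fd < 0)
			die("open");
		memset(buf, 0xab, sizeof(buf));
		for (i = 0; i < 16384; i++)
			if (write(fd, buf, sizeof(buf)) != sizeof(buf))
				die("write");
		fsync(fd);

		/* 2. set the immutable flag so reads take the large-folio path;
		 * clear FS_IMMUTABLE_FL afterwards to delete the file */
		if (ioctl(fd, FS_IOC_GETFLAGS, &attr) < 0)
			die("FS_IOC_GETFLAGS");
		attr |= FS_IMMUTABLE_FL;
		if (ioctl(fd, FS_IOC_SETFLAGS, &attr) < 0)
			die("FS_IOC_SETFLAGS");
		close(fd);

		/* 3. drop caches so the inode is reloaded from disk */
		fd = open("/proc/sys/vm/drop_caches", O_WRONLY);
		if (fd < 0 || write(fd, "3", 1) != 1)
			die("drop_caches");
		close(fd);

		/* 4. read the file back and verify the pattern */
		fd = open(path, O_RDONLY);
		if (fd < 0)
			die("reopen");
		while ((n = read(fd, buf, sizeof(buf))) > 0)
			for (i = 0; i < n; i++)
				if ((unsigned char)buf[i] != 0xab) {
					fprintf(stderr, "data mismatch\n");
					return 1;
				}
		close(fd);
		puts("read completed, data verified");
		return 0;
	}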
Nanzhe Zhao (5):
f2fs: Zero f2fs_folio_state on allocation
f2fs: Accounting large folio subpages before bio submission
f2fs: add f2fs_block_needs_zeroing() to handle hole blocks
f2fs: add 'folio_in_bio' to handle readahead folios with no BIO
submission
f2fs: advance index and offset after zeroing in large folio read
fs/f2fs/data.c | 54 +++++++++++++++++++++++++++++++++-----------------
1 file changed, 36 insertions(+), 18 deletions(-)
base-commit: 48b5439e04ddf4508ecaf588219012dc81d947c0
--
2.34.1
Hi Nanzhe,
fyi - I applied the beginning two patches first.
Thanks,
On 01/05, Nanzhe Zhao wrote:
> When reading immutable, non-compressed files with large folios enabled,
> I was able to reproduce readahead hangs while reading sparse files with
> holes and heavily fragmented files. The problems were caused by a few
> corner cases in the large-folio read loop:
>
> - f2fs_folio_state could be observed with uninitialized field
> read_pages_pending
> - subpage accounting could become inconsistent with BIO completion,
> leading to folios being prematurely unlocked/marked uptodate.
> - NULL_ADDR/NEW_ADDR blocks can carry F2FS_MAP_MAPPED, causing the
> large-folio read path to treat hole blocks as mapped and to account
> them in read_pages_pending.
> - in readahead, a folio that never had any subpage queued to a BIO
> would not be seen by f2fs_finish_read_bio(), leaving it locked.
> - the zeroing path did not advance index/offset before continuing.
>
> This patch series fixes the above issues in f2fs_read_data_large_folio()
> introduced by commit 05e65c14ea59 ("f2fs: support large folio for
> immutable non-compressed case").
>
> Testing
> -------
>
> All patches pass scripts/checkpatch.pl.
>
> I tested the basic large-folio immutable read case described in the
> original thread (create a large file, set immutable, drop caches to
> reload the inode, then read it), and additionally verified:
>
> - sparse file
> - heavily fragmented file
>
> In all cases, reads completed without hangs and data was verified against
> the expected contents.
>
> Nanzhe Zhao (5):
> f2fs: Zero f2fs_folio_state on allocation
> f2fs: Accounting large folio subpages before bio submission
> f2fs: add f2fs_block_needs_zeroing() to handle hole blocks
> f2fs: add 'folio_in_bio' to handle readahead folios with no BIO
> submission
> f2fs: advance index and offset after zeroing in large folio read
>
> fs/f2fs/data.c | 54 +++++++++++++++++++++++++++++++++-----------------
> 1 file changed, 36 insertions(+), 18 deletions(-)
>
>
> base-commit: 48b5439e04ddf4508ecaf588219012dc81d947c0
> --
> 2.34.1
Hi Kim,

At 2026-01-07 11:08:50, "Jaegeuk Kim" <jaegeuk@kernel.org> wrote:
>>Hi Nanzhe,
>>
>>fyi - I applied the beginning two patches first.
>>
>>Thanks,
>>

Thanks for applying my small changes.

By the way, I’d like to discuss one more thing about testing for large folios.
It seems the current xfstests coverage may not be sufficient. Would it be
welcome for me to contribute some new test cases upstream?

Also, I think large-folio functionality might also need black-box testing such
as fault injection, where we force certain paths to return errors and verify
behavior under failures. I’d appreciate your thoughts.

Thanks,
Nanzhe
On 1/8/2026 10:17 AM, Nanzhe Zhao wrote:
> Hi Kim,
>
> At 2026-01-07 11:08:50, "Jaegeuk Kim" <jaegeuk@kernel.org> wrote:
>>> Hi Nanzhe,
>>>
>>> fyi - I applied the beginning two patches first.
>>>
>>> Thanks,
>>>
>
> Thanks for applying my small changes.
>
> By the way, I’d like to discuss one more thing about testing for large folios.
> It seems the current xfstests coverage may not be sufficient. Would it be
> welcome for me to contribute some new test cases upstream?

Great, please go ahead; new test cases can be added into the tests/f2fs/
directory.

>
> Also, I think large-folio functionality might also need black-box testing such
> as fault injection, where we force certain paths to return errors and verify
> behavior under failures. I’d appreciate your thoughts.

It's fine to introduce a new helper f2fs_fsverity_verify_page() and inject
an error there.

Thanks,

>
> Thanks,
> Nanzhe
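
To make the fault-injection suggestion concrete, a rough sketch of how such a
hook could follow f2fs's existing time_to_inject() pattern; the helper body
and the FAULT_FSVERITY fault type shown here are hypothetical, not an actual
f2fs API, and a new fault type would have to be added to the f2fs_fault enum:

	/*
	 * Hypothetical sketch only (assumes fs/f2fs/f2fs.h and <linux/fsverity.h>
	 * context): wrap the fs-verity check so a configured fault type can
	 * force a verification failure. FAULT_FSVERITY does not exist today.
	 */
	static bool f2fs_fsverity_verify_page(struct f2fs_sb_info *sbi,
					      struct page *page)
	{
		/* fault-injection point: pretend the verity check failed */
		if (time_to_inject(sbi, FAULT_FSVERITY))
			return false;

		return fsverity_verify_page(page);
	}

A test could then enable the new fault type through the existing
fault_injection/fault_type mount options and check that large-folio reads
fail cleanly instead of hanging.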