[PATCH v13 00/10] enable bs > ps in XFS
Posted by Pankaj Raghav (Samsung) 3 weeks, 4 days ago
From: Pankaj Raghav <p.raghav@samsung.com>

This is the 13th version of the series that enables block size > page size
(Large Block Size) experimental support in XFS. Please consider this for
inclusion in 6.12.

The context and motivation can be seen in the cover letter of the RFC v1 [0].
We also recorded a talk about this effort at LPC [1] for anyone who would
like more context.

Thanks to David Howells, the page cache changes have also been tested on
top of AFS [2] with mapping_min_order set.
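
To give a sense of how a filesystem opts in: the page cache changes let an
inode's mapping declare a minimum folio order. Below is a minimal sketch
using the mapping_set_folio_min_order() helper added in patch 01; the
function name and arguments are made up for illustration, and the real XFS
call sites differ:

  #include <linux/pagemap.h>
  #include <linux/log2.h>

  /*
   * Illustrative only: ensure every folio in this mapping is at least
   * one filesystem block in size.
   */
  static void myfs_setup_inode(struct inode *inode, unsigned int blocksize)
  {
          unsigned int min_order = 0;

          if (blocksize > PAGE_SIZE)
                  min_order = ilog2(blocksize) - PAGE_SHIFT;

          /*
           * Added in patch 01: page cache allocations for this mapping
           * are clamped to [min_order, MAX_PAGECACHE_ORDER].
           */
          mapping_set_folio_min_order(inode->i_mapping, min_order);
  }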

A lot of emphasis has been put on testing using kdevops, starting with an XFS
baseline [3]. The testing has been split into regression and progression.

Regression testing:
In regression testing, we ran the whole test suite to check for regressions on
existing profiles due to the page cache changes.

I also ran the split_huge_page_test selftest on an XFS filesystem to verify
that huge page splits into min order chunks are done correctly.
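
The rule being exercised there is simple: a file-backed folio must not be
split below its mapping's minimum folio order. A simplified sketch of that
rule (not the actual kernel code; the wrapper name is hypothetical), using
the interfaces touched by patches 01 and 04:

  #include <linux/mm.h>
  #include <linux/huge_mm.h>
  #include <linux/pagemap.h>

  /*
   * Simplified sketch: split a locked folio as far as the page cache
   * allows. Anonymous folios may go down to order 0; file-backed folios
   * are clamped to the mapping's minimum folio order.
   */
  static int myfs_split_folio(struct folio *folio, struct list_head *list)
  {
          unsigned int min_order = 0;

          if (!folio_test_anon(folio) && folio->mapping)
                  min_order = mapping_min_folio_order(folio->mapping);

          return split_huge_page_to_list_to_order(&folio->page, list,
                                                  min_order);
  }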

No regressions were found with these patches added on top.

Progression testing:
For progression testing, we tested 8k, 16k, 32k and 64k block sizes. To
compare with existing support, an ARM VM with a 64k base page size (without
our patches) was used as a reference to identify which failures were actually
caused by LBS support on a 4k base page size system.

No new failures were found with the LBS support.

We've done some preliminary performance tests with fio on XFS with a 4k block
size against pmem and NVMe, using both buffered IO and Direct IO, comparing
vanilla against vanilla with these patches applied, and detected no regressions.

We ran sysbench on postgres and mysql for several hours on LBS XFS
without any issues.

We also wrote an eBPF tool called blkalgn [5] to check whether IO sent to the
device is aligned and at least the filesystem block size in length.
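
The property it verifies per request boils down to the predicate below (a
trivial userspace sketch of the check, not the eBPF program itself):

  #include <stdbool.h>
  #include <stdint.h>

  /*
   * True if an I/O starts at a block-size-aligned offset and covers at
   * least one filesystem block.
   */
  static bool io_well_formed(uint64_t offset, uint64_t len,
                             uint64_t fs_block_size)
  {
          return (offset % fs_block_size) == 0 && len >= fs_block_size;
  }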

For those who want this in a git tree, we have it in the kdevops linux tree
under the large-block-minorder-for-next-v13 tag [6].

[0] https://lore.kernel.org/lkml/20230915183848.1018717-1-kernel@pankajraghav.com/
[1] https://www.youtube.com/watch?v=ar72r5Xf7x4
[2] https://lore.kernel.org/linux-mm/3792765.1724196264@warthog.procyon.org.uk/
[3] https://github.com/linux-kdevops/kdevops/blob/master/docs/xfs-bugs.md
489 non-critical issues and 55 critical issues. We've determined and reported
that the 55 critical issues all fall into 5 common XFS asserts or hung
tasks and 2 memory management asserts.
[4] https://github.com/linux-kdevops/fstests/tree/lbs-fixes
[5] https://github.com/iovisor/bcc/pull/4813
[6] https://github.com/linux-kdevops/linux/
[7] https://lore.kernel.org/linux-kernel/Zl20pc-YlIWCSy6Z@casper.infradead.org/#t

Changes since v12:
- Fixed the issue of masking the wrong bits in PATCH 1.
- Collected Tested-by from David Howells.

Dave Chinner (1):
  xfs: use kvmalloc for xattr buffers

Luis Chamberlain (1):
  mm: split a folio in minimum folio order chunks

Matthew Wilcox (Oracle) (1):
  fs: Allow fine-grained control of folio sizes

Pankaj Raghav (7):
  filemap: allocate mapping_min_order folios in the page cache
  readahead: allocate folios with mapping_min_order in readahead
  filemap: cap PTE range to be created to allowed zero fill in
    folio_map_range()
  iomap: fix iomap_dio_zero() for fs bs > system page size
  xfs: expose block size in stat
  xfs: make the calculation generic in xfs_sb_validate_fsb_count()
  xfs: enable block size larger than page size support

 fs/iomap/buffered-io.c        |   4 +-
 fs/iomap/direct-io.c          |  45 +++++++++++--
 fs/xfs/libxfs/xfs_attr_leaf.c |  15 ++---
 fs/xfs/libxfs/xfs_ialloc.c    |   5 ++
 fs/xfs/libxfs/xfs_shared.h    |   3 +
 fs/xfs/xfs_icache.c           |   6 +-
 fs/xfs/xfs_iops.c             |   2 +-
 fs/xfs/xfs_mount.c            |   8 ++-
 fs/xfs/xfs_super.c            |  28 +++++---
 include/linux/huge_mm.h       |  14 ++--
 include/linux/pagemap.h       | 123 ++++++++++++++++++++++++++++++----
 mm/filemap.c                  |  36 ++++++----
 mm/huge_memory.c              |  60 +++++++++++++++--
 mm/readahead.c                |  83 +++++++++++++++++------
 14 files changed, 347 insertions(+), 85 deletions(-)


base-commit: eb8c5ca373cbb018a84eb4db25c863302c9b6314
-- 
2.44.1
Re: [PATCH v13 00/10] enable bs > ps in XFS
Posted by Christian Brauner 3 weeks, 3 days ago
On Thu, 22 Aug 2024 15:50:08 +0200, Pankaj Raghav (Samsung) wrote:
> From: Pankaj Raghav <p.raghav@samsung.com>
> 
> This is the 13th version of the series that enables block size > page size
> (Large Block Size) experimental support in XFS. Please consider this for
> inclusion in 6.12.
> 
> The context and motivation can be seen in the cover letter of the RFC v1 [0].
> We also recorded a talk about this effort at LPC [1] for anyone who would
> like more context.
> 
> [...]

I've rebased this onto v6.11-rc1 and did a test compile for each commit
and ran xfstests for xfs. Looks good so far. Should show up in fs-next
tomorrow.

---

Applied to the vfs.blocksize branch of the vfs/vfs.git tree.
Patches in the vfs.blocksize branch should appear in linux-next soon.

Please report any outstanding bugs that were missed during review in a
new review to the original patch series allowing us to drop it.

It's encouraged to provide Acked-bys and Reviewed-bys even though the
patch has now been applied. If possible patch trailers will be updated.

Note that commit hashes shown below are subject to change due to rebase,
trailer updates or similar. If in doubt, please check the listed branch.

tree:   https://git.kernel.org/pub/scm/linux/kernel/git/vfs/vfs.git
branch: vfs.blocksize

[01/10] fs: Allow fine-grained control of folio sizes
        https://git.kernel.org/vfs/vfs/c/e8201b314c01
[02/10] filemap: allocate mapping_min_order folios in the page cache
        https://git.kernel.org/vfs/vfs/c/c104d25f8c49
[03/10] readahead: allocate folios with mapping_min_order in readahead
        https://git.kernel.org/vfs/vfs/c/7949d4e70aef
[04/10] mm: split a folio in minimum folio order chunks
        https://git.kernel.org/vfs/vfs/c/fd031210c9ce
[05/10] filemap: cap PTE range to be created to allowed zero fill in folio_map_range()
        https://git.kernel.org/vfs/vfs/c/e9f3b433acd0
[06/10] iomap: fix iomap_dio_zero() for fs bs > system page size
        https://git.kernel.org/vfs/vfs/c/d940b3b7b76b
[07/10] xfs: use kvmalloc for xattr buffers
        https://git.kernel.org/vfs/vfs/c/13c9f3c68405
[08/10] xfs: expose block size in stat
        https://git.kernel.org/vfs/vfs/c/4e70eedd93ae
[09/10] xfs: make the calculation generic in xfs_sb_validate_fsb_count()
        https://git.kernel.org/vfs/vfs/c/f8b794f50725
[10/10] xfs: enable block size larger than page size support
        https://git.kernel.org/vfs/vfs/c/0ab3ca31b012
Re: [PATCH v13 00/10] enable bs > ps in XFS
Posted by Luis Chamberlain 3 weeks, 3 days ago
On Thu, Aug 22, 2024 at 03:50:08PM +0200, Pankaj Raghav (Samsung) wrote:
> From: Pankaj Raghav <p.raghav@samsung.com>
> 
> This is the 13th version of the series that enables block size > page size
> (Large Block Size) experimental support in XFS. Please consider this for
> inclusion in 6.12.

Christian, Andrew,

we believe this is ready for integration, and at the last XFS BoF we
were wondering what tree this should go through. I see fs-next is
actually just a branch on linux-next with the merge of a few select
trees [0], but this touches mm, so it's not clear which tree would be
most appropriate to consider.

Please let us know what you think; it would be great to get this into
fs-next somehow to get more exposure / testing.

[0] https://lore.kernel.org/all/20240528091629.3b8de7e0@canb.auug.org.au/

  Luis