[PATCH v3 00/17] ext4: better scalability for ext4 block allocation
Posted by Baokun Li 2 months, 3 weeks ago
Changes since v2:
 * Collect RVB from Jan Kara. (Thanks for your review!)
 * Add patch 2.
 * Patch 4: Switch to READ_ONCE/WRITE_ONCE (a big single-process win)
        instead of smp_load_acquire/smp_store_release (only a slight
        multi-process gain). (Suggested by Jan Kara)
 * Patch 5: The number of global goals is now set to the lesser of the CPU
        count or one-fourth of the group count. This prevents setting too
        many goals for small filesystems, which would lead to file dispersion.
        (Suggested by Jan Kara)
 * Patch 5: Directly use kfree() to release s_mb_last_groups instead of
        kvfree(). (Suggested by Julia Lawall)
 * Patch 11: Even without mb_optimize_scan enabled, we now always attempt
        to remove the group from the old order list. (Suggested by Jan Kara)
 * Patches 14-16: Added comments for clarity, refined logic, and removed
        obsolete variables.
 * Update performance test results and indicate raw disk write bandwidth. 

Thanks to Honza for your suggestions!

v2: https://lore.kernel.org/r/20250623073304.3275702-1-libaokun1@huawei.com

Changes since v1:
 * Patch 1: Prioritize checking if a group is busy to avoid unnecessary
       checks and buddy loading. (Thanks to Ojaswin for the suggestion!)
 * Patch 4: Use multiple global goals instead of moving the goal to the
       inode level. (Thanks to Honza for the suggestion!)
 * Collect RVB from Jan Kara and Ojaswin Mujoo. (Thanks for your review!)
 * Add patches 2, 3, and 7-16.
 * Due to a change of test server, the relevant test data was refreshed.

v1: https://lore.kernel.org/r/20250523085821.1329392-1-libaokun@huaweicloud.com

Since servers have more and more CPUs, and we're running more containers
on them, we've been using will-it-scale to test how well ext4 scales. The
fallocate2 test (append 8KB to 1MB, truncate to 0, repeat), run concurrently
in 64 containers, revealed significant contention in block allocation and
freeing, leading to a much lower average fallocate OPS than with a single
container (see below).

   1   |    2   |    4   |    8   |   16   |   32   |   64
-------|--------|--------|--------|--------|--------|-------
295287 | 70665  | 33865  | 19387  | 10104  |  5588  |  3588

Under this test scenario, the primary operations are block allocation
(fallocate) and block deallocation (truncate). The main bottlenecks for
these operations are the group lock and s_md_lock. Therefore, this patch
series primarily focuses on optimizing the code related to these two locks.

The following is a brief overview of the patches, see the patches for
more details.

Patch 1: Add ext4_try_lock_group() to skip busy groups to take advantage
of the large number of ext4 groups.
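
As a minimal sketch of the idea (not the literal patch code), the helper
simply try-locks the group's spinlock; the caller-side function below is
purely illustrative, and assumes ext4's existing ext4_group_lock_ptr()
and ext4_unlock_group() helpers:

```c
/*
 * Sketch only: take the group's spinlock only if it is uncontended, so
 * the allocator can move on to one of the many other block groups
 * instead of spinning here.
 */
static inline bool ext4_try_lock_group(struct super_block *sb,
				       ext4_group_t group)
{
	/* spin_trylock() returns nonzero on success, 0 if already held */
	return spin_trylock(ext4_group_lock_ptr(sb, group));
}

/* Illustrative caller: skip busy groups rather than waiting on them. */
static bool scan_group_if_uncontended(struct super_block *sb,
				      ext4_group_t group)
{
	if (!ext4_try_lock_group(sb, group))
		return false;	/* busy: caller tries the next group */
	/* ... scan this group's buddy bitmap for free extents ... */
	ext4_unlock_group(sb, group);
	return true;
}
```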

Patch 2: Separate stream goal hits from s_bal_goals in preparation for the
cleanup of s_mb_last_start.

Patches 3-5: Split stream allocation's global goal into multiple goals and
remove the unnecessary and expensive s_md_lock.
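
A rough sketch of the multiple-goal scheme follows; the helper names and
the s_mb_nr_global_goals field are hypothetical (only s_mb_last_groups is
taken from the changelog above), and the slot count is capped at the
lesser of the CPU count and one-fourth of the group count:

```c
/*
 * Illustrative only: replace the single s_md_lock-protected stream goal
 * with an array of goal slots so concurrent allocators rarely collide.
 */
static inline unsigned int mb_goal_slot(struct ext4_sb_info *sbi)
{
	/* Spread writers by CPU; a collision only makes the hint less ideal. */
	return raw_smp_processor_id() % sbi->s_mb_nr_global_goals;
}

static inline ext4_group_t mb_read_goal(struct ext4_sb_info *sbi)
{
	/* READ_ONCE is enough: the goal is just a search hint. */
	return READ_ONCE(sbi->s_mb_last_groups[mb_goal_slot(sbi)]);
}

static inline void mb_update_goal(struct ext4_sb_info *sbi, ext4_group_t group)
{
	WRITE_ONCE(sbi->s_mb_last_groups[mb_goal_slot(sbi)], group);
}
```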

Patches 6-7: Minor cleanups.

Patch 8: Convert s_mb_free_pending to atomic_t and use memory barriers
for consistency, instead of relying on the expensive s_md_lock.
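
A simplified sketch of what the conversion buys (the accessor names here
are hypothetical): with the field declared as atomic_t in struct
ext4_sb_info, updating the freed-but-not-yet-committed counter no longer
needs s_md_lock:

```c
static inline void mb_free_pending_add(struct ext4_sb_info *sbi, int clusters)
{
	atomic_add(clusters, &sbi->s_mb_free_pending);
}

static inline void mb_free_pending_sub(struct ext4_sb_info *sbi, int clusters)
{
	atomic_sub(clusters, &sbi->s_mb_free_pending);
}

static inline int mb_free_pending_read(struct ext4_sb_info *sbi)
{
	return atomic_read(&sbi->s_mb_free_pending);
}
```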

Patch 9: When inserting free extents, we now attempt to merge them with
already inserted extents first, to reduce s_md_lock contention.
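
Conceptually it looks like the sketch below; the struct and function are
hypothetical simplifications of mballoc's per-group free-extent tracking:

```c
/*
 * When freeing blocks, first try to extend a neighbouring extent that is
 * already queued for this commit; only a genuinely new node needs the
 * s_md_lock-protected per-commit list insertion.
 */
struct freed_extent {
	struct rb_node	node;	/* per-group tree, under the group lock */
	ext4_grpblk_t	start;
	ext4_grpblk_t	len;
};

static bool try_merge_freed(struct freed_extent *left,
			    struct freed_extent *right,
			    ext4_grpblk_t start, ext4_grpblk_t len)
{
	if (left && left->start + left->len == start) {
		left->len += len;	/* extend the left neighbour */
		return true;
	}
	if (right && start + len == right->start) {
		right->start = start;	/* extend the right neighbour */
		right->len += len;
		return true;
	}
	return false;	/* no adjacent extent: insert a new node */
}
```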

Patch 10: Update bb_avg_fragment_size_order to -1 when a group is out of
free blocks, eliminating efficiency-impacting "zombie groups."
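
A simplified sketch of the fix, using the bb_free/bb_fragments fields of
struct ext4_group_info (the real helper differs in details, e.g. how the
order is capped):

```c
/*
 * A group with no free blocks gets order -1, so it is dropped from the
 * average-fragment-size lists instead of lingering as a "zombie" that
 * scans keep revisiting.
 */
static int mb_avg_fragment_size_order_sketch(struct ext4_group_info *grp)
{
	if (grp->bb_free == 0 || grp->bb_fragments == 0)
		return -1;	/* -1: linked on no order list at all */
	/* order of (free blocks / number of free fragments), as before */
	return fls(grp->bb_free / grp->bb_fragments) - 1;
}
```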

Patch 11: Fix potential largest free orders lists corruption when the
mb_optimize_scan mount option is switched on or off.

Patches 12-17: Convert mb_optimize_scan's existing unordered list traversal
to ordered xarrays, thereby reducing contention between block allocation
and freeing, similar to linear traversal.
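
As a sketch of the linear-like walk over one order's xarray, indexed by
group number (the suitability predicate is passed in here as a
hypothetical stand-in for the real criteria checks; the actual series
folds this into the group scanning helpers):

```c
static struct ext4_group_info *
scan_order_xarray(struct xarray *xa, ext4_group_t goal,
		  bool (*suitable)(struct ext4_group_info *grp))
{
	struct ext4_group_info *grp;
	unsigned long i;

	/* First pass: the goal group and everything after it. */
	xa_for_each_start(xa, i, grp, goal) {
		if (suitable(grp))
			return grp;
	}

	/* Second pass: wrap around to the groups before the goal. */
	if (goal) {
		xa_for_each_range(xa, i, grp, 0, goal - 1) {
			if (suitable(grp))
				return grp;
		}
	}

	return NULL;
}
```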

"kvm-xfstests -c ext4/all -g auto" has been executed with no new failures.

Here are some performance test data for your reference:

Test: Running will-it-scale/fallocate2 on CPU-bound containers.
Observation: Average fallocate operations per container per second.

|CPU: Kunpeng 920   |          P80           |            P1           |
|Memory: 512GB      |------------------------|-------------------------|
|960GB SSD (0.5GB/s)| base  |    patched     | base   |    patched     |
|-------------------|-------|----------------|--------|----------------|
|mb_optimize_scan=0 | 2667  | 20049 (+651%)  | 314065 | 316724 (+0.8%) |
|mb_optimize_scan=1 | 2643  | 19342 (+631%)  | 316344 | 328324 (+3.7%) |

|CPU: AMD 9654 * 2  |          P96           |             P1          |
|Memory: 1536GB     |------------------------|-------------------------|
|960GB SSD (1GB/s)  | base  |    patched     | base   |    patched     |
|-------------------|-------|----------------|--------|----------------|
|mb_optimize_scan=0 | 3450  | 52125 (+1410%) | 205851 | 215136 (+4.5%) |
|mb_optimize_scan=1 | 3209  | 50331 (+1468%) | 207373 | 209431 (+0.9%) |

Tests also evaluated this patch set's impact on fragmentation: a minor
increase in free space fragmentation for multi-process workloads, but a
significant decrease in file fragmentation:

Test Script:
```shell
#!/bin/bash

dir="/tmp/test"
disk="/dev/sda"

mkdir -p $dir

for scan in 0 1 ; do
    mkfs.ext4 -F -E lazy_itable_init=0,lazy_journal_init=0 \
              -O orphan_file $disk 200G
    mount -o mb_optimize_scan=$scan $disk $dir

    fio -directory=$dir -direct=1 -iodepth 128 -thread -ioengine=falloc \
        -rw=write -bs=4k -fallocate=none -numjobs=64 -file_append=1 \
        -size=1G -group_reporting -name=job1 -cpus_allowed_policy=split

    e2freefrag $disk
    e4defrag -c $dir # Without the patch, this could take 5-6 hours.
    filefrag ${dir}/job* | awk '{print $2}' | \
                           awk '{sum+=$1} END {print sum/NR}'
    umount $dir
done
```

Test results:
-------------------------------------------------------------|
                         |       base      |      patched    |
-------------------------|--------|--------|--------|--------|
mb_optimize_scan         | linear |opt_scan| linear |opt_scan|
-------------------------|--------|--------|--------|--------|
bw(MiB/s)                | 217    | 217    | 5718   | 5626   |
-------------------------|-----------------------------------|
Avg. free extent size(KB)| 1943732| 1943732| 1316212| 1171208|
Num. free extent         | 71     | 71     | 105    | 118    |
-------------------------------------------------------------|
Avg. extents per file    | 261967 | 261973 | 588    | 570    |
Avg. size per extent(KB) | 4      | 4      | 1780   | 1837   |
Fragmentation score      | 100    | 100    | 2      | 2      |
-------------------------------------------------------------|

Comments and questions are, as always, welcome.

Thanks,
Baokun

Baokun Li (17):
  ext4: add ext4_try_lock_group() to skip busy groups
  ext4: separate stream goal hits from s_bal_goals for better tracking
  ext4: remove unnecessary s_mb_last_start
  ext4: remove unnecessary s_md_lock on update s_mb_last_group
  ext4: utilize multiple global goals to reduce contention
  ext4: get rid of some obsolete EXT4_MB_HINT flags
  ext4: fix typo in CR_GOAL_LEN_SLOW comment
  ext4: convert sbi->s_mb_free_pending to atomic_t
  ext4: merge freed extent with existing extents before insertion
  ext4: fix zombie groups in average fragment size lists
  ext4: fix largest free orders lists corruption on mb_optimize_scan
    switch
  ext4: factor out __ext4_mb_scan_group()
  ext4: factor out ext4_mb_might_prefetch()
  ext4: factor out ext4_mb_scan_group()
  ext4: convert free groups order lists to xarrays
  ext4: refactor choose group to scan group
  ext4: implement linear-like traversal across order xarrays

 fs/ext4/balloc.c            |   2 +-
 fs/ext4/ext4.h              |  61 +--
 fs/ext4/mballoc.c           | 895 ++++++++++++++++++++----------------
 fs/ext4/mballoc.h           |   9 +-
 include/trace/events/ext4.h |   3 -
 5 files changed, 534 insertions(+), 436 deletions(-)

-- 
2.46.1

Re: [PATCH v3 00/17] ext4: better scalability for ext4 block allocation
Posted by Theodore Ts'o 2 months, 2 weeks ago
On Mon, 14 Jul 2025 21:03:10 +0800, Baokun Li wrote:
> Changes since v2:
>  * Collect RVB from Jan Kara. (Thanks for your review!)
>  * Add patch 2.
>  * Patch 4: Switch to READ_ONCE/WRITE_ONCE (a big single-process win)
>         instead of smp_load_acquire/smp_store_release (only a slight
>         multi-process gain). (Suggested by Jan Kara)
>  * Patch 5: The number of global goals is now set to the lesser of the CPU
>         count or one-fourth of the group count. This prevents setting too
>         many goals for small filesystems, which would lead to file dispersion.
>         (Suggested by Jan Kara)
>  * Patch 5: Directly use kfree() to release s_mb_last_groups instead of
>         kvfree(). (Suggested by Julia Lawall)
>  * Patch 11: Even without mb_optimize_scan enabled, we now always attempt
>         to remove the group from the old order list. (Suggested by Jan Kara)
>  * Patches 14-16: Added comments for clarity, refined logic, and removed
>         obsolete variables.
>  * Update performance test results and indicate raw disk write bandwidth.
> 
> [...]

Applied, thanks!

[01/17] ext4: add ext4_try_lock_group() to skip busy groups
        commit: 68f9a4d4f74ac2f6b8a836600caedb17b1f417e0
[02/17] ext4: separate stream goal hits from s_bal_goals for better tracking
        commit: c6a98dbdff75a960a8976294a56b3366305b4fed
[03/17] ext4: remove unnecessary s_mb_last_start
        commit: 8eb252a81b311d6b2a59176c9ef7e17d731e17e6
[04/17] ext4: remove unnecessary s_md_lock on update s_mb_last_group
        commit: ea906991a494eeaf8b6a4ac82c568071a6b6b52c
[05/17] ext4: utilize multiple global goals to reduce contention
        commit: 174688d2e06ef9e03d5b93ce2386e2e9a5af6e7b
[06/17] ext4: get rid of some obsolete EXT4_MB_HINT flags
        commit: d82c95e546dc57b3cd2d46e38ac216cd08dfab3c
[07/17] ext4: fix typo in CR_GOAL_LEN_SLOW comment
        commit: 1930d818c5ecfd557eae0f581cc9b6392debf9c6
[08/17] ext4: convert sbi->s_mb_free_pending to atomic_t
        commit: 3772fe7b4225f21a1bfe63e4a338702cc3c153de
[09/17] ext4: merge freed extent with existing extents before insertion
        commit: 92ba7b95ef0743c76688fd3d4c644e8ba4fd4cc4
[10/17] ext4: fix zombie groups in average fragment size lists
        commit: 84521ebf83028c0321050b8665e05d5cdef5d0d8
[11/17] ext4: fix largest free orders lists corruption on mb_optimize_scan switch
        commit: bbe11dd13a3ff78ed256b8c66356624284c66f99
[12/17] ext4: factor out __ext4_mb_scan_group()
        commit: 47fb751bf947da35f6669ddf5ab9869f58f991e2
[13/17] ext4: factor out ext4_mb_might_prefetch()
        commit: 12a5b877c314778ddf9a5c603eeb1803a514ab58
[14/17] ext4: factor out ext4_mb_scan_group()
        commit: 6e0275f6e713f55dd3fc23be317ec11f8db1766d
[15/17] ext4: convert free groups order lists to xarrays
        commit: bffe0d5051626a3e6ce4b03e247814af2d595ee2
[16/17] ext4: refactor choose group to scan group
        commit: 56b493f9ac002ee7963eed22eb4131d120d60fd3
[17/17] ext4: implement linear-like traversal across order xarrays
        commit: feffac547fb53d7a3fedd47a50fa91bd2d804d41

Best regards,
-- 
Theodore Ts'o <tytso@mit.edu>
Re: [PATCH v3 00/17] ext4: better scalability for ext4 block allocation
Posted by Zhang Yi 2 months, 3 weeks ago
On 2025/7/14 21:03, Baokun Li wrote:
> Changes since v2:
>  * Collect RVB from Jan Kara. (Thanks for your review!)
>  * Add patch 2.
>  * Patch 4: Switch to READ_ONCE/WRITE_ONCE (a big single-process win)
>         instead of smp_load_acquire/smp_store_release (only a slight
>         multi-process gain). (Suggested by Jan Kara)
>  * Patch 5: The number of global goals is now set to the lesser of the CPU
>         count or one-fourth of the group count. This prevents setting too
>         many goals for small filesystems, which would lead to file dispersion.
>         (Suggested by Jan Kara)
>  * Patch 5: Directly use kfree() to release s_mb_last_groups instead of
>         kvfree(). (Suggested by Julia Lawall)
>  * Patch 11: Even without mb_optimize_scan enabled, we now always attempt
>         to remove the group from the old order list. (Suggested by Jan Kara)
>  * Patches 14-16: Added comments for clarity, refined logic, and removed
>         obsolete variables.
>  * Update performance test results and indicate raw disk write bandwidth. 
> 
> Thanks to Honza for your suggestions!

This is a nice improvement! Overall, the series looks good to me!

Reviewed-by: Zhang Yi <yi.zhang@huawei.com>

> 
> [...]