Percpu sheaves caching was introduced as opt-in but the goal was to
eventually move all caches to them. This is the next step, enabling
sheaves for all caches (except the two bootstrap ones) and then removing
the per cpu (partial) slabs and lots of associated code.
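For orientation, here is a minimal sketch of what the sheaves layer looks
like, abbreviated from the definitions in mm/slub.c (the real structs carry
more fields, e.g. for rcu_free handling, so treat this as illustrative):

/*
 * Simplified sketch of the percpu sheaves layer. Each CPU caches object
 * pointers in a "main" sheaf guarded by a local trylock, with a "spare"
 * sheaf swapped in when main runs empty on alloc or full on free.
 */
struct slab_sheaf {
	unsigned int size;		/* objects currently held */
	void *objects[];		/* up to s->sheaf_capacity pointers */
};

struct slub_percpu_sheaves {
	local_trylock_t lock;		/* cheap local lock; a real lock on PREEMPT_RT */
	struct slab_sheaf *main;	/* allocations pop, frees push here */
	struct slab_sheaf *spare;	/* swapped in when main is empty/full */
};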
Besides (hopefully) improved performance, this removes the rather
complicated code related to the lockless fastpaths (using
this_cpu_try_cmpxchg128/64) and its complications with PREEMPT_RT or
kmalloc_nolock().
The lockless slab freelist+counters update operation using
try_cmpxchg128/64 remains and is crucial for freeing remote NUMA objects
without repeating the "alien" array flushing of SLAB, and to allow
flushing objects from sheaves to slabs mostly without the node
list_lock.
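As a rough illustration of the retained operation, here is a sketch modeled
on SLUB's freelist_aba_t and __update_freelist_fast() (simplified, not the
exact kernel code): the freelist head and the packed counters share one
128-bit unit, so a single compare-exchange updates both, or fails if a
concurrent alloc/free changed either of them:

/*
 * The slab's freelist head and its packed counters (inuse/objects/frozen)
 * occupy adjacent words, so one try_cmpxchg128() updates both atomically.
 * On failure the caller retries; configurations without 128-bit cmpxchg
 * support use a bit-spinlock slow path instead.
 */
union freelist_counters {
	struct {
		void *freelist;		/* first free object in the slab */
		unsigned long counters;	/* packed inuse/objects/frozen */
	};
	u128 full;
};

static inline bool update_freelist_fast(union freelist_counters *fc,
					union freelist_counters *old,
					union freelist_counters new)
{
	return try_cmpxchg128(&fc->full, &old->full, new.full);
}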
Sending this v4 because various changes accumulated in the branch due to
review and -next exposure (see the list below). Thanks for all the
reviews!
Git branch for the v4
https://git.kernel.org/pub/scm/linux/kernel/git/vbabka/linux.git/log/?h=sheaves-for-all-v4
Which is a snapshot of:
https://git.kernel.org/pub/scm/linux/kernel/git/vbabka/linux.git/log/?h=b4/sheaves-for-all
Based on:
https://git.kernel.org/pub/scm/linux/kernel/git/vbabka/slab.git/log/?h=slab/for-7.0/sheaves-base
- includes a sheaves optimization that seemed minor, but an lkp test
robot result showed significant improvements:
https://lore.kernel.org/all/202512291555.56ce2e53-lkp@intel.com/
(could be an uncommon corner case workload though)
- includes the kmalloc_nolock() fix commit a4ae75d1b6a2 that is undone
as part of this series
Significant (but not critical) remaining TODOs:
- Integration of rcu sheaves handling with kfree_rcu batching.
- Currently the kfree_rcu batching is almost completely bypassed. I'm
thinking it could be adjusted to handle rcu sheaves in addition to
individual objects, to get the best of both.
- Performance evaluation. Petr Tesarik has been doing that on the RFC
with some promising results (thanks!) and also found a memory leak.
Note that, as with many things, this caching scheme change is a tradeoff, as
summarized by Christoph:
https://lore.kernel.org/all/f7c33974-e520-387e-9e2f-1e523bfe1545@gentwo.org/
- Objects allocated from sheaves should have better temporal locality
(likely recently freed, thus cache hot) but worse spatial locality
(likely from many different slabs, increasing memory usage and
possibly TLB pressure on kernel's direct map).
Signed-off-by: Vlastimil Babka <vbabka@suse.cz>
---
Changes in v4:
- Fix up both missing and spurious r-b tags from v3, and add new ones
(big thanks to Hao Li, Harry, and Suren!)
- Fix infinite recursion with kmemleak (Breno Leitao)
- Use cache_has_sheaves() in pcs_destroy() (Suren)
- Use cache_has_sheaves() in kvfree_rcu_barrier_on_cache() (Hao Li)
- Bypass sheaf for remote object free also in kfree_nolock() (Harry)
- WRITE_ONCE slab->counters in __update_freelist_slow() so
get_partial_node_bulk() can stop being paranoid (Harry)
- Tweak conditions in alloc_from_new_slab() (Hao Li, Suren)
- Rename get_partial*() functions to get_from_partial*() (Suren)
- Rename variable freelist to object in ___slab_alloc() (Suren)
- Separate struct partial_bulk_context instead of extending.
- Rename flush_cpu_slab() to flush_cpu_sheaves() (Hao Li)
- Add "mm/slab: fix false lockdep warning in __kfree_rcu_sheaf()" from
Harry.
- Add counting of FREE_SLOWPATH stat to some missing places (Suren, Hao
Li)
- Link to v3: https://patch.msgid.link/20260116-sheaves-for-all-v3-0-5595cb000772@suse.cz
Changes in v3:
- Rebase to current slab/for-7.0/sheaves which itself is rebased to
slab/for-next-fixes to include commit a4ae75d1b6a2 ("slab: fix
kmalloc_nolock() context check for PREEMPT_RT")
- Revert a4ae75d1b6a2 as part of "slab: simplify kmalloc_nolock()" as
it's no longer necessary.
- Add cache_has_sheaves() helper to test for s->sheaf_capacity, use it
in more places instead of s->cpu_sheaves tests that were missed
(Hao Li)
- Fix a bug where kmalloc_nolock() could end up trying to allocate empty
sheaf (not compatible with !allow_spin) in __pcs_replace_full_main()
(Hao Li)
- Fix missing inc_slabs_node() in ___slab_alloc() ->
alloc_from_new_slab() path. (Hao Li)
- Also a bug where refill_objects() -> alloc_from_new_slab ->
free_new_slab_nolock() (previously defer_deactivate_slab()) would
do inc_slabs_node() without matching dec_slabs_node()
- Make __free_slab call free_frozen_pages_nolock() when !allow_spin.
This was correct in the first RFC. (Hao Li)
- Add patch to make SLAB_CONSISTENCY_CHECKS prevent merging.
- Add tags from several people (thanks!)
- Fix checkpatch warnings.
- Link to v2: https://patch.msgid.link/20260112-sheaves-for-all-v2-0-98225cfb50cf@suse.cz
Changes in v2:
- Rebased to v6.19-rc1+slab.git slab/for-7.0/sheaves
- Some of the preliminary patches from the RFC went in there.
- Incorporate feedback/reports from many people (thanks!), including:
- Make caches with sheaves mergeable.
- Fix a major memory leak.
- Cleanup of stat items.
- Link to v1: https://patch.msgid.link/20251023-sheaves-for-all-v1-0-6ffa2c9941c0@suse.cz
---
Harry Yoo (1):
mm/slab: fix false lockdep warning in __kfree_rcu_sheaf()
Vlastimil Babka (21):
mm/slab: add rcu_barrier() to kvfree_rcu_barrier_on_cache()
slab: add SLAB_CONSISTENCY_CHECKS to SLAB_NEVER_MERGE
mm/slab: move and refactor __kmem_cache_alias()
mm/slab: make caches with sheaves mergeable
slab: add sheaves to most caches
slab: introduce percpu sheaves bootstrap
slab: make percpu sheaves compatible with kmalloc_nolock()/kfree_nolock()
slab: handle kmalloc sheaves bootstrap
slab: add optimized sheaf refill from partial list
slab: remove cpu (partial) slabs usage from allocation paths
slab: remove SLUB_CPU_PARTIAL
slab: remove the do_slab_free() fastpath
slab: remove defer_deactivate_slab()
slab: simplify kmalloc_nolock()
slab: remove struct kmem_cache_cpu
slab: remove unused PREEMPT_RT specific macros
slab: refill sheaves from all nodes
slab: update overview comments
slab: remove frozen slab checks from __slab_free()
mm/slub: remove DEACTIVATE_TO_* stat items
mm/slub: cleanup and repurpose some stat items
include/linux/slab.h | 6 -
mm/Kconfig | 11 -
mm/internal.h | 1 +
mm/page_alloc.c | 5 +
mm/slab.h | 65 +-
mm/slab_common.c | 61 +-
mm/slub.c | 2689 ++++++++++++++++++--------------------------------
7 files changed, 1031 insertions(+), 1807 deletions(-)
---
base-commit: a66f9c0f1ba2dd05fa994c800ebc63f265155f91
change-id: 20251002-sheaves-for-all-86ac13dc47a5
Best regards,
--
Vlastimil Babka <vbabka@suse.cz>
Hi Vlastimil,
I conducted a detailed performance evaluation of each patch on my setup.
During my tests, I observed two points in the series where performance
regressions occurred:
Patch 10: I noticed a ~16% regression in my environment. My hypothesis is
that with this patch, the allocation fast path bypasses the percpu partial
list, leading to increased contention on the node list.
Patch 12: This patch seems to introduce an additional ~9.7% regression. I
suspect this might be because the free path also loses buffering from the
percpu partial list, further exacerbating node list contention.
These are the only two patches in the series where I observed noticeable
regressions. The rest of the patches did not show significant performance
changes in my tests.
I hope these test results are helpful.
--
Thanks,
Hao
On 1/29/26 16:18, Hao Li wrote:
> Hi Vlastimil,
>
> I conducted a detailed performance evaluation of each patch on my setup.

Thanks! What was the benchmark(s) used?

Importantly, does it rely on vma/maple_node objects? So previously those
would become kind of double cached by both sheaves and cpu (partial) slabs
(and thus hopefully benefited more than they should) since sheaves
introduction in 6.18, and now they are not double cached anymore?

> During my tests, I observed two points in the series where performance
> regressions occurred:
>
> Patch 10: I noticed a ~16% regression in my environment. My hypothesis is
> that with this patch, the allocation fast path bypasses the percpu partial
> list, leading to increased contention on the node list.

That makes sense.

> Patch 12: This patch seems to introduce an additional ~9.7% regression. I
> suspect this might be because the free path also loses buffering from the
> percpu partial list, further exacerbating node list contention.

Hmm yeah... we did put the previously full slabs there, avoiding the lock.

> These are the only two patches in the series where I observed noticeable
> regressions. The rest of the patches did not show significant performance
> changes in my tests.
>
> I hope these test results are helpful.

They are, thanks. I'd however hope it's just some particular test that has
these regressions, which can be explained by the loss of double caching.
On Thu, Jan 29, 2026 at 04:28:01PM +0100, Vlastimil Babka wrote:
>
> So previously those would become kind of double
> cached by both sheaves and cpu (partial) slabs (and thus hopefully benefited
> more than they should) since sheaves introduction in 6.18, and now they are
> not double cached anymore?
>
I've conducted new tests, and here are the details of three scenarios:
1. Checked out commit 9d4e6ab865c4, which represents the state before the
introduction of the sheaves mechanism.
2. Tested with 6.19-rc5, which includes sheaves but does not yet apply the
"sheaves for all" patchset.
3. Applied the "sheaves for all" patchset and also included the "avoid
list_lock contention" patch.
Results:
For scenario 2 (with sheaves but without "sheaves for all"), there is a
noticeable performance improvement compared to scenario 1:
will-it-scale.128.processes +34.3%
will-it-scale.192.processes +35.4%
will-it-scale.64.processes +31.5%
will-it-scale.per_process_ops +33.7%
For scenario 3 (after applying "sheaves for all"), performance slightly
regressed compared to scenario 1:
will-it-scale.128.processes -1.3%
will-it-scale.192.processes -4.2%
will-it-scale.64.processes -1.2%
will-it-scale.per_process_ops -2.1%
Analysis:
So when the sheaf size for maple nodes is set to 32 by default, the performance
of fully adopting the sheaves mechanism roughly matches the performance of the
previous approach that relied solely on the percpu slab partial list.
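(For context, the default of 32 comes from the maple tree's cache creation;
below is a hedged sketch of the sheaf opt-in, modeled on lib/maple_tree.c
and the kmem_cache_args API; the alignment/flags details are illustrative,
not verbatim kernel code:)

/*
 * A cache opts into sheaves by setting sheaf_capacity in its creation
 * args; the maple tree picked a capacity of 32 objects per percpu sheaf.
 */
struct kmem_cache_args args = {
	.sheaf_capacity = 32,	/* each percpu sheaf holds up to 32 nodes */
};

maple_node_cache = kmem_cache_create("maple_node",
				     sizeof(struct maple_node),
				     &args, SLAB_PANIC);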
The performance regression observed with the "sheaves for all" patchset can
actually be explained as follows: moving from scenario 1 to scenario 2
introduces an additional cache layer, which boosts performance temporarily.
When moving from scenario 2 to scenario 3, this additional cache layer is
removed, then performance reverted to its original level.
So I think the performance of the percpu partial list and the sheaves mechanism
is roughly the same, which is consistent with our expectations.
--
Thanks,
Hao
On 1/30/26 05:50, Hao Li wrote:
> On Thu, Jan 29, 2026 at 04:28:01PM +0100, Vlastimil Babka wrote:
>>
>> So previously those would become kind of double
>> cached by both sheaves and cpu (partial) slabs (and thus hopefully benefited
>> more than they should) since sheaves introduction in 6.18, and now they are
>> not double cached anymore?
>>
>
> I've conducted new tests, and here are the details of three scenarios:
>
> 1. Checked out commit 9d4e6ab865c4, which represents the state before the
> introduction of the sheaves mechanism.
> 2. Tested with 6.19-rc5, which includes sheaves but does not yet apply the
> "sheaves for all" patchset.
> 3. Applied the "sheaves for all" patchset and also included the "avoid
> list_lock contention" patch.
>
> Results:
>
> For scenario 2 (with sheaves but without "sheaves for all"), there is a
> noticeable performance improvement compared to scenario 1:
>
> will-it-scale.128.processes   +34.3%
> will-it-scale.192.processes   +35.4%
> will-it-scale.64.processes    +31.5%
> will-it-scale.per_process_ops +33.7%
>
> For scenario 3 (after applying "sheaves for all"), performance slightly
> regressed compared to scenario 1:
>
> will-it-scale.128.processes   -1.3%
> will-it-scale.192.processes   -4.2%
> will-it-scale.64.processes    -1.2%
> will-it-scale.per_process_ops -2.1%
>
> Analysis:
>
> So when the sheaf size for maple nodes is set to 32 by default, the performance
> of fully adopting the sheaves mechanism roughly matches the performance of the
> previous approach that relied solely on the percpu slab partial list.
>
> The performance regression observed with the "sheaves for all" patchset can
> actually be explained as follows: moving from scenario 1 to scenario 2
> introduces an additional cache layer, which boosts performance temporarily.
> When moving from scenario 2 to scenario 3, this additional cache layer is
> removed, then performance reverted to its original level.
>
> So I think the performance of the percpu partial list and the sheaves mechanism
> is roughly the same, which is consistent with our expectations.

Thanks!
On Wed, 4 Feb 2026, Vlastimil Babka wrote:

> > So I think the performance of the percpu partial list and the sheaves mechanism
> > is roughly the same, which is consistent with our expectations.
>
> Thanks!

There are other considerations that usually do not show up well in
benchmark tests.

The sheaves cannot do the spatial optimizations that cpu partial lists
provide. Fragmentation in slab caches (and therefore the number of
partial slab pages) will increase since:

1. The objects are not immediately returned to their slab pages but end up
in some queuing structure.

2. Available objects from a single slab page are not allocated in sequence
to empty partial pages and remove the page from the partial lists.

Objects are put into some queue on free and are processed on a FIFO basis.
Objects allocated may come from lots of different slab pages, potentially
increasing TLB pressure.
On 2/4/26 19:24, Christoph Lameter (Ampere) wrote:
> On Wed, 4 Feb 2026, Vlastimil Babka wrote:
>
>> > So I think the performance of the percpu partial list and the sheaves mechanism
>> > is roughly the same, which is consistent with our expectations.
>>
>> Thanks!
>
> There are other considerations that usually do not show up well in
> benchmark tests.
>
> The sheaves cannot do the spatial optimizations that cpu partial lists
> provide. Fragmentation in slab caches (and therefore the number of
> partial slab pages) will increase since:
>
> 1. The objects are not immediately returned to their slab pages but end up
> in some queuing structure.
>
> 2. Available objects from a single slab page are not allocated in sequence
> to empty partial pages and remove the page from the partial lists.
>
> Objects are put into some queue on free and are processed on a FIFO basis.
> Objects allocated may come from lots of different slab pages, potentially
> increasing TLB pressure.

IIUC this is what you said before [1] and the cover letter has a link and a
summary of it.

[1] https://lore.kernel.org/all/f7c33974-e520-387e-9e2f-1e523bfe1545@gentwo.org/
On Fri, Jan 30, 2026 at 12:50:25PM +0800, Hao Li wrote:
> On Thu, Jan 29, 2026 at 04:28:01PM +0100, Vlastimil Babka wrote:
> >
> > So previously those would become kind of double
> > cached by both sheaves and cpu (partial) slabs (and thus hopefully benefited
> > more than they should) since sheaves introduction in 6.18, and now they are
> > not double cached anymore?
> >
>
> I've conducted new tests, and here are the details of three scenarios:
>
> 1. Checked out commit 9d4e6ab865c4, which represents the state before the
> introduction of the sheaves mechanism.
> 2. Tested with 6.19-rc5, which includes sheaves but does not yet apply the
> "sheaves for all" patchset.
> 3. Applied the "sheaves for all" patchset and also included the "avoid
> list_lock contention" patch.
Here is my testing environment information and the raw test data.
Command:
cd will-it-scale/
python3 ./runtest.py mmap2 25 process 0 0 64 128 192
Env:
CPU(s): 192
Thread(s) per core: 1
Core(s) per socket: 96
Socket(s): 2
NUMA node(s): 4
NUMA node0 CPU(s): 0-47
NUMA node1 CPU(s): 48-95
NUMA node2 CPU(s): 96-143
NUMA node3 CPU(s): 144-191
Memory: 1.5T
Raw data:
1. Checked out commit 9d4e6ab865c4, which represents the state before the
introduction of the sheaves mechanism.
{
"time.elapsed_time": 93.88,
"time.elapsed_time.max": 93.88,
"time.file_system_inputs": 2640,
"time.file_system_outputs": 128,
"time.involuntary_context_switches": 417738,
"time.major_page_faults": 54,
"time.maximum_resident_set_size": 90012,
"time.minor_page_faults": 80569,
"time.page_size": 4096,
"time.percent_of_cpu_this_job_got": 5707,
"time.system_time": 5272.97,
"time.user_time": 85.59,
"time.voluntary_context_switches": 2436,
"will-it-scale.128.processes": 28445014,
"will-it-scale.128.processes_idle": 33.89,
"will-it-scale.192.processes": 39899678,
"will-it-scale.192.processes_idle": 1.29,
"will-it-scale.64.processes": 15645502,
"will-it-scale.64.processes_idle": 66.75,
"will-it-scale.per_process_ops": 224832,
"will-it-scale.time.elapsed_time": 93.88,
"will-it-scale.time.elapsed_time.max": 93.88,
"will-it-scale.time.file_system_inputs": 2640,
"will-it-scale.time.file_system_outputs": 128,
"will-it-scale.time.involuntary_context_switches": 417738,
"will-it-scale.time.major_page_faults": 54,
"will-it-scale.time.maximum_resident_set_size": 90012,
"will-it-scale.time.minor_page_faults": 80569,
"will-it-scale.time.page_size": 4096,
"will-it-scale.time.percent_of_cpu_this_job_got": 5707,
"will-it-scale.time.system_time": 5272.97,
"will-it-scale.time.user_time": 85.59,
"will-it-scale.time.voluntary_context_switches": 2436,
"will-it-scale.workload": 83990194
}
2. Tested with 6.19-rc5, which includes sheaves but does not yet apply the
"sheaves for all" patchset.
{
"time.elapsed_time": 93.86000000000001,
"time.elapsed_time.max": 93.86000000000001,
"time.file_system_inputs": 1952,
"time.file_system_outputs": 160,
"time.involuntary_context_switches": 766225,
"time.major_page_faults": 50.666666666666664,
"time.maximum_resident_set_size": 90012,
"time.minor_page_faults": 80635,
"time.page_size": 4096,
"time.percent_of_cpu_this_job_got": 5738,
"time.system_time": 5251.88,
"time.user_time": 134.57666666666665,
"time.voluntary_context_switches": 2539,
"will-it-scale.128.processes": 38223543.333333336,
"will-it-scale.128.processes_idle": 33.833333333333336,
"will-it-scale.192.processes": 54039039,
"will-it-scale.192.processes_idle": 1.26,
"will-it-scale.64.processes": 20579207.666666668,
"will-it-scale.64.processes_idle": 66.74333333333334,
"will-it-scale.per_process_ops": 300541,
"will-it-scale.time.elapsed_time": 93.86000000000001,
"will-it-scale.time.elapsed_time.max": 93.86000000000001,
"will-it-scale.time.file_system_inputs": 1952,
"will-it-scale.time.file_system_outputs": 160,
"will-it-scale.time.involuntary_context_switches": 766225,
"will-it-scale.time.major_page_faults": 50.666666666666664,
"will-it-scale.time.maximum_resident_set_size": 90012,
"will-it-scale.time.minor_page_faults": 80635,
"will-it-scale.time.page_size": 4096,
"will-it-scale.time.percent_of_cpu_this_job_got": 5738,
"will-it-scale.time.system_time": 5251.88,
"will-it-scale.time.user_time": 134.57666666666665,
"will-it-scale.time.voluntary_context_switches": 2539,
"will-it-scale.workload": 112841790
}
3. Applied the "sheaves for all" patchset and also included the "avoid
list_lock contention" patch.
{
"time.elapsed_time": 93.86666666666667,
"time.elapsed_time.max": 93.86666666666667,
"time.file_system_inputs": 1800,
"time.file_system_outputs": 149.33333333333334,
"time.involuntary_context_switches": 421120,
"time.major_page_faults": 37,
"time.maximum_resident_set_size": 90016,
"time.minor_page_faults": 80645,
"time.page_size": 4096,
"time.percent_of_cpu_this_job_got": 5714.666666666667,
"time.system_time": 5256.176666666667,
"time.user_time": 108.88333333333333,
"time.voluntary_context_switches": 2513,
"will-it-scale.128.processes": 28067051.333333332,
"will-it-scale.128.processes_idle": 33.82,
"will-it-scale.192.processes": 38232965.666666664,
"will-it-scale.192.processes_idle": 1.2733333333333334,
"will-it-scale.64.processes": 15464041.333333334,
"will-it-scale.64.processes_idle": 66.76333333333334,
"will-it-scale.per_process_ops": 220009.33333333334,
"will-it-scale.time.elapsed_time": 93.86666666666667,
"will-it-scale.time.elapsed_time.max": 93.86666666666667,
"will-it-scale.time.file_system_inputs": 1800,
"will-it-scale.time.file_system_outputs": 149.33333333333334,
"will-it-scale.time.involuntary_context_switches": 421120,
"will-it-scale.time.major_page_faults": 37,
"will-it-scale.time.maximum_resident_set_size": 90016,
"will-it-scale.time.minor_page_faults": 80645,
"will-it-scale.time.page_size": 4096,
"will-it-scale.time.percent_of_cpu_this_job_got": 5714.666666666667,
"will-it-scale.time.system_time": 5256.176666666667,
"will-it-scale.time.user_time": 108.88333333333333,
"will-it-scale.time.voluntary_context_switches": 2513,
"will-it-scale.workload": 81764058.33333333
}
>
>
> Results:
>
> For scenario 2 (with sheaves but without "sheaves for all"), there is a
> noticeable performance improvement compared to scenario 1:
>
> will-it-scale.128.processes +34.3%
> will-it-scale.192.processes +35.4%
> will-it-scale.64.processes +31.5%
> will-it-scale.per_process_ops +33.7%
>
> For scenario 3 (after applying "sheaves for all"), performance slightly
> regressed compared to scenario 1:
>
> will-it-scale.128.processes -1.3%
> will-it-scale.192.processes -4.2%
> will-it-scale.64.processes -1.2%
> will-it-scale.per_process_ops -2.1%
>
> Analysis:
>
> So when the sheaf size for maple nodes is set to 32 by default, the performance
> of fully adopting the sheaves mechanism roughly matches the performance of the
> previous approach that relied solely on the percpu slab partial list.
>
> The performance regression observed with the "sheaves for all" patchset can
> actually be explained as follows: moving from scenario 1 to scenario 2
> introduces an additional cache layer, which boosts performance temporarily.
> When moving from scenario 2 to scenario 3, this additional cache layer is
> removed, then performance reverted to its original level.
>
> So I think the performance of the percpu partial list and the sheaves mechanism
> is roughly the same, which is consistent with our expectations.
>
> --
> Thanks,
> Hao
On Thu, Jan 29, 2026 at 04:28:01PM +0100, Vlastimil Babka wrote: > On 1/29/26 16:18, Hao Li wrote: > > Hi Vlastimil, > > > > I conducted a detailed performance evaluation of the each patch on my setup. > > Thanks! What was the benchmark(s) used? I'm currently using the mmap2 test case from will-it-scale. The machine is still an AMD 2-socket system, with 2 nodes per socket, totaling 192 CPUs, with SMT disabled. For each test run, I used 64, 128, and 192 processes respectively. > Importantly, does it rely on vma/maple_node objects? Yes, this test primarily puts a lot of pressure on maple_node. > So previously those would become kind of double > cached by both sheaves and cpu (partial) slabs (and thus hopefully benefited > more than they should) since sheaves introduction in 6.18, and now they are > not double cached anymore? Exactly, since version 6.18, maple_node has indeed benefited from a dual-layer cache. I did wonder if this isn't a performance regression but rather the performance returning to its baseline after removing one layer of caching. However, verifying this idea would require completely disabling the sheaf mechanism on version 6.19-rc5 while leaving the rest of the SLUB code untouched. It would be great to hear any suggestions on how this might be approached. > > > During my tests, I observed two points in the series where performance > > regressions occurred: > > > > Patch 10: I noticed a ~16% regression in my environment. My hypothesis is > > that with this patch, the allocation fast path bypasses the percpu partial > > list, leading to increased contention on the node list. > > That makes sense. > > > Patch 12: This patch seems to introduce an additional ~9.7% regression. I > > suspect this might be because the free path also loses buffering from the > > percpu partial list, further exacerbating node list contention. > > Hmm yeah... we did put the previously full slabs there, avoiding the lock. > > > These are the only two patches in the series where I observed noticeable > > regressions. The rest of the patches did not show significant performance > > changes in my tests. > > > > I hope these test results are helpful. > > They are, thanks. I'd however hope it's just some particular test that has > these regressions, Yes, I hope so too. And the mmap2 test case is indeed quite extreme. > which can be explained by the loss of double caching. If we could compare it with a version that only uses the CPU partial list, the answer might become clearer.
* Hao Li <hao.li@linux.dev> [260129 11:07]:
> On Thu, Jan 29, 2026 at 04:28:01PM +0100, Vlastimil Babka wrote:
> > On 1/29/26 16:18, Hao Li wrote:
> > > Hi Vlastimil,
> > >
> > > I conducted a detailed performance evaluation of each patch on my setup.
> >
> > Thanks! What was the benchmark(s) used?

Yes, Thank you for running the benchmarks!

>
> I'm currently using the mmap2 test case from will-it-scale. The machine is still
> an AMD 2-socket system, with 2 nodes per socket, totaling 192 CPUs, with SMT
> disabled. For each test run, I used 64, 128, and 192 processes respectively.

What about the other tests you ran in the detailed evaluation, were
there other regressions? It might be worth including the list of tests
that showed issues and some of the raw results (maybe at the end of your
email) to show what you saw more clearly. I did notice you had done
this previously.

Was the regression in the threaded or processes version of mmap2?

> > Importantly, does it rely on vma/maple_node objects?
>
> Yes, this test primarily puts a lot of pressure on maple_node.
>
> > So previously those would become kind of double
> > cached by both sheaves and cpu (partial) slabs (and thus hopefully benefited
> > more than they should) since sheaves introduction in 6.18, and now they are
> > not double cached anymore?
>
> Exactly, since version 6.18, maple_node has indeed benefited from a dual-layer
> cache.
>
> I did wonder if this isn't a performance regression but rather the
> performance returning to its baseline after removing one layer of caching.
>
> However, verifying this idea would require completely disabling the sheaf
> mechanism on version 6.19-rc5 while leaving the rest of the SLUB code untouched.
> It would be great to hear any suggestions on how this might be approached.

You could use perf record to capture the differences on the two kernels.
You could also use perf to look at the differences between three kernel
versions:
1. pre-sheaves entirely
2. the 'dual layer' cache
3. The final version

In these scenarios, it's not worth looking at the numbers, but just the
differences, since the debug required to get meaningful information makes
the results hugely slow and, potentially, not as consistent. Sometimes
I run them multiple times to ensure what I'm seeing makes sense for a
particular comparison (and the server didn't just rotate the logs or
whatever..)

> > > During my tests, I observed two points in the series where performance
> > > regressions occurred:
> > >
> > > Patch 10: I noticed a ~16% regression in my environment. My hypothesis is
> > > that with this patch, the allocation fast path bypasses the percpu partial
> > > list, leading to increased contention on the node list.
> >
> > That makes sense.
> >
> > > Patch 12: This patch seems to introduce an additional ~9.7% regression. I
> > > suspect this might be because the free path also loses buffering from the
> > > percpu partial list, further exacerbating node list contention.
> >
> > Hmm yeah... we did put the previously full slabs there, avoiding the lock.
> >
> > > These are the only two patches in the series where I observed noticeable
> > > regressions. The rest of the patches did not show significant performance
> > > changes in my tests.
> > >
> > > I hope these test results are helpful.
> >
> > They are, thanks. I'd however hope it's just some particular test that has
> > these regressions,
>
> Yes, I hope so too. And the mmap2 test case is indeed quite extreme.
>
> > which can be explained by the loss of double caching.
>
> If we could compare it with a version that only uses the
> CPU partial list, the answer might become clearer.

In my experience, micro-benchmarks are good at identifying specific
failure points of a patch set, but unless an entire area of benchmarks
regress (ie all mmap threaded), then they rarely tell the whole story.

Are the benchmarks consistently slower? This specific test is sensitive
to alignment because of the 128MB mmap/munmap operation. Sometimes, you
will see a huge spike at a particular process/thread count that moves
around in tests like this. Was your run consistently lower?

Thanks,
Liam
On Thu, Jan 29, 2026 at 11:44:21AM -0500, Liam R. Howlett wrote:
> * Hao Li <hao.li@linux.dev> [260129 11:07]:
> > On Thu, Jan 29, 2026 at 04:28:01PM +0100, Vlastimil Babka wrote:
> > > On 1/29/26 16:18, Hao Li wrote:
> > > > Hi Vlastimil,
> > > >
> > > > I conducted a detailed performance evaluation of each patch on my setup.
> > >
> > > Thanks! What was the benchmark(s) used?
>
> Yes, Thank you for running the benchmarks!
>
> >
> > I'm currently using the mmap2 test case from will-it-scale. The machine is still
> > an AMD 2-socket system, with 2 nodes per socket, totaling 192 CPUs, with SMT
> > disabled. For each test run, I used 64, 128, and 192 processes respectively.
>
> What about the other tests you ran in the detailed evaluation, were
> there other regressions? It might be worth including the list of tests
> that showed issues and some of the raw results (maybe at the end of your
> email) to show what you saw more clearly. I did notice you had done
> this previously.

Hi, Liam

I only ran the mmap2 use case of will-it-scale. And now I have some new
test results, and I will share the raw data later.

> Was the regression in the threaded or processes version of mmap2?

It's the processes version.

> > > Importantly, does it rely on vma/maple_node objects?
> >
> > Yes, this test primarily puts a lot of pressure on maple_node.
> >
> > > So previously those would become kind of double
> > > cached by both sheaves and cpu (partial) slabs (and thus hopefully benefited
> > > more than they should) since sheaves introduction in 6.18, and now they are
> > > not double cached anymore?
> >
> > Exactly, since version 6.18, maple_node has indeed benefited from a dual-layer
> > cache.
> >
> > I did wonder if this isn't a performance regression but rather the
> > performance returning to its baseline after removing one layer of caching.
> >
> > However, verifying this idea would require completely disabling the sheaf
> > mechanism on version 6.19-rc5 while leaving the rest of the SLUB code untouched.
> > It would be great to hear any suggestions on how this might be approached.
>
> You could use perf record to capture the differences on the two kernels.
> You could also use perf to look at the differences between three kernel
> versions:
> 1. pre-sheaves entirely
> 2. the 'dual layer' cache
> 3. The final version

That's right, this is exactly the test I just completed. I will send a
separate email later.

> In these scenarios, it's not worth looking at the numbers, but just the
> differences, since the debug required to get meaningful information makes
> the results hugely slow and, potentially, not as consistent. Sometimes
> I run them multiple times to ensure what I'm seeing makes sense for a
> particular comparison (and the server didn't just rotate the logs or
> whatever..)

Yes, that's right. This is important. I also ran it multiple times to
observe data stability and took the average value.

> > > > During my tests, I observed two points in the series where performance
> > > > regressions occurred:
> > > >
> > > > Patch 10: I noticed a ~16% regression in my environment. My hypothesis is
> > > > that with this patch, the allocation fast path bypasses the percpu partial
> > > > list, leading to increased contention on the node list.
> > >
> > > That makes sense.
> > >
> > > > Patch 12: This patch seems to introduce an additional ~9.7% regression. I
> > > > suspect this might be because the free path also loses buffering from the
> > > > percpu partial list, further exacerbating node list contention.
> > >
> > > Hmm yeah... we did put the previously full slabs there, avoiding the lock.
> > >
> > > > These are the only two patches in the series where I observed noticeable
> > > > regressions. The rest of the patches did not show significant performance
> > > > changes in my tests.
> > > >
> > > > I hope these test results are helpful.
> > >
> > > They are, thanks. I'd however hope it's just some particular test that has
> > > these regressions,
> >
> > Yes, I hope so too. And the mmap2 test case is indeed quite extreme.
> >
> > > which can be explained by the loss of double caching.
> >
> > If we could compare it with a version that only uses the
> > CPU partial list, the answer might become clearer.
>
> In my experience, micro-benchmarks are good at identifying specific
> failure points of a patch set, but unless an entire area of benchmarks
> regress (ie all mmap threaded), then they rarely tell the whole story.

Yes. This makes sense to me.

> Are the benchmarks consistently slower? This specific test is sensitive
> to alignment because of the 128MB mmap/munmap operation. Sometimes, you
> will see a huge spike at a particular process/thread count that moves
> around in tests like this. Was your run consistently lower?

Yes, my test results have been quite stable, probably because the machine
was relatively idle.

Thanks for your reply and the discussion!

--
Thanks,
Hao

> Thanks,
> Liam