The changes related to sheaves made the description of locking and other
details outdated. Update it to reflect current state.
Also add a new copyright line due to major changes.
Reviewed-by: Suren Baghdasaryan <surenb@google.com>
Signed-off-by: Vlastimil Babka <vbabka@suse.cz>
---
mm/slub.c | 141 +++++++++++++++++++++++++++++---------------------------------
1 file changed, 67 insertions(+), 74 deletions(-)
diff --git a/mm/slub.c b/mm/slub.c
index 2c522d2bf547..476a279f1a94 100644
--- a/mm/slub.c
+++ b/mm/slub.c
@@ -1,13 +1,15 @@
// SPDX-License-Identifier: GPL-2.0
/*
- * SLUB: A slab allocator that limits cache line use instead of queuing
- * objects in per cpu and per node lists.
+ * SLUB: A slab allocator with low overhead percpu array caches and mostly
+ * lockless freeing of objects to slabs in the slowpath.
*
- * The allocator synchronizes using per slab locks or atomic operations
- * and only uses a centralized lock to manage a pool of partial slabs.
+ * The allocator synchronizes using local_trylock for percpu arrays in the
+ * fastpath, and cmpxchg_double (or bit spinlock) for slowpath freeing.
+ * Uses a centralized lock to manage a pool of partial slabs.
*
* (C) 2007 SGI, Christoph Lameter
* (C) 2011 Linux Foundation, Christoph Lameter
+ * (C) 2025 SUSE, Vlastimil Babka
*/
#include <linux/mm.h>
@@ -53,11 +55,13 @@
/*
* Lock order:
- * 1. slab_mutex (Global Mutex)
- * 2. node->list_lock (Spinlock)
- * 3. kmem_cache->cpu_slab->lock (Local lock)
- * 4. slab_lock(slab) (Only on some arches)
- * 5. object_map_lock (Only for debugging)
+ * 0. cpu_hotplug_lock
+ * 1. slab_mutex (Global Mutex)
+ * 2a. kmem_cache->cpu_sheaves->lock (Local trylock)
+ * 2b. node->barn->lock (Spinlock)
+ * 2c. node->list_lock (Spinlock)
+ * 3. slab_lock(slab) (Only on some arches)
+ * 4. object_map_lock (Only for debugging)
*
* slab_mutex
*
@@ -78,31 +82,38 @@
* C. slab->objects -> Number of objects in slab
* D. slab->frozen -> frozen state
*
- * Frozen slabs
+ * SL_partial slabs
+ *
+ * Slabs on node partial list have at least one free object. A limited number
+ * of slabs on the list can be fully free (slab->inuse == 0), until we start
+ * discarding them. These slabs are marked with SL_partial, and the flag is
+ * cleared while removing them, usually to grab their freelist afterwards.
+ * This clearing also exempts them from list management. Please see
+ * __slab_free() for more details.
*
- * If a slab is frozen then it is exempt from list management. It is
- * the cpu slab which is actively allocated from by the processor that
- * froze it and it is not on any list. The processor that froze the
- * slab is the one who can perform list operations on the slab. Other
- * processors may put objects onto the freelist but the processor that
- * froze the slab is the only one that can retrieve the objects from the
- * slab's freelist.
+ * Full slabs
*
- * CPU partial slabs
+ * For caches without debugging enabled, full slabs (slab->inuse ==
+ * slab->objects and slab->freelist == NULL) are not placed on any list.
+ * The __slab_free() freeing the first object from such a slab will place
+ * it on the partial list. Caches with debugging enabled place such slabs
+ * on the full list and use different allocation and freeing paths.
+ *
+ * Frozen slabs
*
- * The partially empty slabs cached on the CPU partial list are used
- * for performance reasons, which speeds up the allocation process.
- * These slabs are not frozen, but are also exempt from list management,
- * by clearing the SL_partial flag when moving out of the node
- * partial list. Please see __slab_free() for more details.
+ * If a slab is frozen then it is exempt from list management. It is used to
+ * indicate a slab that has failed consistency checks and thus cannot be
+ * allocated from anymore - it is also marked as full. Any previously
+ * allocated objects will be simply leaked upon freeing instead of attempting
+ * to modify the potentially corrupted freelist and metadata.
*
* To sum up, the current scheme is:
- * - node partial slab: SL_partial && !frozen
- * - cpu partial slab: !SL_partial && !frozen
- * - cpu slab: !SL_partial && frozen
- * - full slab: !SL_partial && !frozen
+ * - node partial slab: SL_partial && !full && !frozen
+ * - taken off partial list: !SL_partial && !full && !frozen
+ * - full slab, not on any list: !SL_partial && full && !frozen
+ * - frozen due to inconsistency: !SL_partial && full && frozen
*
- * list_lock
+ * node->list_lock (spinlock)
*
* The list_lock protects the partial and full list on each node and
* the partial slab counter. If taken then no new slabs may be added or
@@ -112,47 +123,46 @@
*
* The list_lock is a centralized lock and thus we avoid taking it as
* much as possible. As long as SLUB does not have to handle partial
- * slabs, operations can continue without any centralized lock. F.e.
- * allocating a long series of objects that fill up slabs does not require
- * the list lock.
+ * slabs, operations can continue without any centralized lock.
*
* For debug caches, all allocations are forced to go through a list_lock
* protected region to serialize against concurrent validation.
*
- * cpu_slab->lock local lock
+ * cpu_sheaves->lock (local_trylock)
*
- * This locks protect slowpath manipulation of all kmem_cache_cpu fields
- * except the stat counters. This is a percpu structure manipulated only by
- * the local cpu, so the lock protects against being preempted or interrupted
- * by an irq. Fast path operations rely on lockless operations instead.
+ * This lock protects fastpath operations on the percpu sheaves. On !RT it
+ * only disables preemption and does no atomic operations. As long as the main
+ * or spare sheaf can handle the allocation or free, there is no other
+ * overhead.
*
- * On PREEMPT_RT, the local lock neither disables interrupts nor preemption
- * which means the lockless fastpath cannot be used as it might interfere with
- * an in-progress slow path operations. In this case the local lock is always
- * taken but it still utilizes the freelist for the common operations.
+ * node->barn->lock (spinlock)
*
- * lockless fastpaths
+ * This lock protects the operations on the per-NUMA-node barn, which can
+ * quickly serve an empty or full sheaf if available and avoid a more
+ * expensive refill or flush operation.
*
- * The fast path allocation (slab_alloc_node()) and freeing (do_slab_free())
- * are fully lockless when satisfied from the percpu slab (and when
- * cmpxchg_double is possible to use, otherwise slab_lock is taken).
- * They also don't disable preemption or migration or irqs. They rely on
- * the transaction id (tid) field to detect being preempted or moved to
- * another cpu.
+ * Lockless freeing
+ *
+ * Objects may have to be freed to their slabs when they are from a remote
+ * node (where we want to avoid filling local sheaves with remote objects)
+ * or when there are too many full sheaves. On architectures supporting
+ * cmpxchg_double this is done by a lockless update of slab's freelist and
+ * counters, otherwise slab_lock is taken. This only needs to take the
+ * list_lock if it's a first free to a full slab, or when there are too many
+ * fully free slabs and some need to be discarded.
*
* irq, preemption, migration considerations
*
- * Interrupts are disabled as part of list_lock or local_lock operations, or
+ * Interrupts are disabled as part of list_lock or barn lock operations, or
* around the slab_lock operation, in order to make the slab allocator safe
* to use in the context of an irq.
+ * Preemption is disabled as part of local_trylock operations.
+ * kmalloc_nolock() and kfree_nolock() are safe in NMI context, but see
+ * their documentation for limitations.
*
- * In addition, preemption (or migration on PREEMPT_RT) is disabled in the
- * allocation slowpath, bulk allocation, and put_cpu_partial(), so that the
- * local cpu doesn't change in the process and e.g. the kmem_cache_cpu pointer
- * doesn't have to be revalidated in each section protected by the local lock.
- *
- * SLUB assigns one slab for allocation to each processor.
- * Allocations only occur from these slabs called cpu slabs.
+ * SLUB assigns two object arrays called sheaves for caching allocation and
+ * frees on each cpu, with a NUMA node shared barn for balancing between cpus.
+ * Allocations and frees are primarily served from these sheaves.
*
* Slabs with free elements are kept on a partial list and during regular
* operations no list for full slabs is used. If an object in a full slab is
@@ -160,25 +170,8 @@
* We track full slabs for debugging purposes though because otherwise we
* cannot scan all objects.
*
- * Slabs are freed when they become empty. Teardown and setup is
- * minimal so we rely on the page allocators per cpu caches for
- * fast frees and allocs.
- *
- * slab->frozen The slab is frozen and exempt from list processing.
- * This means that the slab is dedicated to a purpose
- * such as satisfying allocations for a specific
- * processor. Objects may be freed in the slab while
- * it is frozen but slab_free will then skip the usual
- * list operations. It is up to the processor holding
- * the slab to integrate the slab into the slab lists
- * when the slab is no longer needed.
- *
- * One use of this flag is to mark slabs that are
- * used for allocations. Then such a slab becomes a cpu
- * slab. The cpu slab may be equipped with an additional
- * freelist that allows lockless access to
- * free objects in addition to the regular freelist
- * that requires the slab lock.
+ * Slabs are freed when they become empty. Teardown and setup are minimal so
+ * we rely on the page allocator's per cpu caches for fast frees and allocs.
*
* SLAB_DEBUG_FLAGS Slab requires special handling due to debug
* options set. This moves slab handling out of
--
2.52.0
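
For readers new to sheaves, the fastpath the updated comment describes can
be pictured with a short userspace sketch. This is a toy model, not the
mm/slub.c code: the names below (struct toy_sheaf, SHEAF_CAPACITY) are made
up for illustration, the real implementation uses local_trylock() on
kmem_cache->cpu_sheaves rather than a pthread mutex, and the slowpath
fallbacks are only hinted at in comments.

/*
 * Toy model of the percpu sheaf fastpath: a bounded array of cached
 * object pointers manipulated only under a trylock, so a caller that
 * cannot spin (e.g. the NMI-safe kmalloc_nolock()) can detect
 * contention and fall back to the slowpath instead of deadlocking.
 */
#include <pthread.h>
#include <stdbool.h>
#include <stddef.h>

#define SHEAF_CAPACITY 32		/* a per-cache tunable in the real code */

struct toy_sheaf {
	pthread_mutex_t lock;		/* stand-in for the local_trylock */
	size_t size;			/* number of cached objects */
	void *objects[SHEAF_CAPACITY];
};

/* Fastpath allocation: pop a cached object, or defer to the slowpath. */
static void *toy_sheaf_alloc(struct toy_sheaf *s)
{
	void *obj = NULL;

	if (pthread_mutex_trylock(&s->lock) != 0)
		return NULL;	/* contended: caller takes the slowpath */
	if (s->size > 0)
		obj = s->objects[--s->size];
	pthread_mutex_unlock(&s->lock);
	return obj;		/* NULL: refill from the barn or slabs */
}

/* Fastpath free: stash the object locally if there is room. */
static bool toy_sheaf_free(struct toy_sheaf *s, void *obj)
{
	bool cached = false;

	if (pthread_mutex_trylock(&s->lock) != 0)
		return false;	/* contended: caller takes the slowpath */
	if (s->size < SHEAF_CAPACITY) {
		s->objects[s->size++] = obj;
		cached = true;
	}
	pthread_mutex_unlock(&s->lock);
	return cached;		/* false: flush a full sheaf to the barn */
}

The point of the trylock is visible in both paths: the lock is never waited
on, so on !RT the only cost of an uncontended fastpath is the preemption
disabling that local_trylock does.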
On Fri, Jan 16, 2026 at 03:40:38PM +0100, Vlastimil Babka wrote:
> The changes related to sheaves made the description of locking and other
> details outdated. Update it to reflect current state.
>
> Also add a new copyright line due to major changes.
>
> Reviewed-by: Suren Baghdasaryan <surenb@google.com>
> Signed-off-by: Vlastimil Babka <vbabka@suse.cz>
> ---
> @@ -112,47 +123,46 @@
> + * node->barn->lock (spinlock)
>  *
> - * lockless fastpaths
> + * Lockless freeing
> + *
> + * Objects may have to be freed to their slabs when they are from a remote
> + * node (where we want to avoid filling local sheaves with remote objects)
> + * or when there are too many full sheaves. On architectures supporting
> + * cmpxchg_double this is done by a lockless update of slab's freelist and
> + * counters, otherwise slab_lock is taken. This only needs to take the
> + * list_lock if it's a first free to a full slab, or when there are too many
> + * fully free slabs and some need to be discarded.

nit: "or when a slab becomes empty after the free"?
because we don't check nr_partial before acquiring list_lock.

With that addressed,
Reviewed-by: Harry Yoo <harry.yoo@oracle.com>

--
Cheers,
Harry / Hyeonggon
On 1/22/26 07:41, Harry Yoo wrote:
> On Fri, Jan 16, 2026 at 03:40:38PM +0100, Vlastimil Babka wrote:
>> + * Lockless freeing
>> + *
>> + * Objects may have to be freed to their slabs when they are from a remote
>> + * node (where we want to avoid filling local sheaves with remote objects)
>> + * or when there are too many full sheaves. On architectures supporting
>> + * cmpxchg_double this is done by a lockless update of slab's freelist and
>> + * counters, otherwise slab_lock is taken. This only needs to take the
>> + * list_lock if it's a first free to a full slab, or when there are too many
>> + * fully free slabs and some need to be discarded.
>
> nit: "or when a slab becomes empty after the free"?
> because we don't check nr_partial before acquiring list_lock.
>
> With that addressed,
> Reviewed-by: Harry Yoo <harry.yoo@oracle.com>

Good point, thanks!
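
The condition Harry points out can be made concrete with a sketch of the
free slowpath's locking decision. Again a toy userspace model with invented
names (struct toy_slab, toy_slab_free()), not the kernel code: the freelist
head (as an object index) and the inuse count are packed into one 64-bit
word so both can move in a single compare-and-exchange, standing in for the
real cmpxchg_double on slab->freelist and slab->counters.

#include <stdatomic.h>
#include <stdbool.h>
#include <stdint.h>

#define TOY_SLAB_OBJECTS 64

struct toy_slab {
	_Atomic uint64_t state;		 /* freelist head << 32 | inuse */
	uint32_t next[TOY_SLAB_OBJECTS]; /* per-object freelist links */
};

static inline uint64_t toy_pack(uint32_t head, uint32_t inuse)
{
	return ((uint64_t)head << 32) | inuse;
}

/*
 * Free one object with a single atomic update of the packed word;
 * returns true when the caller must take the node's list_lock: either
 * the slab was full before this free (it has to be put back on the
 * partial list), or it became empty after it (it may have to be taken
 * off the list and discarded) - the case the nit above is about.
 */
static bool toy_slab_free(struct toy_slab *slab, uint32_t obj)
{
	uint64_t old = atomic_load(&slab->state);
	uint64_t new;
	uint32_t inuse;

	do {
		inuse = (uint32_t)old;
		slab->next[obj] = (uint32_t)(old >> 32); /* link onto freelist */
		new = toy_pack(obj, inuse - 1);
	} while (!atomic_compare_exchange_weak(&slab->state, &old, new));

	return inuse == TOY_SLAB_OBJECTS || inuse == 1;
}

Note how the freelist link is rewritten on every retry: the failed
compare-and-exchange refreshes "old", so the object is always linked onto
the freelist head that the successful update will publish.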
On Fri, Jan 16, 2026 at 03:40:38PM +0100, Vlastimil Babka wrote:
> The changes related to sheaves made the description of locking and other
> details outdated. Update it to reflect current state.
>
> Also add a new copyright line due to major changes.
>
> Reviewed-by: Suren Baghdasaryan <surenb@google.com>
> Signed-off-by: Vlastimil Babka <vbabka@suse.cz>
> ---
> mm/slub.c | 141 +++++++++++++++++++++++++++++---------------------------------
> 1 file changed, 67 insertions(+), 74 deletions(-)
>

Looks good to me.

Reviewed-by: Hao Li <hao.li@linux.dev>

--
Thanks,
Hao
On Fri, Jan 16, 2026 at 2:41 PM Vlastimil Babka <vbabka@suse.cz> wrote:
>
> The changes related to sheaves made the description of locking and other
> details outdated. Update it to reflect current state.
>
> Also add a new copyright line due to major changes.
>
> Reviewed-by: Suren Baghdasaryan <surenb@google.com>
> Signed-off-by: Vlastimil Babka <vbabka@suse.cz>

Reviewed-by: Suren Baghdasaryan <surenb@google.com>

> ---
> mm/slub.c | 141 +++++++++++++++++++++++++++++---------------------------------
> 1 file changed, 67 insertions(+), 74 deletions(-)

[rest of the quoted patch snipped, except for one nit below]

> + * SLUB assigns two object arrays called sheaves for caching allocation and

s/allocation/allocations

> + * frees on each cpu, with a NUMA node shared barn for balancing between cpus.
> + * Allocations and frees are primarily served from these sheaves.
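
Finally, the barn that the quoted sheaves paragraph refers to is essentially
two counted stacks of sheaves behind a spinlock. A toy sketch with made-up
names and limits (the real per-node structure, its capacity checks and its
helpers differ), just to show why swapping whole sheaves is cheap:

#include <pthread.h>
#include <stddef.h>

#define TOY_BARN_SHEAVES 8	/* the real barn caps full/empty counts */

struct toy_barn {
	pthread_spinlock_t lock;	/* node->barn->lock in the comment */
	size_t nr_full, nr_empty;
	void *full[TOY_BARN_SHEAVES];	/* sheaves full of objects */
	void *empty[TOY_BARN_SHEAVES];	/* sheaves with no objects */
};

/*
 * A cpu whose main sheaf just ran dry trades it for a full one, if the
 * barn has any - much cheaper than refilling object by object from the
 * partial slabs under list_lock.
 */
static void *toy_barn_swap_empty(struct toy_barn *barn, void *empty_sheaf)
{
	void *full_sheaf = NULL;

	pthread_spin_lock(&barn->lock);
	if (barn->nr_full > 0 && barn->nr_empty < TOY_BARN_SHEAVES) {
		barn->empty[barn->nr_empty++] = empty_sheaf;
		full_sheaf = barn->full[--barn->nr_full];
	}
	pthread_spin_unlock(&barn->lock);

	return full_sheaf;	/* NULL: fall back to refilling from slabs */
}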