From nobody Tue Feb 10 14:32:09 2026
From: Vlastimil Babka
Date: Fri, 23 Jan 2026 07:52:52 +0100
Subject: [PATCH v4 14/22] slab: remove defer_deactivate_slab()
X-Mailing-List: linux-kernel@vger.kernel.org
Message-Id: <20260123-sheaves-for-all-v4-14-041323d506f7@suse.cz>
References: <20260123-sheaves-for-all-v4-0-041323d506f7@suse.cz>
In-Reply-To: <20260123-sheaves-for-all-v4-0-041323d506f7@suse.cz>
To: Harry Yoo, Petr Tesarik, Christoph Lameter, David Rientjes, Roman Gushchin
Cc: Hao Li, Andrew Morton, Uladzislau Rezki, "Liam R. Howlett", Suren Baghdasaryan,
 Sebastian Andrzej Siewior, Alexei Starovoitov, linux-mm@kvack.org,
 linux-kernel@vger.kernel.org, linux-rt-devel@lists.linux.dev, bpf@vger.kernel.org,
 kasan-dev@googlegroups.com, Vlastimil Babka

There are no more cpu slabs so we don't need their deferred deactivation.
The function is now only used from places where we allocate a new slab but
then can't spin on node list_lock to put it on the partial list. Instead of
the deferred action we can free it directly via __free_slab(), we just need
to tell it to use _nolock() freeing of the underlying pages and take care
of the accounting.

Since free_frozen_pages_nolock() variant does not yet exist for code
outside of the page allocator, create it as a trivial wrapper for
__free_frozen_pages(..., FPI_TRYLOCK).

Reviewed-by: Harry Yoo
Reviewed-by: Hao Li
Reviewed-by: Suren Baghdasaryan
Signed-off-by: Vlastimil Babka
Acked-by: Alexei Starovoitov
---
 mm/internal.h   |  1 +
 mm/page_alloc.c |  5 +++++
 mm/slab.h       |  8 +-------
 mm/slub.c       | 58 +++++++++++++++++++++++++++++++++-------------------------
 4 files changed, 28 insertions(+), 44 deletions(-)

diff --git a/mm/internal.h b/mm/internal.h
index e430da900430..1f44ccb4badf 100644
--- a/mm/internal.h
+++ b/mm/internal.h
@@ -846,6 +846,7 @@ static inline struct page *alloc_frozen_pages_noprof(gfp_t gfp, unsigned int ord
 struct page *alloc_frozen_pages_nolock_noprof(gfp_t gfp_flags, int nid, unsigned int order);
 #define alloc_frozen_pages_nolock(...)				\
 	alloc_hooks(alloc_frozen_pages_nolock_noprof(__VA_ARGS__))
+void free_frozen_pages_nolock(struct page *page, unsigned int order);
 
 extern void zone_pcp_reset(struct zone *zone);
 extern void zone_pcp_disable(struct zone *zone);
diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index c380f063e8b7..0127e9d661ad 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -2981,6 +2981,11 @@ void free_frozen_pages(struct page *page, unsigned int order)
 	__free_frozen_pages(page, order, FPI_NONE);
 }
 
+void free_frozen_pages_nolock(struct page *page, unsigned int order)
+{
+	__free_frozen_pages(page, order, FPI_TRYLOCK);
+}
+
 /*
  * Free a batch of folios
  */
diff --git a/mm/slab.h b/mm/slab.h
index 0fbe13bec864..37090a7dffb6 100644
--- a/mm/slab.h
+++ b/mm/slab.h
@@ -71,13 +71,7 @@ struct slab {
 	struct kmem_cache *slab_cache;
 	union {
 		struct {
-			union {
-				struct list_head slab_list;
-				struct { /* For deferred deactivate_slab() */
-					struct llist_node llnode;
-					void *flush_freelist;
-				};
-			};
+			struct list_head slab_list;
 			/* Double-word boundary */
 			struct freelist_counters;
 		};
diff --git a/mm/slub.c b/mm/slub.c
index a63a0eed2c55..82950c2bc26d 100644
--- a/mm/slub.c
+++ b/mm/slub.c
@@ -3262,7 +3262,7 @@ static struct slab *new_slab(struct kmem_cache *s, gfp_t flags, int node)
 		flags & (GFP_RECLAIM_MASK | GFP_CONSTRAINT_MASK), node);
 }
 
-static void __free_slab(struct kmem_cache *s, struct slab *slab)
+static void __free_slab(struct kmem_cache *s, struct slab *slab, bool allow_spin)
 {
 	struct page *page = slab_page(slab);
 	int order = compound_order(page);
@@ -3273,14 +3273,26 @@ static void __free_slab(struct kmem_cache *s, struct slab *slab)
 	__ClearPageSlab(page);
 	mm_account_reclaimed_pages(pages);
 	unaccount_slab(slab, order, s);
-	free_frozen_pages(page, order);
+	if (allow_spin)
+		free_frozen_pages(page, order);
+	else
+		free_frozen_pages_nolock(page, order);
+}
+
+static void free_new_slab_nolock(struct kmem_cache *s, struct slab *slab)
+{
+	/*
+	 * Since it was just allocated, we can skip the actions in
+	 * discard_slab() and free_slab().
+	 */
+	__free_slab(s, slab, false);
 }
 
 static void rcu_free_slab(struct rcu_head *h)
 {
 	struct slab *slab = container_of(h, struct slab, rcu_head);
 
-	__free_slab(slab->slab_cache, slab);
+	__free_slab(slab->slab_cache, slab, true);
 }
 
 static void free_slab(struct kmem_cache *s, struct slab *slab)
@@ -3296,7 +3308,7 @@ static void free_slab(struct kmem_cache *s, struct slab *slab)
 	if (unlikely(s->flags & SLAB_TYPESAFE_BY_RCU))
 		call_rcu(&slab->rcu_head, rcu_free_slab);
 	else
-		__free_slab(s, slab);
+		__free_slab(s, slab, true);
 }
 
 static void discard_slab(struct kmem_cache *s, struct slab *slab)
@@ -3389,8 +3401,6 @@ static void *alloc_single_from_partial(struct kmem_cache *s,
 	return object;
 }
 
-static void defer_deactivate_slab(struct slab *slab, void *flush_freelist);
-
 /*
  * Called only for kmem_cache_debug() caches to allocate from a freshly
  * allocated slab. Allocate a single object instead of whole freelist
@@ -3406,8 +3416,8 @@ static void *alloc_single_from_new_slab(struct kmem_cache *s, struct slab *slab,
 	void *object;
 
 	if (!allow_spin && !spin_trylock_irqsave(&n->list_lock, flags)) {
-		/* Unlucky, discard newly allocated slab */
-		defer_deactivate_slab(slab, NULL);
+		/* Unlucky, discard newly allocated slab. */
+		free_new_slab_nolock(s, slab);
 		return NULL;
 	}
 
@@ -4279,7 +4289,7 @@ static unsigned int alloc_from_new_slab(struct kmem_cache *s, struct slab *slab,
 
 	if (!spin_trylock_irqsave(&n->list_lock, flags)) {
 		/* Unlucky, discard newly allocated slab */
-		defer_deactivate_slab(slab, NULL);
+		free_new_slab_nolock(s, slab);
 		return 0;
 	}
 }
@@ -6056,7 +6066,6 @@ static void free_to_pcs_bulk(struct kmem_cache *s, size_t size, void **p)
 
 struct defer_free {
 	struct llist_head objects;
-	struct llist_head slabs;
 	struct irq_work work;
 };
 
@@ -6064,23 +6073,21 @@ static void free_deferred_objects(struct irq_work *work);
 
 static DEFINE_PER_CPU(struct defer_free, defer_free_objects) = {
 	.objects = LLIST_HEAD_INIT(objects),
-	.slabs = LLIST_HEAD_INIT(slabs),
 	.work = IRQ_WORK_INIT(free_deferred_objects),
 };
 
 /*
  * In PREEMPT_RT irq_work runs in per-cpu kthread, so it's safe
- * to take sleeping spin_locks from __slab_free() and deactivate_slab().
+ * to take sleeping spin_locks from __slab_free().
  * In !PREEMPT_RT irq_work will run after local_unlock_irqrestore().
  */
 static void free_deferred_objects(struct irq_work *work)
 {
 	struct defer_free *df = container_of(work, struct defer_free, work);
 	struct llist_head *objs = &df->objects;
-	struct llist_head *slabs = &df->slabs;
 	struct llist_node *llnode, *pos, *t;
 
-	if (llist_empty(objs) && llist_empty(slabs))
+	if (llist_empty(objs))
 		return;
 
 	llnode = llist_del_all(objs);
@@ -6104,16 +6111,6 @@ static void free_deferred_objects(struct irq_work *work)
 
 		__slab_free(s, slab, x, x, 1, _THIS_IP_);
 	}
-
-	llnode = llist_del_all(slabs);
-	llist_for_each_safe(pos, t, llnode) {
-		struct slab *slab = container_of(pos, struct slab, llnode);
-
-		if (slab->frozen)
-			deactivate_slab(slab->slab_cache, slab, slab->flush_freelist);
-		else
-			free_slab(slab->slab_cache, slab);
-	}
 }
 
 static void defer_free(struct kmem_cache *s, void *head)
@@ -6129,19 +6126,6 @@ static void defer_free(struct kmem_cache *s, void *head)
 	irq_work_queue(&df->work);
 }
 
-static void defer_deactivate_slab(struct slab *slab, void *flush_freelist)
-{
-	struct defer_free *df;
-
-	slab->flush_freelist = flush_freelist;
-
-	guard(preempt)();
-
-	df = this_cpu_ptr(&defer_free_objects);
-	if (llist_add(&slab->llnode, &df->slabs))
-		irq_work_queue(&df->work);
-}
-
 void defer_free_barrier(void)
 {
 	int cpu;
-- 
2.52.0