From: Vlastimil Babka
Date: Thu, 23 Oct 2025 15:52:34 +0200
Subject: [PATCH RFC 12/19] slab: remove the do_slab_free() fastpath
Message-Id: <20251023-sheaves-for-all-v1-12-6ffa2c9941c0@suse.cz>
References: <20251023-sheaves-for-all-v1-0-6ffa2c9941c0@suse.cz>
In-Reply-To: <20251023-sheaves-for-all-v1-0-6ffa2c9941c0@suse.cz>
To: Andrew Morton, Christoph Lameter, David Rientjes, Roman Gushchin,
    Harry Yoo
Cc: Uladzislau Rezki, "Liam R. Howlett", Suren Baghdasaryan,
    Sebastian Andrzej Siewior, Alexei Starovoitov, linux-mm@kvack.org,
    linux-kernel@vger.kernel.org, linux-rt-devel@lists.linux.dev,
    bpf@vger.kernel.org, kasan-dev@googlegroups.com, Vlastimil Babka

We have removed cpu slab usage from allocation paths. Now remove
do_slab_free(), which was freeing objects to the cpu slab when the object
belonged to it. Instead call __slab_free() directly, which was previously
the fallback.

This simplifies kfree_nolock(): when freeing to the percpu sheaf fails, we
can call defer_free() directly.

Also remove functions that became unused.

Signed-off-by: Vlastimil Babka
---
 mm/slub.c | 149 ++++++--------------------------------------------------------
 1 file changed, 13 insertions(+), 136 deletions(-)

diff --git a/mm/slub.c b/mm/slub.c
index d8891d852a8f..a35eb397caa9 100644
--- a/mm/slub.c
+++ b/mm/slub.c
@@ -3671,29 +3671,6 @@ static inline unsigned int init_tid(int cpu)
 	return cpu;
 }
 
-static inline void note_cmpxchg_failure(const char *n,
-		const struct kmem_cache *s, unsigned long tid)
-{
-#ifdef SLUB_DEBUG_CMPXCHG
-	unsigned long actual_tid = __this_cpu_read(s->cpu_slab->tid);
-
-	pr_info("%s %s: cmpxchg redo ", n, s->name);
-
-	if (IS_ENABLED(CONFIG_PREEMPTION) &&
-	    tid_to_cpu(tid) != tid_to_cpu(actual_tid)) {
-		pr_warn("due to cpu change %d -> %d\n",
-			tid_to_cpu(tid), tid_to_cpu(actual_tid));
-	} else if (tid_to_event(tid) != tid_to_event(actual_tid)) {
-		pr_warn("due to cpu running other code. Event %ld->%ld\n",
-			tid_to_event(tid), tid_to_event(actual_tid));
-	} else {
-		pr_warn("for unknown reason: actual=%lx was=%lx target=%lx\n",
-			actual_tid, tid, next_tid(tid));
-	}
-#endif
-	stat(s, CMPXCHG_DOUBLE_CPU_FAIL);
-}
-
 static void init_kmem_cache_cpus(struct kmem_cache *s)
 {
 #ifdef CONFIG_PREEMPT_RT
@@ -4231,18 +4208,6 @@ static inline bool pfmemalloc_match(struct slab *slab, gfp_t gfpflags)
 	return true;
 }
 
-static inline bool
-__update_cpu_freelist_fast(struct kmem_cache *s,
-			   void *freelist_old, void *freelist_new,
-			   unsigned long tid)
-{
-	freelist_aba_t old = { .freelist = freelist_old, .counter = tid };
-	freelist_aba_t new = { .freelist = freelist_new, .counter = next_tid(tid) };
-
-	return this_cpu_try_cmpxchg_freelist(s->cpu_slab->freelist_tid.full,
-					     &old.full, new.full);
-}
-
 /*
  * Get the slab's freelist and do not freeze it.
  *
@@ -6076,99 +6041,6 @@ void defer_free_barrier(void)
 		irq_work_sync(&per_cpu_ptr(&defer_free_objects, cpu)->work);
 }
 
-/*
- * Fastpath with forced inlining to produce a kfree and kmem_cache_free that
- * can perform fastpath freeing without additional function calls.
- *
- * The fastpath is only possible if we are freeing to the current cpu slab
- * of this processor. This typically the case if we have just allocated
- * the item before.
- *
- * If fastpath is not possible then fall back to __slab_free where we deal
- * with all sorts of special processing.
- *
- * Bulk free of a freelist with several objects (all pointing to the
- * same slab) possible by specifying head and tail ptr, plus objects
- * count (cnt). Bulk free indicated by tail pointer being set.
- */
-static __always_inline void do_slab_free(struct kmem_cache *s,
-				struct slab *slab, void *head, void *tail,
-				int cnt, unsigned long addr)
-{
-	/* cnt == 0 signals that it's called from kfree_nolock() */
-	bool allow_spin = cnt;
-	struct kmem_cache_cpu *c;
-	unsigned long tid;
-	void **freelist;
-
-redo:
-	/*
-	 * Determine the currently cpus per cpu slab.
-	 * The cpu may change afterward. However that does not matter since
-	 * data is retrieved via this pointer. If we are on the same cpu
-	 * during the cmpxchg then the free will succeed.
-	 */
-	c = raw_cpu_ptr(s->cpu_slab);
-	tid = READ_ONCE(c->tid);
-
-	/* Same with comment on barrier() in __slab_alloc_node() */
-	barrier();
-
-	if (unlikely(slab != c->slab)) {
-		if (unlikely(!allow_spin)) {
-			/*
-			 * __slab_free() can locklessly cmpxchg16 into a slab,
-			 * but then it might need to take spin_lock
-			 * for further processing.
-			 * Avoid the complexity and simply add to a deferred list.
-			 */
-			defer_free(s, head);
-		} else {
-			__slab_free(s, slab, head, tail, cnt, addr);
-		}
-		return;
-	}
-
-	if (unlikely(!allow_spin)) {
-		if ((in_nmi() || !USE_LOCKLESS_FAST_PATH()) &&
-		    local_lock_is_locked(&s->cpu_slab->lock)) {
-			defer_free(s, head);
-			return;
-		}
-		cnt = 1; /* restore cnt. kfree_nolock() frees one object at a time */
-	}
-
-	if (USE_LOCKLESS_FAST_PATH()) {
-		freelist = READ_ONCE(c->freelist);
-
-		set_freepointer(s, tail, freelist);
-
-		if (unlikely(!__update_cpu_freelist_fast(s, freelist, head, tid))) {
-			note_cmpxchg_failure("slab_free", s, tid);
-			goto redo;
-		}
-	} else {
-		__maybe_unused unsigned long flags = 0;
-
-		/* Update the free list under the local lock */
-		local_lock_cpu_slab(s, flags);
-		c = this_cpu_ptr(s->cpu_slab);
-		if (unlikely(slab != c->slab)) {
-			local_unlock_cpu_slab(s, flags);
-			goto redo;
-		}
-		tid = c->tid;
-		freelist = c->freelist;
-
-		set_freepointer(s, tail, freelist);
-		c->freelist = head;
-		c->tid = next_tid(tid);
-
-		local_unlock_cpu_slab(s, flags);
-	}
-	stat_add(s, FREE_FASTPATH, cnt);
-}
-
 static __fastpath_inline
 void slab_free(struct kmem_cache *s, struct slab *slab, void *object,
 	       unsigned long addr)
@@ -6185,7 +6057,7 @@ void slab_free(struct kmem_cache *s, struct slab *slab, void *object,
 		return;
 	}
 
-	do_slab_free(s, slab, object, object, 1, addr);
+	__slab_free(s, slab, object, object, 1, addr);
 }
 
 #ifdef CONFIG_MEMCG
@@ -6194,7 +6066,7 @@ static noinline
 void memcg_alloc_abort_single(struct kmem_cache *s, void *object)
 {
 	if (likely(slab_free_hook(s, object, slab_want_init_on_free(s), false)))
-		do_slab_free(s, virt_to_slab(object), object, object, 1, _RET_IP_);
+		__slab_free(s, virt_to_slab(object), object, object, 1, _RET_IP_);
 }
 #endif
 
@@ -6209,7 +6081,7 @@ void slab_free_bulk(struct kmem_cache *s, struct slab *slab, void *head,
 	 * to remove objects, whose reuse must be delayed.
 	 */
 	if (likely(slab_free_freelist_hook(s, &head, &tail, &cnt)))
-		do_slab_free(s, slab, head, tail, cnt, addr);
+		__slab_free(s, slab, head, tail, cnt, addr);
 }
 
 #ifdef CONFIG_SLUB_RCU_DEBUG
@@ -6235,14 +6107,14 @@ static void slab_free_after_rcu_debug(struct rcu_head *rcu_head)
 
 	/* resume freeing */
 	if (slab_free_hook(s, object, slab_want_init_on_free(s), true))
-		do_slab_free(s, slab, object, object, 1, _THIS_IP_);
+		__slab_free(s, slab, object, object, 1, _THIS_IP_);
 }
 #endif /* CONFIG_SLUB_RCU_DEBUG */
 
 #ifdef CONFIG_KASAN_GENERIC
 void ___cache_free(struct kmem_cache *cache, void *x, unsigned long addr)
 {
-	do_slab_free(cache, virt_to_slab(x), x, x, 1, addr);
+	__slab_free(cache, virt_to_slab(x), x, x, 1, addr);
 }
 #endif
 
@@ -6444,8 +6316,13 @@ void kfree_nolock(const void *object)
 	 * since kasan quarantine takes locks and not supported from NMI.
 	 */
 	kasan_slab_free(s, x, false, false, /* skip quarantine */true);
+	/*
+	 * __slab_free() can locklessly cmpxchg16 into a slab, but then it might
+	 * need to take spin_lock for further processing.
+	 * Avoid the complexity and simply add to a deferred list.
+	 */
 	if (!free_to_pcs(s, x, false))
-		do_slab_free(s, slab, x, x, 0, _RET_IP_);
+		defer_free(s, x);
 }
 EXPORT_SYMBOL_GPL(kfree_nolock);
 
@@ -6862,7 +6739,7 @@ static void __kmem_cache_free_bulk(struct kmem_cache *s, size_t size, void **p)
 		if (kfence_free(df.freelist))
 			continue;
 
-		do_slab_free(df.s, df.slab, df.freelist, df.tail, df.cnt,
+		__slab_free(df.s, df.slab, df.freelist, df.tail, df.cnt,
 			     _RET_IP_);
 	} while (likely(size));
 }
@@ -6945,7 +6822,7 @@ __refill_objects(struct kmem_cache *s, void **p, gfp_t gfp, unsigned int min,
 			cnt++;
 			object = get_freepointer(s, object);
 		} while (object);
-		do_slab_free(s, slab, head, tail, cnt, _RET_IP_);
+		__slab_free(s, slab, head, tail, cnt, _RET_IP_);
 	}
 
 	if (refilled >= max)
-- 
2.51.1
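
For orientation, a minimal sketch of the free-path dispatch left after this
patch. It only uses helpers named in the hunks above (__slab_free(),
free_to_pcs(), defer_free()) with the signatures they have there; the wrapper
name is made up for illustration, and the real slab_free()/kfree_nolock()
paths additionally run the debug, memcg and KASAN hooks:

/*
 * Illustrative only: with the cpu-slab free fastpath gone, frees that may
 * spin go straight to __slab_free() (the former slowpath), while a
 * non-spinning free tries the percpu sheaf first and otherwise defers the
 * object, because __slab_free() might need to take a spin_lock.
 */
static void free_dispatch_sketch(struct kmem_cache *s, struct slab *slab,
				 void *x, bool allow_spin, unsigned long addr)
{
	if (allow_spin) {
		__slab_free(s, slab, x, x, 1, addr);
		return;
	}

	if (!free_to_pcs(s, x, false))
		defer_free(s, x);
}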