From nobody Tue Feb 10 19:09:47 2026
From: Vlastimil Babka
Date: Fri, 23 Jan 2026 07:52:59 +0100
Subject: [PATCH v4 21/22] mm/slub: remove DEACTIVATE_TO_* stat items
Message-Id: <20260123-sheaves-for-all-v4-21-041323d506f7@suse.cz>
References: <20260123-sheaves-for-all-v4-0-041323d506f7@suse.cz>
In-Reply-To: <20260123-sheaves-for-all-v4-0-041323d506f7@suse.cz>
To: Harry Yoo, Petr Tesarik, Christoph Lameter, David Rientjes,
    Roman Gushchin
Cc: Hao Li, Andrew Morton, Uladzislau Rezki, "Liam R.
Howlett" , Suren Baghdasaryan , Sebastian Andrzej Siewior , Alexei Starovoitov , linux-mm@kvack.org, linux-kernel@vger.kernel.org, linux-rt-devel@lists.linux.dev, bpf@vger.kernel.org, kasan-dev@googlegroups.com, Vlastimil Babka X-Mailer: b4 0.14.3 X-Rspamd-Pre-Result: action=no action; module=replies; Message is reply to one we originated X-Spamd-Result: default: False [-4.00 / 50.00]; REPLY(-4.00)[]; R_RATELIMIT(0.00)[to_ip_from(RLfsjnp7neds983g95ihcnuzgq)] X-Spam-Flag: NO X-Spam-Score: -4.00 X-Rspamd-Queue-Id: 7B7DE5BCD8 X-Rspamd-Pre-Result: action=no action; module=replies; Message is reply to one we originated X-Rspamd-Action: no action X-Rspamd-Server: rspamd2.dmz-prg2.suse.org X-Spam-Level: The cpu slabs and their deactivations were removed, so remove the unused stat items. Weirdly enough the values were also used to control __add_partial() adding to head or tail of the list, so replace that with a new enum add_mode, which is cleaner. Reviewed-by: Suren Baghdasaryan Reviewed-by: Hao Li Signed-off-by: Vlastimil Babka Reviewed-by: Harry Yoo --- mm/slub.c | 31 +++++++++++++++---------------- 1 file changed, 15 insertions(+), 16 deletions(-) diff --git a/mm/slub.c b/mm/slub.c index 3009eb7bd8d2..369fb9bbdb75 100644 --- a/mm/slub.c +++ b/mm/slub.c @@ -329,6 +329,11 @@ static void debugfs_slab_add(struct kmem_cache *); static inline void debugfs_slab_add(struct kmem_cache *s) { } #endif =20 +enum add_mode { + ADD_TO_HEAD, + ADD_TO_TAIL, +}; + enum stat_item { ALLOC_PCS, /* Allocation from percpu sheaf */ ALLOC_FASTPATH, /* Allocation from cpu slab */ @@ -348,8 +353,6 @@ enum stat_item { CPUSLAB_FLUSH, /* Abandoning of the cpu slab */ DEACTIVATE_FULL, /* Cpu slab was full when deactivated */ DEACTIVATE_EMPTY, /* Cpu slab was empty when deactivated */ - DEACTIVATE_TO_HEAD, /* Cpu slab was moved to the head of partials */ - DEACTIVATE_TO_TAIL, /* Cpu slab was moved to the tail of partials */ DEACTIVATE_REMOTE_FREES,/* Slab contained remotely freed objects */ DEACTIVATE_BYPASS, /* Implicit deactivation */ ORDER_FALLBACK, /* Number of times fallback was necessary */ @@ -3270,10 +3273,10 @@ static inline void slab_clear_node_partial(struct s= lab *slab) * Management of partially allocated slabs. 
  */
 static inline void
-__add_partial(struct kmem_cache_node *n, struct slab *slab, int tail)
+__add_partial(struct kmem_cache_node *n, struct slab *slab, enum add_mode mode)
 {
 	n->nr_partial++;
-	if (tail == DEACTIVATE_TO_TAIL)
+	if (mode == ADD_TO_TAIL)
 		list_add_tail(&slab->slab_list, &n->partial);
 	else
 		list_add(&slab->slab_list, &n->partial);
@@ -3281,10 +3284,10 @@ __add_partial(struct kmem_cache_node *n, struct slab *slab, int tail)
 }
 
 static inline void add_partial(struct kmem_cache_node *n,
-				struct slab *slab, int tail)
+				struct slab *slab, enum add_mode mode)
 {
 	lockdep_assert_held(&n->list_lock);
-	__add_partial(n, slab, tail);
+	__add_partial(n, slab, mode);
 }
 
 static inline void remove_partial(struct kmem_cache_node *n,
@@ -3377,7 +3380,7 @@ static void *alloc_single_from_new_slab(struct kmem_cache *s, struct slab *slab,
 	if (slab->inuse == slab->objects)
 		add_full(s, n, slab);
 	else
-		add_partial(n, slab, DEACTIVATE_TO_HEAD);
+		add_partial(n, slab, ADD_TO_HEAD);
 
 	inc_slabs_node(s, nid, slab->objects);
 	spin_unlock_irqrestore(&n->list_lock, flags);
@@ -3999,7 +4002,7 @@ static unsigned int alloc_from_new_slab(struct kmem_cache *s, struct slab *slab,
 		n = get_node(s, slab_nid(slab));
 		spin_lock_irqsave(&n->list_lock, flags);
 	}
-	add_partial(n, slab, DEACTIVATE_TO_HEAD);
+	add_partial(n, slab, ADD_TO_HEAD);
 	spin_unlock_irqrestore(&n->list_lock, flags);
 }
 
@@ -5070,7 +5073,7 @@ static noinline void free_to_partial_list(
 			/* was on full list */
 			remove_full(s, n, slab);
 			if (!slab_free) {
-				add_partial(n, slab, DEACTIVATE_TO_TAIL);
+				add_partial(n, slab, ADD_TO_TAIL);
 				stat(s, FREE_ADD_PARTIAL);
 			}
 		} else if (slab_free) {
@@ -5190,7 +5193,7 @@ static void __slab_free(struct kmem_cache *s, struct slab *slab,
 		 * then add it.
 		 */
 		if (unlikely(was_full)) {
-			add_partial(n, slab, DEACTIVATE_TO_TAIL);
+			add_partial(n, slab, ADD_TO_TAIL);
 			stat(s, FREE_ADD_PARTIAL);
 		}
 		spin_unlock_irqrestore(&n->list_lock, flags);
@@ -6592,7 +6595,7 @@ __refill_objects_node(struct kmem_cache *s, void **p, gfp_t gfp, unsigned int mi
 			continue;
 
 		list_del(&slab->slab_list);
-		add_partial(n, slab, DEACTIVATE_TO_HEAD);
+		add_partial(n, slab, ADD_TO_HEAD);
 	}
 
 	spin_unlock_irqrestore(&n->list_lock, flags);
@@ -7059,7 +7062,7 @@ static void early_kmem_cache_node_alloc(int node)
 	 * No locks need to be taken here as it has just been
 	 * initialized and there is no concurrent access.
 	 */
-	__add_partial(n, slab, DEACTIVATE_TO_HEAD);
+	__add_partial(n, slab, ADD_TO_HEAD);
 }
 
 static void free_kmem_cache_nodes(struct kmem_cache *s)
@@ -8751,8 +8754,6 @@ STAT_ATTR(FREE_SLAB, free_slab);
 STAT_ATTR(CPUSLAB_FLUSH, cpuslab_flush);
 STAT_ATTR(DEACTIVATE_FULL, deactivate_full);
 STAT_ATTR(DEACTIVATE_EMPTY, deactivate_empty);
-STAT_ATTR(DEACTIVATE_TO_HEAD, deactivate_to_head);
-STAT_ATTR(DEACTIVATE_TO_TAIL, deactivate_to_tail);
 STAT_ATTR(DEACTIVATE_REMOTE_FREES, deactivate_remote_frees);
 STAT_ATTR(DEACTIVATE_BYPASS, deactivate_bypass);
 STAT_ATTR(ORDER_FALLBACK, order_fallback);
@@ -8855,8 +8856,6 @@ static struct attribute *slab_attrs[] = {
 	&cpuslab_flush_attr.attr,
 	&deactivate_full_attr.attr,
 	&deactivate_empty_attr.attr,
-	&deactivate_to_head_attr.attr,
-	&deactivate_to_tail_attr.attr,
 	&deactivate_remote_frees_attr.attr,
 	&deactivate_bypass_attr.attr,
 	&order_fallback_attr.attr,
-- 
2.52.0
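
An illustration outside the patch itself: the head/tail behaviour that
enum add_mode selects comes directly from the list_add()/list_add_tail()
pair. Below is a minimal standalone userspace sketch; the list helpers
reimplement the circular doubly linked list semantics of <linux/list.h>,
and toy_node/toy_slab are hypothetical stand-ins for struct
kmem_cache_node and struct slab, with locking and statistics omitted.

#include <stddef.h>
#include <stdio.h>

struct list_head {
	struct list_head *next, *prev;
};

enum add_mode {
	ADD_TO_HEAD,
	ADD_TO_TAIL,
};

/* Hypothetical stand-in for struct slab. */
struct toy_slab {
	int id;
	struct list_head slab_list;
};

/* Hypothetical stand-in for struct kmem_cache_node. */
struct toy_node {
	unsigned long nr_partial;
	struct list_head partial;
};

static void list_head_init(struct list_head *h)
{
	h->next = h->prev = h;
}

/* Insert 'entry' right after 'head': front of the list. */
static void list_add(struct list_head *entry, struct list_head *head)
{
	entry->next = head->next;
	entry->prev = head;
	head->next->prev = entry;
	head->next = entry;
}

/* Insert 'entry' right before 'head': back of the list. */
static void list_add_tail(struct list_head *entry, struct list_head *head)
{
	entry->next = head;
	entry->prev = head->prev;
	head->prev->next = entry;
	head->prev = entry;
}

/* Same shape as the patched __add_partial(), minus locking and stats. */
static void toy_add_partial(struct toy_node *n, struct toy_slab *slab,
			    enum add_mode mode)
{
	n->nr_partial++;
	if (mode == ADD_TO_TAIL)
		list_add_tail(&slab->slab_list, &n->partial);
	else
		list_add(&slab->slab_list, &n->partial);
}

int main(void)
{
	struct toy_node n = { .nr_partial = 0 };
	struct toy_slab a = { .id = 1 }, b = { .id = 2 }, c = { .id = 3 };
	struct list_head *pos;

	list_head_init(&n.partial);

	toy_add_partial(&n, &a, ADD_TO_TAIL);	/* list: 1 */
	toy_add_partial(&n, &b, ADD_TO_TAIL);	/* list: 1 2 */
	toy_add_partial(&n, &c, ADD_TO_HEAD);	/* list: 3 1 2 */

	for (pos = n.partial.next; pos != &n.partial; pos = pos->next) {
		/* Open-coded container_of(pos, struct toy_slab, slab_list). */
		struct toy_slab *s = (struct toy_slab *)((char *)pos -
				offsetof(struct toy_slab, slab_list));
		printf("%d ", s->id);
	}
	printf("\n");	/* prints: 3 1 2 */
	return 0;
}

The patch's point shows up in toy_add_partial()'s signature: an enum
add_mode argument documents the head/tail choice at every call site,
instead of smuggling it through an int that happened to hold a stat
item value.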