From: chengming.zhou@linux.dev
To: cl@linux.com, penberg@kernel.org
Cc: rientjes@google.com, iamjoonsoo.kim@lge.com, akpm@linux-foundation.org,
	vbabka@suse.cz, roman.gushchin@linux.dev, 42.hyeyoo@gmail.com,
	linux-mm@kvack.org, linux-kernel@vger.kernel.org,
	chengming.zhou@linux.dev, Chengming Zhou
Subject: [RFC PATCH 4/5] slub: Don't freeze slabs for cpu partial
Date: Tue, 17 Oct 2023 15:44:38 +0000
Message-Id: <20231017154439.3036608-5-chengming.zhou@linux.dev>
In-Reply-To: <20231017154439.3036608-1-chengming.zhou@linux.dev>
References: <20231017154439.3036608-1-chengming.zhou@linux.dev>

From: Chengming Zhou

Now we freeze slabs when moving them out of the node partial list to
the cpu partial list. This method needs two cmpxchg_double operations:

1. freeze the slab (acquire_slab()) under the node list_lock
2. get_freelist() when the slab is picked up in ___slab_alloc()

Actually we don't need to freeze when moving slabs out of the node
partial list; we can delay freezing until we take the slab freelist in
___slab_alloc(), which saves one cmpxchg_double(). And there are other
good points:

1. Moving slabs between the node partial list and the cpu partial list
   becomes simpler, since we don't need to freeze or unfreeze at all.

2. The node list_lock contention would be less, since we only need to
   freeze one slab under the node list_lock. (In fact, when moving
   slabs out of the node partial list we don't need to freeze any slab
   at all, so contention on a slab won't transfer to contention on the
   node list_lock.)

We can achieve this because no concurrent path manipulates the partial
slab list except __slab_free(), which is serialized using the newly
introduced slab->flags.

Note this patch only changes the part that moves the partial slabs, to
keep code review easy; the other parts will be fixed in the following
patches.
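For reference, the delayed freeze in ___slab_alloc() could look roughly
like the sketch below. This is only a sketch, not code from this patch:
freeze_slab() is a hypothetical helper name, its retry loop is modeled
on the existing get_freelist(), and it assumes the slab_update_freelist()
primitive already in mm/slub.c:

	/*
	 * Freeze the slab and take its whole freelist with a single
	 * cmpxchg_double, replacing the old acquire_slab() +
	 * get_freelist() pair.
	 */
	static inline void *freeze_slab(struct kmem_cache *s, struct slab *slab)
	{
		struct slab new;
		unsigned long counters;
		void *freelist;

		do {
			freelist = slab->freelist;
			counters = slab->counters;

			new.counters = counters;
			VM_BUG_ON(new.frozen);

			/* Take all objects; the slab now belongs to one cpu. */
			new.inuse = slab->objects;
			new.frozen = 1;

		} while (!slab_update_freelist(s, slab,
			freelist, counters,
			NULL, new.counters,
			"freeze_slab"));

		return freelist;
	}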
Signed-off-by: Chengming Zhou
---
 mm/slub.c | 61 ++++++++++++++++---------------------------------------
 1 file changed, 17 insertions(+), 44 deletions(-)

diff --git a/mm/slub.c b/mm/slub.c
index 5a9711b35c74..044235bd8a45 100644
--- a/mm/slub.c
+++ b/mm/slub.c
@@ -2329,19 +2329,21 @@ static void *get_partial_node(struct kmem_cache *s, struct kmem_cache_node *n,
 			continue;
 		}
 
-		t = acquire_slab(s, n, slab, object == NULL);
-		if (!t)
-			break;
-
 		if (!object) {
-			*pc->slab = slab;
-			stat(s, ALLOC_FROM_PARTIAL);
-			object = t;
-		} else {
-			put_cpu_partial(s, slab, 0);
-			stat(s, CPU_PARTIAL_NODE);
-			partial_slabs++;
+			t = acquire_slab(s, n, slab, object == NULL);
+			if (t) {
+				*pc->slab = slab;
+				stat(s, ALLOC_FROM_PARTIAL);
+				object = t;
+				continue;
+			}
 		}
+
+		remove_partial(n, slab);
+		put_cpu_partial(s, slab, 0);
+		stat(s, CPU_PARTIAL_NODE);
+		partial_slabs++;
+
 #ifdef CONFIG_SLUB_CPU_PARTIAL
 		if (!kmem_cache_has_cpu_partial(s)
 				|| partial_slabs > s->cpu_partial_slabs / 2)
@@ -2612,9 +2614,6 @@ static void __unfreeze_partials(struct kmem_cache *s, struct slab *partial_slab)
 	unsigned long flags = 0;
 
 	while (partial_slab) {
-		struct slab new;
-		struct slab old;
-
 		slab = partial_slab;
 		partial_slab = slab->next;
 
@@ -2627,23 +2626,7 @@ static void __unfreeze_partials(struct kmem_cache *s, struct slab *partial_slab)
 			spin_lock_irqsave(&n->list_lock, flags);
 		}
 
-		do {
-
-			old.freelist = slab->freelist;
-			old.counters = slab->counters;
-			VM_BUG_ON(!old.frozen);
-
-			new.counters = old.counters;
-			new.freelist = old.freelist;
-
-			new.frozen = 0;
-
-		} while (!__slab_update_freelist(s, slab,
-				old.freelist, old.counters,
-				new.freelist, new.counters,
-				"unfreezing slab"));
-
-		if (unlikely(!new.inuse && n->nr_partial >= s->min_partial)) {
+		if (unlikely(!slab->inuse && n->nr_partial >= s->min_partial)) {
 			slab->next = slab_to_discard;
 			slab_to_discard = slab;
 		} else {
@@ -3640,18 +3623,8 @@ static void __slab_free(struct kmem_cache *s, struct slab *slab,
 		was_frozen = new.frozen;
 		new.inuse -= cnt;
 		if ((!new.inuse || !prior) && !was_frozen) {
-
-			if (kmem_cache_has_cpu_partial(s) && !prior) {
-
-				/*
-				 * Slab was on no list before and will be
-				 * partially empty
-				 * We can defer the list move and instead
-				 * freeze it.
-				 */
-				new.frozen = 1;
-
-			} else { /* Needs to be taken off a list */
+			/* Needs to be taken off a list */
+			if (!kmem_cache_has_cpu_partial(s) || prior) {
 
 				n = get_node(s, slab_nid(slab));
 				/*
@@ -3681,7 +3654,7 @@ static void __slab_free(struct kmem_cache *s, struct slab *slab,
 		 * activity can be necessary.
 		 */
 		stat(s, FREE_FROZEN);
-	} else if (new.frozen) {
+	} else if (kmem_cache_has_cpu_partial(s) && !prior) {
 		/*
 		 * If we just froze the slab then put it onto the
 		 * per cpu partial list.
-- 
2.40.1
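A note on the __slab_free() serialization mentioned above, since it is
what makes unfrozen slabs on the partial lists safe: a partially-empty
slab on the node partial list is no longer frozen, so __slab_free()
must re-check, under the node list_lock, whether the slab is still on
that list before manipulating it. The fragment below only illustrates
the idea and is not code from this series; slab_test_node_partial() is
a made-up name standing in for the slab->flags helper introduced
earlier in the series:

	/*
	 * In __slab_free(): when prior != NULL the slab was already
	 * partially empty and should be on some partial list, but
	 * get_partial_node() may have moved it to a cpu partial list
	 * without freezing it.  Re-check the node partial state under
	 * n->list_lock before touching the list.
	 */
	n = get_node(s, slab_nid(slab));
	spin_lock_irqsave(&n->list_lock, flags);
	if (prior && !slab_test_node_partial(slab)) {
		/* Already taken by get_partial_node(); don't touch it. */
		spin_unlock_irqrestore(&n->list_lock, flags);
		return;
	}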