From: Vlastimil Babka
To: "Liam R. Howlett", Matthew Wilcox, Christoph Lameter, David Rientjes,
    Pekka Enberg, Joonsoo Kim
Cc: Hyeonggon Yoo <42.hyeyoo@gmail.com>, Roman Gushchin, linux-mm@kvack.org,
    linux-kernel@vger.kernel.org, patches@lists.linux.dev, Vlastimil Babka
Subject: [RFC v1 1/5] mm, slub: fix bulk alloc and free stats
Date: Tue, 8 Aug 2023 11:53:44 +0200
Message-ID: <20230808095342.12637-8-vbabka@suse.cz>
In-Reply-To: <20230808095342.12637-7-vbabka@suse.cz>

The SLUB sysfs stats enabled by CONFIG_SLUB_STATS have two deficiencies
with respect to bulk alloc/free operations:

- Bulk allocations from the cpu freelist are not counted. Add the
  ALLOC_FASTPATH counter there.

- Bulk fastpath freeing will count a list of multiple objects with a
  single FREE_FASTPATH increment. Add a stat_add() variant to count them
  all.

Signed-off-by: Vlastimil Babka
---
 mm/slub.c | 11 ++++++++++-
 1 file changed, 10 insertions(+), 1 deletion(-)

diff --git a/mm/slub.c b/mm/slub.c
index e3b5d5c0eb3a..a9437d48840c 100644
--- a/mm/slub.c
+++ b/mm/slub.c
@@ -341,6 +341,14 @@ static inline void stat(const struct kmem_cache *s, enum stat_item si)
 #endif
 }
 
+static inline void stat_add(const struct kmem_cache *s, enum stat_item si, int v)
+{
+#ifdef CONFIG_SLUB_STATS
+	raw_cpu_add(s->cpu_slab->stat[si], v);
+#endif
+}
+
+
 /*
  * Tracks for which NUMA nodes we have kmem_cache_nodes allocated.
  * Corresponds to node_state[N_NORMAL_MEMORY], but can temporarily
@@ -3776,7 +3784,7 @@ static __always_inline void do_slab_free(struct kmem_cache *s,
 
 		local_unlock(&s->cpu_slab->lock);
 	}
-	stat(s, FREE_FASTPATH);
+	stat_add(s, FREE_FASTPATH, cnt);
 }
 #else /* CONFIG_SLUB_TINY */
 static void do_slab_free(struct kmem_cache *s,
@@ -3978,6 +3986,7 @@ static inline int __kmem_cache_alloc_bulk(struct kmem_cache *s, gfp_t flags,
 		c->freelist = get_freepointer(s, object);
 		p[i] = object;
 		maybe_wipe_obj_freeptr(s, p[i]);
+		stat(s, ALLOC_FASTPATH);
 	}
 	c->tid = next_tid(c->tid);
 	local_unlock_irqrestore(&s->cpu_slab->lock, irqflags);
-- 
2.41.0
From: Vlastimil Babka
To: "Liam R. Howlett", Matthew Wilcox, Christoph Lameter, David Rientjes,
    Pekka Enberg, Joonsoo Kim
Cc: Hyeonggon Yoo <42.hyeyoo@gmail.com>, Roman Gushchin, linux-mm@kvack.org,
    linux-kernel@vger.kernel.org, patches@lists.linux.dev, Vlastimil Babka
Subject: [RFC v1 2/5] mm, slub: add opt-in slub_percpu_array
Date: Tue, 8 Aug 2023 11:53:45 +0200
Message-ID: <20230808095342.12637-9-vbabka@suse.cz>
In-Reply-To: <20230808095342.12637-7-vbabka@suse.cz>

kmem_cache_setup_percpu_array() will allocate a per-cpu array of the given
size for caching alloc/free objects of the cache. The cache has to be
created with the SLAB_NO_MERGE flag.

The array is filled by freeing. When it is empty on allocation, or full on
freeing, the operation simply bypasses it; there is currently no batched
refill or flush.

The locking is copied from the page allocator's pcplists, based on embedded
spin locks. Interrupts are not disabled, only preemption (cpu migration on
RT). Trylock is attempted to avoid deadlock due to an interrupt; a trylock
failure means the array is bypassed.

Sysfs stat counters alloc_cpu_cache and free_cpu_cache count operations
that used the percpu array.

Bulk allocation bypasses the array; bulk freeing does not.

kmem_cache_prefill_percpu_array() can be called to fill the array on the
current cpu to at least the given number of objects. However, this is only
opportunistic, as there is no cpu pinning and the trylocks may always fail.
Therefore allocations cannot rely on the array for success even after the
prefill, but misses should be rare enough that e.g. GFP_ATOMIC allocations
should be acceptable after the prefill. The operation is currently not
optimized. (A usage sketch follows this patch.)

More TODO/FIXMEs:

- NUMA awareness - the preferred node is currently ignored and
  __GFP_THISNODE is not honored.
- slub_debug - will not work for allocations from the array. Normally
  slub_debug disables all fast paths in SLUB, but that could lead to
  depleting the reserves if we ignore the prefill and use GFP_ATOMIC.
  Needs more thought.
---
 include/linux/slab.h     |   4 +
 include/linux/slub_def.h |  10 ++
 mm/slub.c                | 210 ++++++++++++++++++++++++++++++++++++++-
 3 files changed, 223 insertions(+), 1 deletion(-)

diff --git a/include/linux/slab.h b/include/linux/slab.h
index 848c7c82ad5a..f6c91cbc1544 100644
--- a/include/linux/slab.h
+++ b/include/linux/slab.h
@@ -196,6 +196,8 @@ struct kmem_cache *kmem_cache_create_usercopy(const char *name,
 void kmem_cache_destroy(struct kmem_cache *s);
 int kmem_cache_shrink(struct kmem_cache *s);
 
+int kmem_cache_setup_percpu_array(struct kmem_cache *s, unsigned int count);
+
 /*
  * Please use this macro to create slab caches. Simply specify the
  * name of the structure and maybe some flags that are listed above.
@@ -494,6 +496,8 @@ void kmem_cache_free(struct kmem_cache *s, void *objp);
 void kmem_cache_free_bulk(struct kmem_cache *s, size_t size, void **p);
 int kmem_cache_alloc_bulk(struct kmem_cache *s, gfp_t flags, size_t size, void **p);
 
+int kmem_cache_prefill_percpu_array(struct kmem_cache *s, unsigned int count, gfp_t gfp);
+
 static __always_inline void kfree_bulk(size_t size, void **p)
 {
 	kmem_cache_free_bulk(NULL, size, p);
diff --git a/include/linux/slub_def.h b/include/linux/slub_def.h
index deb90cf4bffb..c85434668419 100644
--- a/include/linux/slub_def.h
+++ b/include/linux/slub_def.h
@@ -13,8 +13,10 @@
 #include
 
 enum stat_item {
+	ALLOC_PERCPU_CACHE,	/* Allocation from percpu array cache */
 	ALLOC_FASTPATH,		/* Allocation from cpu slab */
 	ALLOC_SLOWPATH,		/* Allocation by getting a new cpu slab */
+	FREE_PERCPU_CACHE,	/* Free to percpu array cache */
 	FREE_FASTPATH,		/* Free to cpu slab */
 	FREE_SLOWPATH,		/* Freeing not to cpu slab */
 	FREE_FROZEN,		/* Freeing to frozen slab */
@@ -66,6 +68,13 @@ struct kmem_cache_cpu {
 };
 #endif /* CONFIG_SLUB_TINY */
 
+struct slub_percpu_array {
+	spinlock_t lock;
+	unsigned int count;
+	unsigned int used;
+	void * objects[];
+};
+
 #ifdef CONFIG_SLUB_CPU_PARTIAL
 #define slub_percpu_partial(c)		((c)->partial)
 
@@ -99,6 +108,7 @@ struct kmem_cache {
 #ifndef CONFIG_SLUB_TINY
 	struct kmem_cache_cpu __percpu *cpu_slab;
 #endif
+	struct slub_percpu_array __percpu *cpu_array;
 	/* Used for retrieving partial slabs, etc. */
 	slab_flags_t flags;
 	unsigned long min_partial;
diff --git a/mm/slub.c b/mm/slub.c
index a9437d48840c..7fc9f7c124eb 100644
--- a/mm/slub.c
+++ b/mm/slub.c
@@ -188,6 +188,79 @@ do { \
 #define USE_LOCKLESS_FAST_PATH()	(false)
 #endif
 
+/* copy/pasted from mm/page_alloc.c */
+
+#if defined(CONFIG_SMP) || defined(CONFIG_PREEMPT_RT)
+/*
+ * On SMP, spin_trylock is sufficient protection.
+ * On PREEMPT_RT, spin_trylock is equivalent on both SMP and UP.
+ */
+#define pcp_trylock_prepare(flags)	do { } while (0)
+#define pcp_trylock_finish(flag)	do { } while (0)
+#else
+
+/* UP spin_trylock always succeeds so disable IRQs to prevent re-entrancy. */
+#define pcp_trylock_prepare(flags)	local_irq_save(flags)
+#define pcp_trylock_finish(flags)	local_irq_restore(flags)
+#endif
+
+/*
+ * Locking a pcp requires a PCP lookup followed by a spinlock. To avoid
+ * a migration causing the wrong PCP to be locked and remote memory being
+ * potentially allocated, pin the task to the CPU for the lookup+lock.
+ * preempt_disable is used on !RT because it is faster than migrate_disable.
+ * migrate_disable is used on RT because otherwise RT spinlock usage is
+ * interfered with and a high priority task cannot preempt the allocator.
+ */
+#ifndef CONFIG_PREEMPT_RT
+#define pcpu_task_pin()		preempt_disable()
+#define pcpu_task_unpin()	preempt_enable()
+#else
+#define pcpu_task_pin()		migrate_disable()
+#define pcpu_task_unpin()	migrate_enable()
+#endif
+
+/*
+ * Generic helper to lookup and a per-cpu variable with an embedded spinlock.
+ * Return value should be used with equivalent unlock helper.
+ */
+#define pcpu_spin_lock(type, member, ptr)			\
+({								\
+	type *_ret;						\
+	pcpu_task_pin();					\
+	_ret = this_cpu_ptr(ptr);				\
+	spin_lock(&_ret->member);				\
+	_ret;							\
+})
+
+#define pcpu_spin_trylock(type, member, ptr)			\
+({								\
+	type *_ret;						\
+	pcpu_task_pin();					\
+	_ret = this_cpu_ptr(ptr);				\
+	if (!spin_trylock(&_ret->member)) {			\
+		pcpu_task_unpin();				\
+		_ret = NULL;					\
+	}							\
+	_ret;							\
+})
+
+#define pcpu_spin_unlock(member, ptr)				\
+({								\
+	spin_unlock(&ptr->member);				\
+	pcpu_task_unpin();					\
+})
+
+/* struct slub_percpu_array specific helpers. */
+#define pca_spin_lock(ptr)					\
+	pcpu_spin_lock(struct slub_percpu_array, lock, ptr)
+
+#define pca_spin_trylock(ptr)					\
+	pcpu_spin_trylock(struct slub_percpu_array, lock, ptr)
+
+#define pca_spin_unlock(ptr)					\
+	pcpu_spin_unlock(lock, ptr)
+
 #ifndef CONFIG_SLUB_TINY
 #define __fastpath_inline __always_inline
 #else
@@ -3326,6 +3399,32 @@ static void *__slab_alloc(struct kmem_cache *s, gfp_t gfpflags, int node,
 	return p;
 }
 
+static inline void *alloc_from_pca(struct kmem_cache *s)
+{
+	unsigned long __maybe_unused UP_flags;
+	struct slub_percpu_array *pca;
+	void *object = NULL;
+
+	pcp_trylock_prepare(UP_flags);
+	pca = pca_spin_trylock(s->cpu_array);
+
+	if (unlikely(!pca))
+		goto failed;
+
+	if (likely(pca->used > 0)) {
+		object = pca->objects[--pca->used];
+		pca_spin_unlock(pca);
+		pcp_trylock_finish(UP_flags);
+		stat(s, ALLOC_PERCPU_CACHE);
+		return object;
+	}
+	pca_spin_unlock(pca);
+
+failed:
+	pcp_trylock_finish(UP_flags);
+	return NULL;
+}
+
 static __always_inline void *__slab_alloc_node(struct kmem_cache *s,
 		gfp_t gfpflags, int node, unsigned long addr, size_t orig_size)
 {
@@ -3465,7 +3564,11 @@ static __fastpath_inline void *slab_alloc_node(struct kmem_cache *s, struct list
 	if (unlikely(object))
 		goto out;
 
-	object = __slab_alloc_node(s, gfpflags, node, addr, orig_size);
+	if (s->cpu_array)
+		object = alloc_from_pca(s);
+
+	if (!object)
+		object = __slab_alloc_node(s, gfpflags, node, addr, orig_size);
 
 	maybe_wipe_obj_freeptr(s, object);
 	init = slab_want_init_on_alloc(gfpflags, s);
@@ -3715,6 +3818,34 @@ static void __slab_free(struct kmem_cache *s, struct slab *slab,
 	discard_slab(s, slab);
 }
 
+static inline bool free_to_pca(struct kmem_cache *s, void *object)
+{
+	unsigned long __maybe_unused UP_flags;
+	struct slub_percpu_array *pca;
+	bool ret = false;
+
+	pcp_trylock_prepare(UP_flags);
+	pca = pca_spin_trylock(s->cpu_array);
+
+	if (!pca) {
+		pcp_trylock_finish(UP_flags);
+		return false;
+	}
+
+	if (pca->used < pca->count) {
+		pca->objects[pca->used++] = object;
+		ret = true;
+	}
+
+	pca_spin_unlock(pca);
+	pcp_trylock_finish(UP_flags);
+
+	if (ret)
+		stat(s, FREE_PERCPU_CACHE);
+
+	return ret;
+}
+
 #ifndef CONFIG_SLUB_TINY
 /*
  * Fastpath with forced inlining to produce a kfree and kmem_cache_free that
@@ -3740,6 +3871,11 @@ static __always_inline void do_slab_free(struct kmem_cache *s,
 	unsigned long tid;
 	void **freelist;
 
+	if (s->cpu_array && cnt == 1) {
+		if (free_to_pca(s, head))
+			return;
+	}
+
 redo:
 	/*
 	 * Determine the currently cpus per cpu slab.
@@ -3793,6 +3929,11 @@ static void do_slab_free(struct kmem_cache *s,
 {
 	void *tail_obj = tail ? : head;
 
+	if (s->cpu_array && cnt == 1) {
+		if (free_to_pca(s, head))
+			return;
+	}
+
 	__slab_free(s, slab, head, tail_obj, cnt, addr);
 }
 #endif /* CONFIG_SLUB_TINY */
@@ -4060,6 +4201,45 @@ int kmem_cache_alloc_bulk(struct kmem_cache *s, gfp_t flags, size_t size,
 }
 EXPORT_SYMBOL(kmem_cache_alloc_bulk);
 
+int kmem_cache_prefill_percpu_array(struct kmem_cache *s, unsigned int count,
+		gfp_t gfp)
+{
+	struct slub_percpu_array *pca;
+	void *objects[32];
+	unsigned int used;
+	unsigned int allocated;
+
+	if (!s->cpu_array)
+		return -EINVAL;
+
+	/* racy but we don't care */
+	pca = raw_cpu_ptr(s->cpu_array);
+
+	used = READ_ONCE(pca->used);
+
+	if (used >= count)
+		return 0;
+
+	if (pca->count < count)
+		return -EINVAL;
+
+	count -= used;
+
+	/* TODO fix later */
+	if (count > 32)
+		count = 32;
+
+	for (int i = 0; i < count; i++)
+		objects[i] = NULL;
+	allocated = kmem_cache_alloc_bulk(s, gfp, count, &objects[0]);
+
+	for (int i = 0; i < count; i++) {
+		if (objects[i]) {
+			kmem_cache_free(s, objects[i]);
+		}
+	}
+	return allocated;
+}
 
 /*
  * Object placement in a slab is made very easy because we always start at
@@ -5131,6 +5311,30 @@ int __kmem_cache_create(struct kmem_cache *s, slab_flags_t flags)
 	return 0;
 }
 
+int kmem_cache_setup_percpu_array(struct kmem_cache *s, unsigned int count)
+{
+	int cpu;
+
+	if (WARN_ON_ONCE(!(s->flags & SLAB_NO_MERGE)))
+		return -EINVAL;
+
+	s->cpu_array = __alloc_percpu(struct_size(s->cpu_array, objects, count),
+				      sizeof(void *));
+
+	if (!s->cpu_array)
+		return -ENOMEM;
+
+	for_each_possible_cpu(cpu) {
+		struct slub_percpu_array *pca = per_cpu_ptr(s->cpu_array, cpu);
+
+		spin_lock_init(&pca->lock);
+		pca->count = count;
+		pca->used = 0;
+	}
+
+	return 0;
+}
+
 #ifdef SLAB_SUPPORTS_SYSFS
 static int count_inuse(struct slab *slab)
 {
@@ -5908,8 +6112,10 @@ static ssize_t text##_store(struct kmem_cache *s,	\
 }							\
 SLAB_ATTR(text);					\
 
+STAT_ATTR(ALLOC_PERCPU_CACHE, alloc_cpu_cache);
 STAT_ATTR(ALLOC_FASTPATH, alloc_fastpath);
 STAT_ATTR(ALLOC_SLOWPATH, alloc_slowpath);
+STAT_ATTR(FREE_PERCPU_CACHE, free_cpu_cache);
 STAT_ATTR(FREE_FASTPATH, free_fastpath);
 STAT_ATTR(FREE_SLOWPATH, free_slowpath);
 STAT_ATTR(FREE_FROZEN, free_frozen);
@@ -5995,8 +6201,10 @@ static struct attribute *slab_attrs[] = {
 	&remote_node_defrag_ratio_attr.attr,
 #endif
 #ifdef CONFIG_SLUB_STATS
+	&alloc_cpu_cache_attr.attr,
 	&alloc_fastpath_attr.attr,
 	&alloc_slowpath_attr.attr,
+	&free_cpu_cache_attr.attr,
 	&free_fastpath_attr.attr,
 	&free_slowpath_attr.attr,
 	&free_frozen_attr.attr,
-- 
2.41.0
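A minimal usage sketch of the API introduced above, mirroring how the maple
tree patches later in this series use it; the cache name, object type and
prefill count are assumed for illustration only:

/* Sketch only; "foo_cache" and "struct foo" are assumed, not from the patch. */
static struct kmem_cache *foo_cache;

void __init foo_cache_init(void)
{
	/* the cache must be unmergeable for the percpu array to be allowed */
	foo_cache = kmem_cache_create("foo", sizeof(struct foo), 0,
				      SLAB_NO_MERGE, NULL);

	/* per-cpu array caching up to 32 objects, filled by kmem_cache_free() */
	if (kmem_cache_setup_percpu_array(foo_cache, 32))
		pr_warn("foo: failed to set up percpu array\n");
}

int foo_prepare(void)
{
	/*
	 * Opportunistically fill the current cpu's array so that a later
	 * GFP_ATOMIC allocation (e.g. under a spinlock) is likely, but not
	 * guaranteed, to be served from it.
	 */
	return kmem_cache_prefill_percpu_array(foo_cache, 8, GFP_KERNEL);
}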
From: Vlastimil Babka
To: "Liam R. Howlett", Matthew Wilcox, Christoph Lameter, David Rientjes,
    Pekka Enberg, Joonsoo Kim
Cc: Hyeonggon Yoo <42.hyeyoo@gmail.com>, Roman Gushchin, linux-mm@kvack.org,
    linux-kernel@vger.kernel.org, patches@lists.linux.dev, Vlastimil Babka
Subject: [RFC v1 3/5] maple_tree: use slub percpu array
Date: Tue, 8 Aug 2023 11:53:46 +0200
Message-ID: <20230808095342.12637-10-vbabka@suse.cz>
In-Reply-To: <20230808095342.12637-7-vbabka@suse.cz>

Just make sure the maple_node_cache has a percpu array of size 32.

Will break with CONFIG_SLAB.
---
 lib/maple_tree.c | 9 ++++++++-
 1 file changed, 8 insertions(+), 1 deletion(-)

diff --git a/lib/maple_tree.c b/lib/maple_tree.c
index 4dd73cf936a6..1196d0a17f03 100644
--- a/lib/maple_tree.c
+++ b/lib/maple_tree.c
@@ -6180,9 +6180,16 @@ bool mas_nomem(struct ma_state *mas, gfp_t gfp)
 
 void __init maple_tree_init(void)
 {
+	int ret;
+
 	maple_node_cache = kmem_cache_create("maple_node",
 			sizeof(struct maple_node), sizeof(struct maple_node),
-			SLAB_PANIC, NULL);
+			SLAB_PANIC | SLAB_NO_MERGE, NULL);
+
+	ret = kmem_cache_setup_percpu_array(maple_node_cache, 32);
+
+	if (ret)
+		pr_warn("error %d creating percpu_array for maple_node_cache\n", ret);
 }
 
 /**
-- 
2.41.0
Howlett" , Matthew Wilcox , Christoph Lameter , David Rientjes , Pekka Enberg , Joonsoo Kim Cc: Hyeonggon Yoo <42.hyeyoo@gmail.com>, Roman Gushchin , linux-mm@kvack.org, linux-kernel@vger.kernel.org, patches@lists.linux.dev, Vlastimil Babka Subject: [RFC v1 4/5] maple_tree: avoid bulk alloc/free to use percpu array more Date: Tue, 8 Aug 2023 11:53:47 +0200 Message-ID: <20230808095342.12637-11-vbabka@suse.cz> X-Mailer: git-send-email 2.41.0 In-Reply-To: <20230808095342.12637-7-vbabka@suse.cz> References: <20230808095342.12637-7-vbabka@suse.cz> MIME-Version: 1.0 Content-Transfer-Encoding: quoted-printable Precedence: bulk List-ID: X-Mailing-List: linux-kernel@vger.kernel.org Content-Type: text/plain; charset="utf-8" Using bulk alloc/free on a cache with percpu array should not be necessary and the bulk alloc actually bypasses the array (the prefill functionality currently relies on this). The simplest change is just to convert the respective maple tree wrappers to do a loop of normal alloc/free. --- lib/maple_tree.c | 11 +++++++++-- 1 file changed, 9 insertions(+), 2 deletions(-) diff --git a/lib/maple_tree.c b/lib/maple_tree.c index 1196d0a17f03..7a8e7c467d7c 100644 --- a/lib/maple_tree.c +++ b/lib/maple_tree.c @@ -161,12 +161,19 @@ static inline struct maple_node *mt_alloc_one(gfp_t g= fp) =20 static inline int mt_alloc_bulk(gfp_t gfp, size_t size, void **nodes) { - return kmem_cache_alloc_bulk(maple_node_cache, gfp, size, nodes); + int allocated =3D 0; + for (size_t i =3D 0; i < size; i++) { + nodes[i] =3D kmem_cache_alloc(maple_node_cache, gfp); + if (nodes[i]) + allocated++; + } + return allocated; } =20 static inline void mt_free_bulk(size_t size, void __rcu **nodes) { - kmem_cache_free_bulk(maple_node_cache, size, (void **)nodes); + for (size_t i =3D 0; i < size; i++) + kmem_cache_free(maple_node_cache, nodes[i]); } =20 static void mt_free_rcu(struct rcu_head *head) --=20 2.41.0 From nobody Sun Feb 8 06:22:05 2026 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 951A0C001B0 for ; Tue, 8 Aug 2023 16:35:35 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S233117AbjHHQfe (ORCPT ); Tue, 8 Aug 2023 12:35:34 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:36002 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S232906AbjHHQea (ORCPT ); Tue, 8 Aug 2023 12:34:30 -0400 Received: from smtp-out1.suse.de (smtp-out1.suse.de [IPv6:2001:67c:2178:6::1c]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id F01B990AD for ; Tue, 8 Aug 2023 08:52:31 -0700 (PDT) Received: from imap2.suse-dmz.suse.de (imap2.suse-dmz.suse.de [192.168.254.74]) (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits) key-exchange X25519 server-signature ECDSA (P-521) server-digest SHA512) (No client certificate requested) by smtp-out1.suse.de (Postfix) with ESMTPS id 7C5062248B; Tue, 8 Aug 2023 09:53:53 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.cz; s=susede2_rsa; t=1691488433; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc: mime-version:mime-version: content-transfer-encoding:content-transfer-encoding: in-reply-to:in-reply-to:references:references; bh=19m9+YAryd9zQqOJKLPS0ScJg55yhuc9Zn7O2SmAkY8=; b=0ypz/M4hFEYVg3RIkyKTZpGOtO0OELdvePk4KsvNp0QXKtSQ1NTkwvJ6x9bkVpsk5ACEo+ 
From: Vlastimil Babka
To: "Liam R. Howlett", Matthew Wilcox, Christoph Lameter, David Rientjes,
    Pekka Enberg, Joonsoo Kim
Cc: Hyeonggon Yoo <42.hyeyoo@gmail.com>, Roman Gushchin, linux-mm@kvack.org,
    linux-kernel@vger.kernel.org, patches@lists.linux.dev, Vlastimil Babka
Subject: [RFC v1 5/5] maple_tree: replace preallocation with slub percpu array prefill
Date: Tue, 8 Aug 2023 11:53:48 +0200
Message-ID: <20230808095342.12637-12-vbabka@suse.cz>
In-Reply-To: <20230808095342.12637-7-vbabka@suse.cz>

With the percpu array we can try to avoid the preallocations in the maple
tree, and instead make sure the percpu array is prefilled, using GFP_ATOMIC
in places that relied on the preallocation (in case we miss or fail the
trylock on the array), i.e. mas_store_prealloc(). For now, simply add
__GFP_NOFAIL there as well.

First I tried to change mas_node_count_gfp() to not preallocate anything
anywhere, but that led to warnings and panics, even though the other caller
mas_node_count() uses GFP_NOWAIT | __GFP_NOWARN so it has no guarantees...
So I changed just mas_preallocate(). I kept it truly preallocating a single
node, but maybe even that is not necessary?
---
 lib/maple_tree.c | 10 ++++++++--
 1 file changed, 8 insertions(+), 2 deletions(-)

diff --git a/lib/maple_tree.c b/lib/maple_tree.c
index 7a8e7c467d7c..5a209d88c318 100644
--- a/lib/maple_tree.c
+++ b/lib/maple_tree.c
@@ -5534,7 +5534,12 @@ void mas_store_prealloc(struct ma_state *mas, void *entry)
 
 	mas_wr_store_setup(&wr_mas);
 	trace_ma_write(__func__, mas, 0, entry);
+
+retry:
 	mas_wr_store_entry(&wr_mas);
+	if (unlikely(mas_nomem(mas, GFP_ATOMIC | __GFP_NOFAIL)))
+		goto retry;
+
 	MAS_WR_BUG_ON(&wr_mas, mas_is_err(mas));
 	mas_destroy(mas);
 }
@@ -5550,9 +5555,10 @@ EXPORT_SYMBOL_GPL(mas_store_prealloc);
 int mas_preallocate(struct ma_state *mas, gfp_t gfp)
 {
 	int ret;
+	int count = 1 + mas_mt_height(mas) * 3;
 
-	mas_node_count_gfp(mas, 1 + mas_mt_height(mas) * 3, gfp);
-	mas->mas_flags |= MA_STATE_PREALLOC;
+	mas_node_count_gfp(mas, 1, gfp);
+	kmem_cache_prefill_percpu_array(maple_node_cache, count, gfp);
 	if (likely(!mas_is_err(mas)))
 		return 0;
 
-- 
2.41.0
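For reference, the caller pattern this targets, as a sketch of typical maple
tree usage rather than code from this patch (the function, lock and variable
names are assumed): preallocate/prefill in a sleepable context, then store
under a lock where only atomic allocation is possible. The prefilled percpu
array is what makes the GFP_ATOMIC | __GFP_NOFAIL fallback in
mas_store_prealloc() unlikely to actually hit the page allocator:

/* Sketch only; foo_lock, first, last and entry are assumed names. */
static DEFINE_SPINLOCK(foo_lock);

static int foo_store(struct maple_tree *mt, unsigned long first,
		     unsigned long last, void *entry)
{
	MA_STATE(mas, mt, first, last);

	/* sleepable context: preallocates one node and prefills the percpu array */
	if (mas_preallocate(&mas, GFP_KERNEL))
		return -ENOMEM;

	spin_lock(&foo_lock);			/* no sleeping allocations below */
	mas_store_prealloc(&mas, entry);	/* may retry with GFP_ATOMIC | __GFP_NOFAIL */
	spin_unlock(&foo_lock);

	return 0;
}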