From nobody Mon Oct 6 08:30:19 2025
From: Vlastimil Babka
Date: Wed, 23 Jul 2025 15:34:34 +0200
Subject: [PATCH v5 01/14] slab: add opt-in caching layer of percpu sheaves
X-Mailing-List: linux-kernel@vger.kernel.org
MIME-Version: 1.0
Content-Type: text/plain; charset="utf-8"
Message-Id: <20250723-slub-percpu-caches-v5-1-b792cd830f5d@suse.cz>
References: <20250723-slub-percpu-caches-v5-0-b792cd830f5d@suse.cz>
In-Reply-To: <20250723-slub-percpu-caches-v5-0-b792cd830f5d@suse.cz>
To: Suren Baghdasaryan, "Liam R. Howlett", Christoph Lameter, David Rientjes
Howlett" , Christoph Lameter , David Rientjes Cc: Roman Gushchin , Harry Yoo , Uladzislau Rezki , linux-mm@kvack.org, linux-kernel@vger.kernel.org, rcu@vger.kernel.org, maple-tree@lists.infradead.org, vbabka@suse.cz X-Mailer: b4 0.14.2 X-Spam-Level: X-Spam-Flag: NO X-Rspamd-Queue-Id: EB9A61F747 X-Rspamd-Action: no action X-Rspamd-Server: rspamd1.dmz-prg2.suse.org X-Spamd-Result: default: False [-4.51 / 50.00]; BAYES_HAM(-3.00)[100.00%]; NEURAL_HAM_LONG(-1.00)[-1.000]; R_DKIM_ALLOW(-0.20)[suse.cz:s=susede2_rsa,suse.cz:s=susede2_ed25519]; NEURAL_HAM_SHORT(-0.20)[-1.000]; MIME_GOOD(-0.10)[text/plain]; MX_GOOD(-0.01)[]; FUZZY_RATELIMITED(0.00)[rspamd.com]; FREEMAIL_ENVRCPT(0.00)[gmail.com]; RCVD_VIA_SMTP_AUTH(0.00)[]; RBL_SPAMHAUS_BLOCKED_OPENRESOLVER(0.00)[2a07:de40:b281:104:10:150:64:97:from]; ARC_NA(0.00)[]; RCPT_COUNT_TWELVE(0.00)[12]; MIME_TRACE(0.00)[0:+]; RCVD_TLS_ALL(0.00)[]; MID_RHS_MATCH_FROM(0.00)[]; DKIM_SIGNED(0.00)[suse.cz:s=susede2_rsa,suse.cz:s=susede2_ed25519]; FROM_HAS_DN(0.00)[]; FREEMAIL_CC(0.00)[linux.dev,oracle.com,gmail.com,kvack.org,vger.kernel.org,lists.infradead.org,suse.cz]; TO_DN_SOME(0.00)[]; FROM_EQ_ENVFROM(0.00)[]; DBL_BLOCKED_OPENRESOLVER(0.00)[imap1.dmz-prg2.suse.org:rdns,imap1.dmz-prg2.suse.org:helo]; RCVD_COUNT_TWO(0.00)[2]; TO_MATCH_ENVRCPT_ALL(0.00)[]; DKIM_TRACE(0.00)[suse.cz:+] X-Spam-Score: -4.51 Specifying a non-zero value for a new struct kmem_cache_args field sheaf_capacity will setup a caching layer of percpu arrays called sheaves of given capacity for the created cache. Allocations from the cache will allocate via the percpu sheaves (main or spare) as long as they have no NUMA node preference. Frees will also put the object back into one of the sheaves. When both percpu sheaves are found empty during an allocation, an empty sheaf may be replaced with a full one from the per-node barn. If none are available and the allocation is allowed to block, an empty sheaf is refilled from slab(s) by an internal bulk alloc operation. When both percpu sheaves are full during freeing, the barn can replace a full one with an empty one, unless over a full sheaves limit. In that case a sheaf is flushed to slab(s) by an internal bulk free operation. Flushing sheaves and barns is also wired to the existing cpu flushing and cache shrinking operations. The sheaves do not distinguish NUMA locality of the cached objects. If an allocation is requested with kmem_cache_alloc_node() (or a mempolicy with strict_numa mode enabled) with a specific node (not NUMA_NO_NODE), the sheaves are bypassed. The bulk operations exposed to slab users also try to utilize the sheaves as long as the necessary (full or empty) sheaves are available on the cpu or in the barn. Once depleted, they will fallback to bulk alloc/free to slabs directly to avoid double copying. The sheaf_capacity value is exported in sysfs for observability. Sysfs CONFIG_SLUB_STATS counters alloc_cpu_sheaf and free_cpu_sheaf count objects allocated or freed using the sheaves (and thus not counting towards the other alloc/free path counters). Counters sheaf_refill and sheaf_flush count objects filled or flushed from or to slab pages, and can be used to assess how effective the caching is. The refill and flush operations will also count towards the usual alloc_fastpath/slowpath, free_fastpath/slowpath and other counters for the backing slabs. 
For barn operations, barn_get and barn_put count how many full sheaves
were taken from or put to the barn; the _fail variants count how many
such requests could not be satisfied, mainly because the barn was either
empty or full. While the barn also holds empty sheaves to make some
operations easier, these are not critical enough to warrant their own
counters. Finally, there are sheaf_alloc/sheaf_free counters.

Access to the percpu sheaves is protected by local_trylock() when
potential callers include irq context, and local_lock() otherwise (such
as when we already know the gfp flags allow blocking). The trylock
failures should be rare and we can easily fall back. Each per-NUMA-node
barn has a spin_lock.

When slub_debug is enabled for a cache with sheaf_capacity also
specified, the latter is ignored so that allocations and frees reach the
slow path where debugging hooks are processed. Similarly, we ignore it
with CONFIG_SLUB_TINY, which prefers low memory usage to performance.

Signed-off-by: Vlastimil Babka
---
 include/linux/slab.h |   31 ++
 mm/slab.h            |    2 +
 mm/slab_common.c     |    5 +-
 mm/slub.c            | 1101 +++++++++++++++++++++++++++++++++++++++++++---
 4 files changed, 1092 insertions(+), 47 deletions(-)

diff --git a/include/linux/slab.h b/include/linux/slab.h
index d5a8ab98035cf3e3d9043e3b038e1bebeff05b52..6cfd085907afb8fc6e502ff7a1a1830c52ff9125 100644
--- a/include/linux/slab.h
+++ b/include/linux/slab.h
@@ -335,6 +335,37 @@ struct kmem_cache_args {
 	 * %NULL means no constructor.
 	 */
 	void (*ctor)(void *);
+	/**
+	 * @sheaf_capacity: Enable sheaves of given capacity for the cache.
+	 *
+	 * With a non-zero value, allocations from the cache go through caching
+	 * arrays called sheaves. Each cpu has a main sheaf that's always
+	 * present, and a spare sheaf that may not be present. When both become
+	 * empty, there's an attempt to replace an empty sheaf with a full sheaf
+	 * from the per-node barn.
+	 *
+	 * When no full sheaf is available, and gfp flags allow blocking, a
+	 * sheaf is allocated and filled from slab(s) using bulk allocation.
+	 * Otherwise the allocation falls back to the normal operation
+	 * allocating a single object from a slab.
+	 *
+	 * Analogously, when freeing and both percpu sheaves are full, the barn
+	 * may replace it with an empty sheaf, unless it's over capacity. In
+	 * that case a sheaf is bulk freed to slab pages.
+	 *
+	 * The sheaves do not enforce NUMA placement of objects, so allocations
+	 * via kmem_cache_alloc_node() with a node specified other than
+	 * NUMA_NO_NODE will bypass them.
+	 *
+	 * Bulk allocation and free operations also try to use the cpu sheaves
+	 * and barn, but fall back to using slab pages directly.
+	 *
+	 * When slub_debug is enabled for the cache, the sheaf_capacity argument
+	 * is ignored.
+	 *
+	 * %0 means no sheaves will be created.
+	 */
+	unsigned int sheaf_capacity;
 };
 
 struct kmem_cache *__kmem_cache_create_args(const char *name,
diff --git a/mm/slab.h b/mm/slab.h
index 05a21dc796e095e8db934564d559494cd81746ec..1980330c2fcb4a4613a7e4f7efc78b349993fd89 100644
--- a/mm/slab.h
+++ b/mm/slab.h
@@ -259,6 +259,7 @@ struct kmem_cache {
 #ifndef CONFIG_SLUB_TINY
 	struct kmem_cache_cpu __percpu *cpu_slab;
 #endif
+	struct slub_percpu_sheaves __percpu *cpu_sheaves;
 	/* Used for retrieving partial slabs, etc. */
 	slab_flags_t flags;
 	unsigned long min_partial;
@@ -272,6 +273,7 @@ struct kmem_cache {
 	/* Number of per cpu partial slabs to keep around */
 	unsigned int cpu_partial_slabs;
 #endif
+	unsigned int sheaf_capacity;
 	struct kmem_cache_order_objects oo;
 
 	/* Allocation and freeing of slabs */
diff --git a/mm/slab_common.c b/mm/slab_common.c
index bfe7c40eeee1a01c175766935c1e3c0304434a53..e2b197e47866c30acdbd1fee4159f262a751c5a7 100644
--- a/mm/slab_common.c
+++ b/mm/slab_common.c
@@ -163,6 +163,9 @@ int slab_unmergeable(struct kmem_cache *s)
 		return 1;
 #endif
 
+	if (s->cpu_sheaves)
+		return 1;
+
 	/*
 	 * We may have set a slab to be unmergeable during bootstrap.
 	 */
@@ -321,7 +324,7 @@ struct kmem_cache *__kmem_cache_create_args(const char *name,
 		     object_size - args->usersize < args->useroffset))
 		args->usersize = args->useroffset = 0;
 
-	if (!args->usersize)
+	if (!args->usersize && !args->sheaf_capacity)
 		s = __kmem_cache_alias(name, object_size, args->align,
 				       flags, args->ctor);
 	if (s)
diff --git a/mm/slub.c b/mm/slub.c
index 31e11ef256f90ad8a21d6b090f810f4c991a68d6..6543aaade60b0adaab232b2256d65c1042c62e1c 100644
--- a/mm/slub.c
+++ b/mm/slub.c
@@ -346,8 +346,10 @@ static inline void debugfs_slab_add(struct kmem_cache *s) { }
 #endif
 
 enum stat_item {
+	ALLOC_PCS,		/* Allocation from percpu sheaf */
 	ALLOC_FASTPATH,		/* Allocation from cpu slab */
 	ALLOC_SLOWPATH,		/* Allocation by getting a new cpu slab */
+	FREE_PCS,		/* Free to percpu sheaf */
 	FREE_FASTPATH,		/* Free to cpu slab */
 	FREE_SLOWPATH,		/* Freeing not to cpu slab */
 	FREE_FROZEN,		/* Freeing to frozen slab */
@@ -372,6 +374,14 @@ enum stat_item {
 	CPU_PARTIAL_FREE,	/* Refill cpu partial on free */
 	CPU_PARTIAL_NODE,	/* Refill cpu partial from node partial */
 	CPU_PARTIAL_DRAIN,	/* Drain cpu partial to node partial */
+	SHEAF_FLUSH,		/* Objects flushed from a sheaf */
+	SHEAF_REFILL,		/* Objects refilled to a sheaf */
+	SHEAF_ALLOC,		/* Allocation of an empty sheaf */
+	SHEAF_FREE,		/* Freeing of an empty sheaf */
+	BARN_GET,		/* Got full sheaf from barn */
+	BARN_GET_FAIL,		/* Failed to get full sheaf from barn */
+	BARN_PUT,		/* Put full sheaf to barn */
+	BARN_PUT_FAIL,		/* Failed to put full sheaf to barn */
 	NR_SLUB_STAT_ITEMS
 };
 
@@ -418,6 +428,33 @@ void stat_add(const struct kmem_cache *s, enum stat_item si, int v)
 #endif
 }
 
+#define MAX_FULL_SHEAVES	10
+#define MAX_EMPTY_SHEAVES	10
+
+struct node_barn {
+	spinlock_t lock;
+	struct list_head sheaves_full;
+	struct list_head sheaves_empty;
+	unsigned int nr_full;
+	unsigned int nr_empty;
+};
+
+struct slab_sheaf {
+	union {
+		struct rcu_head rcu_head;
+		struct list_head barn_list;
+	};
+	unsigned int size;
+	void *objects[];
+};
+
+struct slub_percpu_sheaves {
+	local_trylock_t lock;
+	struct slab_sheaf *main; /* never NULL when unlocked */
+	struct slab_sheaf *spare; /* empty or full, may be NULL */
+	struct node_barn *barn;
+};
+
 /*
  * The slab lists for all objects.
  */
@@ -430,6 +467,7 @@ struct kmem_cache_node {
 	atomic_long_t total_objects;
 	struct list_head full;
 #endif
+	struct node_barn *barn;
 };
 
 static inline struct kmem_cache_node *get_node(struct kmem_cache *s, int node)
@@ -453,12 +491,19 @@ static inline struct kmem_cache_node *get_node(struct kmem_cache *s, int node)
  */
 static nodemask_t slab_nodes;
 
-#ifndef CONFIG_SLUB_TINY
 /*
  * Workqueue used for flush_cpu_slab().
  */
 static struct workqueue_struct *flushwq;
-#endif
+
+struct slub_flush_work {
+	struct work_struct work;
+	struct kmem_cache *s;
+	bool skip;
+};
+
+static DEFINE_MUTEX(flush_lock);
+static DEFINE_PER_CPU(struct slub_flush_work, slub_flush);
 
 /********************************************************************
  *			Core slab cache functions
@@ -2437,6 +2482,359 @@ static void *setup_object(struct kmem_cache *s, void *object)
 	return object;
 }
 
+static struct slab_sheaf *alloc_empty_sheaf(struct kmem_cache *s, gfp_t gfp)
+{
+	struct slab_sheaf *sheaf = kzalloc(struct_size(sheaf, objects,
+					s->sheaf_capacity), gfp);
+
+	if (unlikely(!sheaf))
+		return NULL;
+
+	stat(s, SHEAF_ALLOC);
+
+	return sheaf;
+}
+
+static void free_empty_sheaf(struct kmem_cache *s, struct slab_sheaf *sheaf)
+{
+	kfree(sheaf);
+
+	stat(s, SHEAF_FREE);
+}
+
+static int __kmem_cache_alloc_bulk(struct kmem_cache *s, gfp_t flags,
+				   size_t size, void **p);
+
+
+static int refill_sheaf(struct kmem_cache *s, struct slab_sheaf *sheaf,
+			gfp_t gfp)
+{
+	int to_fill = s->sheaf_capacity - sheaf->size;
+	int filled;
+
+	if (!to_fill)
+		return 0;
+
+	filled = __kmem_cache_alloc_bulk(s, gfp, to_fill,
+					 &sheaf->objects[sheaf->size]);
+
+	sheaf->size += filled;
+
+	stat_add(s, SHEAF_REFILL, filled);
+
+	if (filled < to_fill)
+		return -ENOMEM;
+
+	return 0;
+}
+
+
+static struct slab_sheaf *alloc_full_sheaf(struct kmem_cache *s, gfp_t gfp)
+{
+	struct slab_sheaf *sheaf = alloc_empty_sheaf(s, gfp);
+
+	if (!sheaf)
+		return NULL;
+
+	if (refill_sheaf(s, sheaf, gfp)) {
+		free_empty_sheaf(s, sheaf);
+		return NULL;
+	}
+
+	return sheaf;
+}
+
+/*
+ * Maximum number of objects freed during a single flush of main pcs sheaf.
+ * Translates directly to an on-stack array size.
+ */
+#define PCS_BATCH_MAX 32U
+
+static void __kmem_cache_free_bulk(struct kmem_cache *s, size_t size, void **p);
+
+/*
+ * Free all objects from the main sheaf. In order to perform
+ * __kmem_cache_free_bulk() outside of cpu_sheaves->lock, work in batches where
+ * object pointers are moved to an on-stack array under the lock. To bound the
+ * stack usage, limit each batch to PCS_BATCH_MAX.
+ *
+ * returns true if at least partially flushed
+ */
+static bool sheaf_flush_main(struct kmem_cache *s)
+{
+	struct slub_percpu_sheaves *pcs;
+	unsigned int batch, remaining;
+	void *objects[PCS_BATCH_MAX];
+	struct slab_sheaf *sheaf;
+	bool ret = false;
+
+next_batch:
+	if (!local_trylock(&s->cpu_sheaves->lock))
+		return ret;
+
+	pcs = this_cpu_ptr(s->cpu_sheaves);
+	sheaf = pcs->main;
+
+	batch = min(PCS_BATCH_MAX, sheaf->size);
+
+	sheaf->size -= batch;
+	memcpy(objects, sheaf->objects + sheaf->size, batch * sizeof(void *));
+
+	remaining = sheaf->size;
+
+	local_unlock(&s->cpu_sheaves->lock);
+
+	__kmem_cache_free_bulk(s, batch, &objects[0]);
+
+	stat_add(s, SHEAF_FLUSH, batch);
+
+	ret = true;
+
+	if (remaining)
+		goto next_batch;
+
+	return ret;
+}
+
+/*
+ * Free all objects from a sheaf that's unused, i.e. not linked to any
+ * cpu_sheaves, so we need no locking and batching. The locking is also not
+ * necessary when flushing cpu's sheaves (both spare and main) during cpu
+ * hotremove as the cpu is not executing anymore.
+ */
+static void sheaf_flush_unused(struct kmem_cache *s, struct slab_sheaf *sheaf)
+{
+	if (!sheaf->size)
+		return;
+
+	stat_add(s, SHEAF_FLUSH, sheaf->size);
+
+	__kmem_cache_free_bulk(s, sheaf->size, &sheaf->objects[0]);
+
+	sheaf->size = 0;
+}
+
+/*
+ * Caller needs to make sure migration is disabled in order to fully flush
+ * single cpu's sheaves
+ *
+ * must not be called from an irq
+ *
+ * flushing operations are rare so let's keep it simple and flush to slabs
+ * directly, skipping the barn
+ */
+static void pcs_flush_all(struct kmem_cache *s)
+{
+	struct slub_percpu_sheaves *pcs;
+	struct slab_sheaf *spare;
+
+	local_lock(&s->cpu_sheaves->lock);
+	pcs = this_cpu_ptr(s->cpu_sheaves);
+
+	spare = pcs->spare;
+	pcs->spare = NULL;
+
+	local_unlock(&s->cpu_sheaves->lock);
+
+	if (spare) {
+		sheaf_flush_unused(s, spare);
+		free_empty_sheaf(s, spare);
+	}
+
+	sheaf_flush_main(s);
+}
+
+static void __pcs_flush_all_cpu(struct kmem_cache *s, unsigned int cpu)
+{
+	struct slub_percpu_sheaves *pcs;
+
+	pcs = per_cpu_ptr(s->cpu_sheaves, cpu);
+
+	/* The cpu is not executing anymore so we don't need pcs->lock */
+	sheaf_flush_unused(s, pcs->main);
+	if (pcs->spare) {
+		sheaf_flush_unused(s, pcs->spare);
+		free_empty_sheaf(s, pcs->spare);
+		pcs->spare = NULL;
+	}
+}
+
+static void pcs_destroy(struct kmem_cache *s)
+{
+	int cpu;
+
+	for_each_possible_cpu(cpu) {
+		struct slub_percpu_sheaves *pcs;
+
+		pcs = per_cpu_ptr(s->cpu_sheaves, cpu);
+
+		/* can happen when unwinding failed create */
+		if (!pcs->main)
+			continue;
+
+		/*
+		 * We have already passed __kmem_cache_shutdown() so everything
+		 * was flushed and there should be no objects allocated from
+		 * slabs, otherwise kmem_cache_destroy() would have aborted.
+		 * Therefore something would have to be really wrong if the
+		 * warnings here trigger, and we should rather leave objects and
+		 * sheaves to leak in that case.
+		 */
+
+		WARN_ON(pcs->spare);
+
+		if (!WARN_ON(pcs->main->size)) {
+			free_empty_sheaf(s, pcs->main);
+			pcs->main = NULL;
+		}
+	}
+
+	free_percpu(s->cpu_sheaves);
+	s->cpu_sheaves = NULL;
+}
+
+static struct slab_sheaf *barn_get_empty_sheaf(struct node_barn *barn)
+{
+	struct slab_sheaf *empty = NULL;
+	unsigned long flags;
+
+	spin_lock_irqsave(&barn->lock, flags);
+
+	if (barn->nr_empty) {
+		empty = list_first_entry(&barn->sheaves_empty,
+					 struct slab_sheaf, barn_list);
+		list_del(&empty->barn_list);
+		barn->nr_empty--;
+	}
+
+	spin_unlock_irqrestore(&barn->lock, flags);
+
+	return empty;
+}
+
+/*
+ * The following two functions are used mainly in cases where we have to undo an
+ * intended action due to a race or cpu migration. Thus they do not check the
+ * empty or full sheaf limits for simplicity.
+ */
+
+static void barn_put_empty_sheaf(struct node_barn *barn, struct slab_sheaf *sheaf)
+{
+	unsigned long flags;
+
+	spin_lock_irqsave(&barn->lock, flags);
+
+	list_add(&sheaf->barn_list, &barn->sheaves_empty);
+	barn->nr_empty++;
+
+	spin_unlock_irqrestore(&barn->lock, flags);
+}
+
+static void barn_put_full_sheaf(struct node_barn *barn, struct slab_sheaf *sheaf)
+{
+	unsigned long flags;
+
+	spin_lock_irqsave(&barn->lock, flags);
+
+	list_add(&sheaf->barn_list, &barn->sheaves_full);
+	barn->nr_full++;
+
+	spin_unlock_irqrestore(&barn->lock, flags);
+}
+
+/*
+ * If a full sheaf is available, return it and put the supplied empty one to
+ * barn. We ignore the limit on empty sheaves as the number of sheaves doesn't
+ * change.
+ */
+static struct slab_sheaf *
+barn_replace_empty_sheaf(struct node_barn *barn, struct slab_sheaf *empty)
+{
+	struct slab_sheaf *full = NULL;
+	unsigned long flags;
+
+	spin_lock_irqsave(&barn->lock, flags);
+
+	if (barn->nr_full) {
+		full = list_first_entry(&barn->sheaves_full, struct slab_sheaf,
+					barn_list);
+		list_del(&full->barn_list);
+		list_add(&empty->barn_list, &barn->sheaves_empty);
+		barn->nr_full--;
+		barn->nr_empty++;
+	}
+
+	spin_unlock_irqrestore(&barn->lock, flags);
+
+	return full;
+}
+/*
+ * If an empty sheaf is available, return it and put the supplied full one to
+ * barn. But if there are too many full sheaves, reject this with -E2BIG.
+ */
+static struct slab_sheaf *
+barn_replace_full_sheaf(struct node_barn *barn, struct slab_sheaf *full)
+{
+	struct slab_sheaf *empty;
+	unsigned long flags;
+
+	spin_lock_irqsave(&barn->lock, flags);
+
+	if (barn->nr_full >= MAX_FULL_SHEAVES) {
+		empty = ERR_PTR(-E2BIG);
+	} else if (!barn->nr_empty) {
+		empty = ERR_PTR(-ENOMEM);
+	} else {
+		empty = list_first_entry(&barn->sheaves_empty, struct slab_sheaf,
+					 barn_list);
+		list_del(&empty->barn_list);
+		list_add(&full->barn_list, &barn->sheaves_full);
+		barn->nr_empty--;
+		barn->nr_full++;
+	}
+
+	spin_unlock_irqrestore(&barn->lock, flags);
+
+	return empty;
+}
+
+static void barn_init(struct node_barn *barn)
+{
+	spin_lock_init(&barn->lock);
+	INIT_LIST_HEAD(&barn->sheaves_full);
+	INIT_LIST_HEAD(&barn->sheaves_empty);
+	barn->nr_full = 0;
+	barn->nr_empty = 0;
+}
+
+static void barn_shrink(struct kmem_cache *s, struct node_barn *barn)
+{
+	struct list_head empty_list;
+	struct list_head full_list;
+	struct slab_sheaf *sheaf, *sheaf2;
+	unsigned long flags;
+
+	INIT_LIST_HEAD(&empty_list);
+	INIT_LIST_HEAD(&full_list);
+
+	spin_lock_irqsave(&barn->lock, flags);
+
+	list_splice_init(&barn->sheaves_full, &full_list);
+	barn->nr_full = 0;
+	list_splice_init(&barn->sheaves_empty, &empty_list);
+	barn->nr_empty = 0;
+
+	spin_unlock_irqrestore(&barn->lock, flags);
+
+	list_for_each_entry_safe(sheaf, sheaf2, &full_list, barn_list) {
+		sheaf_flush_unused(s, sheaf);
+		free_empty_sheaf(s, sheaf);
+	}
+
+	list_for_each_entry_safe(sheaf, sheaf2, &empty_list, barn_list)
+		free_empty_sheaf(s, sheaf);
+}
+
 /*
  * Slab allocation and freeing
  */
@@ -3312,11 +3710,42 @@ static inline void __flush_cpu_slab(struct kmem_cache *s, int cpu)
 	put_partials_cpu(s, c);
 }
 
-struct slub_flush_work {
-	struct work_struct work;
-	struct kmem_cache *s;
-	bool skip;
-};
+static inline void flush_this_cpu_slab(struct kmem_cache *s)
+{
+	struct kmem_cache_cpu *c = this_cpu_ptr(s->cpu_slab);
+
+	if (c->slab)
+		flush_slab(s, c);
+
+	put_partials(s);
+}
+
+static bool has_cpu_slab(int cpu, struct kmem_cache *s)
+{
+	struct kmem_cache_cpu *c = per_cpu_ptr(s->cpu_slab, cpu);
+
+	return c->slab || slub_percpu_partial(c);
+}
+
+#else /* CONFIG_SLUB_TINY */
+static inline void __flush_cpu_slab(struct kmem_cache *s, int cpu) { }
+static inline bool has_cpu_slab(int cpu, struct kmem_cache *s) { return false; }
+static inline void flush_this_cpu_slab(struct kmem_cache *s) { }
+#endif /* CONFIG_SLUB_TINY */
+
+static bool has_pcs_used(int cpu, struct kmem_cache *s)
+{
+	struct slub_percpu_sheaves *pcs;
+
+	if (!s->cpu_sheaves)
+		return false;
+
+	pcs = per_cpu_ptr(s->cpu_sheaves, cpu);
+
+	return (pcs->spare || pcs->main->size);
+}
+
+static void pcs_flush_all(struct kmem_cache *s);
 
 /*
  * Flush cpu slab.
@@ -3326,30 +3755,18 @@ struct slub_flush_work {
 static void flush_cpu_slab(struct work_struct *w)
 {
 	struct kmem_cache *s;
-	struct kmem_cache_cpu *c;
 	struct slub_flush_work *sfw;
 
 	sfw = container_of(w, struct slub_flush_work, work);
 
 	s = sfw->s;
-	c = this_cpu_ptr(s->cpu_slab);
 
-	if (c->slab)
-		flush_slab(s, c);
+	if (s->cpu_sheaves)
+		pcs_flush_all(s);
 
-	put_partials(s);
+	flush_this_cpu_slab(s);
 }
 
-static bool has_cpu_slab(int cpu, struct kmem_cache *s)
-{
-	struct kmem_cache_cpu *c = per_cpu_ptr(s->cpu_slab, cpu);
-
-	return c->slab || slub_percpu_partial(c);
-}
-
-static DEFINE_MUTEX(flush_lock);
-static DEFINE_PER_CPU(struct slub_flush_work, slub_flush);
-
 static void flush_all_cpus_locked(struct kmem_cache *s)
 {
 	struct slub_flush_work *sfw;
@@ -3360,7 +3777,7 @@ static void flush_all_cpus_locked(struct kmem_cache *s)
 
 	for_each_online_cpu(cpu) {
 		sfw = &per_cpu(slub_flush, cpu);
-		if (!has_cpu_slab(cpu, s)) {
+		if (!has_cpu_slab(cpu, s) && !has_pcs_used(cpu, s)) {
 			sfw->skip = true;
 			continue;
 		}
@@ -3396,19 +3813,15 @@ static int slub_cpu_dead(unsigned int cpu)
 	struct kmem_cache *s;
 
 	mutex_lock(&slab_mutex);
-	list_for_each_entry(s, &slab_caches, list)
+	list_for_each_entry(s, &slab_caches, list) {
 		__flush_cpu_slab(s, cpu);
+		if (s->cpu_sheaves)
+			__pcs_flush_all_cpu(s, cpu);
+	}
 	mutex_unlock(&slab_mutex);
 	return 0;
 }
 
-#else /* CONFIG_SLUB_TINY */
-static inline void flush_all_cpus_locked(struct kmem_cache *s) { }
-static inline void flush_all(struct kmem_cache *s) { }
-static inline void __flush_cpu_slab(struct kmem_cache *s, int cpu) { }
-static inline int slub_cpu_dead(unsigned int cpu) { return 0; }
-#endif /* CONFIG_SLUB_TINY */
-
 /*
  * Check if the objects in a per cpu structure fit numa
  * locality expectations.
@@ -4158,6 +4571,199 @@ bool slab_post_alloc_hook(struct kmem_cache *s, struct list_lru *lru,
 	return memcg_slab_post_alloc_hook(s, lru, flags, size, p);
 }
 
+static struct slub_percpu_sheaves *
+__pcs_handle_empty(struct kmem_cache *s, struct slub_percpu_sheaves *pcs, gfp_t gfp)
+{
+	struct slab_sheaf *empty = NULL;
+	struct slab_sheaf *full;
+	bool can_alloc;
+
+	if (pcs->spare && pcs->spare->size > 0) {
+		swap(pcs->main, pcs->spare);
+		return pcs;
+	}
+
+	full = barn_replace_empty_sheaf(pcs->barn, pcs->main);
+
+	if (full) {
+		stat(s, BARN_GET);
+		pcs->main = full;
+		return pcs;
+	}
+
+	stat(s, BARN_GET_FAIL);
+
+	can_alloc = gfpflags_allow_blocking(gfp);
+
+	if (can_alloc) {
+		if (pcs->spare) {
+			empty = pcs->spare;
+			pcs->spare = NULL;
+		} else {
+			empty = barn_get_empty_sheaf(pcs->barn);
+		}
+	}
+
+	local_unlock(&s->cpu_sheaves->lock);
+
+	if (!can_alloc)
+		return NULL;
+
+	if (empty) {
+		if (!refill_sheaf(s, empty, gfp)) {
+			full = empty;
+		} else {
+			/*
+			 * we must be very low on memory so don't bother
+			 * with the barn
+			 */
+			free_empty_sheaf(s, empty);
+		}
+	} else {
+		full = alloc_full_sheaf(s, gfp);
+	}
+
+	if (!full)
+		return NULL;
+
+	/*
+	 * we can reach here only when gfpflags_allow_blocking
+	 * so this must not be an irq
+	 */
+	local_lock(&s->cpu_sheaves->lock);
+	pcs = this_cpu_ptr(s->cpu_sheaves);
+
+	/*
+	 * If we are returning empty sheaf, we either got it from the
+	 * barn or had to allocate one. If we are returning a full
+	 * sheaf, it's due to racing or being migrated to a different
+	 * cpu. Breaching the barn's sheaf limits should be thus rare
+	 * enough so just ignore them to simplify the recovery.
+	 */
+
+	if (pcs->main->size == 0) {
+		barn_put_empty_sheaf(pcs->barn, pcs->main);
+		pcs->main = full;
+		return pcs;
+	}
+
+	if (!pcs->spare) {
+		pcs->spare = full;
+		return pcs;
+	}
+
+	if (pcs->spare->size == 0) {
+		barn_put_empty_sheaf(pcs->barn, pcs->spare);
+		pcs->spare = full;
+		return pcs;
+	}
+
+	barn_put_full_sheaf(pcs->barn, full);
+	stat(s, BARN_PUT);
+
+	return pcs;
+}
+
+static __fastpath_inline
+void *alloc_from_pcs(struct kmem_cache *s, gfp_t gfp)
+{
+	struct slub_percpu_sheaves *pcs;
+	void *object;
+
+#ifdef CONFIG_NUMA
+	if (static_branch_unlikely(&strict_numa)) {
+		if (current->mempolicy)
+			return NULL;
+	}
+#endif
+
+	if (!local_trylock(&s->cpu_sheaves->lock))
+		return NULL;
+
+	pcs = this_cpu_ptr(s->cpu_sheaves);
+
+	if (unlikely(pcs->main->size == 0)) {
+		pcs = __pcs_handle_empty(s, pcs, gfp);
+		if (unlikely(!pcs))
+			return NULL;
+	}
+
+	object = pcs->main->objects[--pcs->main->size];
+
+	local_unlock(&s->cpu_sheaves->lock);
+
+	stat(s, ALLOC_PCS);
+
+	return object;
+}
+
+static __fastpath_inline
+unsigned int alloc_from_pcs_bulk(struct kmem_cache *s, size_t size, void **p)
+{
+	struct slub_percpu_sheaves *pcs;
+	struct slab_sheaf *main;
+	unsigned int allocated = 0;
+	unsigned int batch;
+
+next_batch:
+	if (!local_trylock(&s->cpu_sheaves->lock))
+		return allocated;
+
+	pcs = this_cpu_ptr(s->cpu_sheaves);
+
+	if (unlikely(pcs->main->size == 0)) {
+
+		struct slab_sheaf *full;
+
+		if (pcs->spare && pcs->spare->size > 0) {
+			swap(pcs->main, pcs->spare);
+			goto do_alloc;
+		}
+
+		full = barn_replace_empty_sheaf(pcs->barn, pcs->main);
+
+		if (full) {
+			stat(s, BARN_GET);
+			pcs->main = full;
+			goto do_alloc;
+		}
+
+		stat(s, BARN_GET_FAIL);
+
+		local_unlock(&s->cpu_sheaves->lock);
+
+		/*
+		 * Once full sheaves in barn are depleted, let the bulk
+		 * allocation continue from slab pages, otherwise we would just
+		 * be copying arrays of pointers twice.
+		 */
+		return allocated;
+	}
+
+do_alloc:
+
+	main = pcs->main;
+	batch = min(size, main->size);
+
+	main->size -= batch;
+	memcpy(p, main->objects + main->size, batch * sizeof(void *));
+
+	local_unlock(&s->cpu_sheaves->lock);
+
+	stat_add(s, ALLOC_PCS, batch);
+
+	allocated += batch;
+
+	if (batch < size) {
+		p += batch;
+		size -= batch;
+		goto next_batch;
+	}
+
+	return allocated;
+}
+
+
 /*
  * Inlined fastpath so that allocation functions (kmalloc, kmem_cache_alloc)
 * have the fastpath folded into their functions. So no function call
@@ -4182,7 +4788,11 @@ static __fastpath_inline void *slab_alloc_node(struct kmem_cache *s, struct list
 	if (unlikely(object))
 		goto out;
 
-	object = __slab_alloc_node(s, gfpflags, node, addr, orig_size);
+	if (s->cpu_sheaves && node == NUMA_NO_NODE)
+		object = alloc_from_pcs(s, gfpflags);
+
+	if (!object)
+		object = __slab_alloc_node(s, gfpflags, node, addr, orig_size);
 
 	maybe_wipe_obj_freeptr(s, object);
 	init = slab_want_init_on_alloc(gfpflags, s);
@@ -4554,6 +5164,274 @@ static void __slab_free(struct kmem_cache *s, struct slab *slab,
 		discard_slab(s, slab);
 }
 
+/*
+ * pcs is locked. We should have got rid of the spare sheaf and obtained an
+ * empty sheaf, while the main sheaf is full. We want to install the empty sheaf
+ * as a main sheaf, and make the current main sheaf a spare sheaf.
+ *
+ * However due to having relinquished the cpu_sheaves lock when obtaining
+ * the empty sheaf, we need to handle some unlikely but possible cases.
+ *
+ * If we put any sheaf to barn here, it's because we were interrupted or have
+ * been migrated to a different cpu, which should be rare enough so just ignore
+ * the barn's limits to simplify the handling.
+ *
+ * An alternative scenario that gets us here is when we fail
+ * barn_replace_full_sheaf(), because there's no empty sheaf available in the
+ * barn, so we had to allocate it by alloc_empty_sheaf(). But because we saw the
+ * limit on full sheaves was not exceeded, we assume it didn't change and just
+ * put the full sheaf there.
+ */
+static void __pcs_install_empty_sheaf(struct kmem_cache *s,
+		struct slub_percpu_sheaves *pcs, struct slab_sheaf *empty)
+{
+	/* This is what we expect to find if nobody interrupted us. */
+	if (likely(!pcs->spare)) {
+		pcs->spare = pcs->main;
+		pcs->main = empty;
+		return;
+	}
+
+	/*
+	 * Unlikely because if the main sheaf had space, we would have just
+	 * freed to it. Get rid of our empty sheaf.
+	 */
+	if (pcs->main->size < s->sheaf_capacity) {
+		barn_put_empty_sheaf(pcs->barn, empty);
+		return;
+	}
+
+	/* Also unlikely for the same reason. */
+	if (pcs->spare->size < s->sheaf_capacity) {
+		swap(pcs->main, pcs->spare);
+		barn_put_empty_sheaf(pcs->barn, empty);
+		return;
+	}
+
+	/*
+	 * We probably failed barn_replace_full_sheaf() due to no empty sheaf
+	 * available there, but we allocated one, so finish the job.
+	 */
+	barn_put_full_sheaf(pcs->barn, pcs->main);
+	stat(s, BARN_PUT);
+	pcs->main = empty;
+}
+
+static struct slub_percpu_sheaves *
+__pcs_handle_full(struct kmem_cache *s, struct slub_percpu_sheaves *pcs)
+{
+	struct slab_sheaf *empty;
+	bool put_fail;
+
+restart:
+	put_fail = false;
+
+	if (!pcs->spare) {
+		empty = barn_get_empty_sheaf(pcs->barn);
+		if (empty) {
+			pcs->spare = pcs->main;
+			pcs->main = empty;
+			return pcs;
+		}
+		goto alloc_empty;
+	}
+
+	if (pcs->spare->size < s->sheaf_capacity) {
+		swap(pcs->main, pcs->spare);
+		return pcs;
+	}
+
+	empty = barn_replace_full_sheaf(pcs->barn, pcs->main);
+
+	if (!IS_ERR(empty)) {
+		stat(s, BARN_PUT);
+		pcs->main = empty;
+		return pcs;
+	}
+
+	if (PTR_ERR(empty) == -E2BIG) {
+		/* Since we got here, spare exists and is full */
+		struct slab_sheaf *to_flush = pcs->spare;
+
+		stat(s, BARN_PUT_FAIL);
+
+		pcs->spare = NULL;
+		local_unlock(&s->cpu_sheaves->lock);
+
+		sheaf_flush_unused(s, to_flush);
+		empty = to_flush;
+		goto got_empty;
+	}
+
+	/*
+	 * We could not replace full sheaf because barn had no empty
+	 * sheaves. We can still allocate it and put the full sheaf in
+	 * __pcs_install_empty_sheaf(), but if we fail to allocate it,
+	 * make sure to count the fail.
+	 */
+	put_fail = true;
+
+alloc_empty:
+	local_unlock(&s->cpu_sheaves->lock);
+
+	empty = alloc_empty_sheaf(s, GFP_NOWAIT);
+	if (empty)
+		goto got_empty;
+
+	if (put_fail)
+		stat(s, BARN_PUT_FAIL);
+
+	if (!sheaf_flush_main(s))
+		return NULL;
+
+	if (!local_trylock(&s->cpu_sheaves->lock))
+		return NULL;
+
+	/*
+	 * we flushed the main sheaf so it should be empty now,
+	 * but in case we got preempted or migrated, we need to
+	 * check again
+	 */
+	if (pcs->main->size == s->sheaf_capacity)
+		goto restart;
+
+	return pcs;
+
+got_empty:
+	if (!local_trylock(&s->cpu_sheaves->lock)) {
+		barn_put_empty_sheaf(pcs->barn, empty);
+		return NULL;
+	}
+
+	pcs = this_cpu_ptr(s->cpu_sheaves);
+	__pcs_install_empty_sheaf(s, pcs, empty);
+
+	return pcs;
+}
+
+/*
+ * Free an object to the percpu sheaves.
+ * The object is expected to have passed slab_free_hook() already.
+ */
+static __fastpath_inline
+bool free_to_pcs(struct kmem_cache *s, void *object)
+{
+	struct slub_percpu_sheaves *pcs;
+
+	if (!local_trylock(&s->cpu_sheaves->lock))
+		return false;
+
+	pcs = this_cpu_ptr(s->cpu_sheaves);
+
+	if (unlikely(pcs->main->size == s->sheaf_capacity)) {
+
+		pcs = __pcs_handle_full(s, pcs);
+		if (unlikely(!pcs))
+			return false;
+	}
+
+	pcs->main->objects[pcs->main->size++] = object;
+
+	local_unlock(&s->cpu_sheaves->lock);
+
+	stat(s, FREE_PCS);
+
+	return true;
+}
+
+/*
+ * Bulk free objects to the percpu sheaves.
+ * Unlike free_to_pcs() this includes the calls to all necessary hooks
+ * and the fallback to freeing to slab pages.
+ */
+static void free_to_pcs_bulk(struct kmem_cache *s, size_t size, void **p)
+{
+	struct slub_percpu_sheaves *pcs;
+	struct slab_sheaf *main, *empty;
+	unsigned int batch, i = 0;
+	bool init;
+
+	init = slab_want_init_on_free(s);
+
+	while (i < size) {
+		struct slab *slab = virt_to_slab(p[i]);
+
+		memcg_slab_free_hook(s, slab, p + i, 1);
+		alloc_tagging_slab_free_hook(s, slab, p + i, 1);
+
+		if (unlikely(!slab_free_hook(s, p[i], init, false))) {
+			p[i] = p[--size];
+			if (!size)
+				return;
+			continue;
+		}
+
+		i++;
+	}
+
+next_batch:
+	if (!local_trylock(&s->cpu_sheaves->lock))
+		goto fallback;
+
+	pcs = this_cpu_ptr(s->cpu_sheaves);
+
+	if (likely(pcs->main->size < s->sheaf_capacity))
+		goto do_free;
+
+	if (!pcs->spare) {
+		empty = barn_get_empty_sheaf(pcs->barn);
+		if (!empty)
+			goto no_empty;
+
+		pcs->spare = pcs->main;
+		pcs->main = empty;
+		goto do_free;
+	}
+
+	if (pcs->spare->size < s->sheaf_capacity) {
+		swap(pcs->main, pcs->spare);
+		goto do_free;
+	}
+
+	empty = barn_replace_full_sheaf(pcs->barn, pcs->main);
+	if (IS_ERR(empty)) {
+		stat(s, BARN_PUT_FAIL);
+		goto no_empty;
+	}
+
+	stat(s, BARN_PUT);
+	pcs->main = empty;
+
+do_free:
+	main = pcs->main;
+	batch = min(size, s->sheaf_capacity - main->size);
+
+	memcpy(main->objects + main->size, p, batch * sizeof(void *));
+	main->size += batch;
+
+	local_unlock(&s->cpu_sheaves->lock);
+
+	stat_add(s, FREE_PCS, batch);
+
+	if (batch < size) {
+		p += batch;
+		size -= batch;
+		goto next_batch;
+	}
+
+	return;
+
+no_empty:
+	local_unlock(&s->cpu_sheaves->lock);
+
+	/*
+	 * if we depleted all empty sheaves in the barn or there are too
+	 * many full sheaves, free the rest to slab pages
+	 */
+fallback:
+	__kmem_cache_free_bulk(s, size, p);
+}
+
 #ifndef CONFIG_SLUB_TINY
 /*
  * Fastpath with forced inlining to produce a kfree and kmem_cache_free that
@@ -4640,7 +5518,10 @@ void slab_free(struct kmem_cache *s, struct slab *slab, void *object,
 	memcg_slab_free_hook(s, slab, &object, 1);
 	alloc_tagging_slab_free_hook(s, slab, &object, 1);
 
-	if (likely(slab_free_hook(s, object, slab_want_init_on_free(s), false)))
+	if (unlikely(!slab_free_hook(s, object, slab_want_init_on_free(s), false)))
+		return;
+
+	if (!s->cpu_sheaves || !free_to_pcs(s, object))
 		do_slab_free(s, slab, object, object, 1, addr);
 }
 
@@ -5236,6 +6117,15 @@ void kmem_cache_free_bulk(struct kmem_cache *s, size_t size, void **p)
 	if (!size)
 		return;
 
+	/*
+	 * freeing to sheaves is incompatible with the detached freelist, so
+	 * once we go that way, we have to do everything differently
+	 */
+	if (s && s->cpu_sheaves) {
+		free_to_pcs_bulk(s, size, p);
+		return;
+	}
+
 	do {
 		struct detached_freelist df;
 
@@ -5354,7 +6244,7 @@ static int __kmem_cache_alloc_bulk(struct kmem_cache *s, gfp_t flags,
 int kmem_cache_alloc_bulk_noprof(struct kmem_cache *s, gfp_t flags, size_t
				 size, void **p)
 {
-	int i;
+	unsigned int i = 0;
 
 	if (!size)
 		return 0;
@@ -5363,9 +6253,20 @@ int kmem_cache_alloc_bulk_noprof(struct kmem_cache *s, gfp_t flags, size_t size,
 	if (unlikely(!s))
 		return 0;
 
-	i = __kmem_cache_alloc_bulk(s, flags, size, p);
-	if (unlikely(i == 0))
-		return 0;
+	if (s->cpu_sheaves)
+		i = alloc_from_pcs_bulk(s, size, p);
+
+	if (i < size) {
+		/*
+		 * If we ran out of memory, don't bother with freeing back to
+		 * the percpu sheaves, we have bigger problems.
+		 */
+		if (unlikely(__kmem_cache_alloc_bulk(s, flags, size - i, p + i) == 0)) {
+			if (i > 0)
+				__kmem_cache_free_bulk(s, i, p);
+			return 0;
+		}
+	}
 
 	/*
 	 * memcg and kmem_cache debug support and memory initialization.
@@ -5375,11 +6276,11 @@ int kmem_cache_alloc_bulk_noprof(struct kmem_cache *s, gfp_t flags, size_t size,
 			  slab_want_init_on_alloc(flags, s), s->object_size))) {
 		return 0;
 	}
-	return i;
+
+	return size;
 }
 EXPORT_SYMBOL(kmem_cache_alloc_bulk_noprof);
 
-
 /*
  * Object placement in a slab is made very easy because we always start at
 * offset 0. If we tune the size of the object to the alignment then we can
@@ -5513,7 +6414,7 @@ static inline int calculate_order(unsigned int size)
 }
 
 static void
-init_kmem_cache_node(struct kmem_cache_node *n)
+init_kmem_cache_node(struct kmem_cache_node *n, struct node_barn *barn)
 {
 	n->nr_partial = 0;
 	spin_lock_init(&n->list_lock);
@@ -5523,6 +6424,9 @@ init_kmem_cache_node(struct kmem_cache_node *n)
 	atomic_long_set(&n->total_objects, 0);
 	INIT_LIST_HEAD(&n->full);
 #endif
+	n->barn = barn;
+	if (barn)
+		barn_init(barn);
 }
 
 #ifndef CONFIG_SLUB_TINY
@@ -5553,6 +6457,30 @@ static inline int alloc_kmem_cache_cpus(struct kmem_cache *s)
 }
 #endif /* CONFIG_SLUB_TINY */
 
+static int init_percpu_sheaves(struct kmem_cache *s)
+{
+	int cpu;
+
+	for_each_possible_cpu(cpu) {
+		struct slub_percpu_sheaves *pcs;
+		int nid;
+
+		pcs = per_cpu_ptr(s->cpu_sheaves, cpu);
+
+		local_trylock_init(&pcs->lock);
+
+		nid = cpu_to_mem(cpu);
+
+		pcs->barn = get_node(s, nid)->barn;
+		pcs->main = alloc_empty_sheaf(s, GFP_KERNEL);
+
+		if (!pcs->main)
+			return -ENOMEM;
+	}
+
+	return 0;
+}
+
 static struct kmem_cache *kmem_cache_node;
 
 /*
@@ -5588,7 +6516,7 @@ static void early_kmem_cache_node_alloc(int node)
 	slab->freelist = get_freepointer(kmem_cache_node, n);
 	slab->inuse = 1;
 	kmem_cache_node->node[node] = n;
-	init_kmem_cache_node(n);
+	init_kmem_cache_node(n, NULL);
 	inc_slabs_node(kmem_cache_node, node, slab->objects);
 
 	/*
@@ -5604,6 +6532,13 @@ static void free_kmem_cache_nodes(struct kmem_cache *s)
 	struct kmem_cache_node *n;
 
 	for_each_kmem_cache_node(s, node, n) {
+		if (n->barn) {
+			WARN_ON(n->barn->nr_full);
+			WARN_ON(n->barn->nr_empty);
+			kfree(n->barn);
+			n->barn = NULL;
+		}
+
 		s->node[node] = NULL;
 		kmem_cache_free(kmem_cache_node, n);
 	}
@@ -5612,6 +6547,8 @@ static void free_kmem_cache_nodes(struct kmem_cache *s)
 void __kmem_cache_release(struct kmem_cache *s)
 {
 	cache_random_seq_destroy(s);
+	if (s->cpu_sheaves)
+		pcs_destroy(s);
 #ifndef CONFIG_SLUB_TINY
 	free_percpu(s->cpu_slab);
 #endif
@@ -5624,20 +6561,29 @@ static int init_kmem_cache_nodes(struct kmem_cache *s)
 
 	for_each_node_mask(node, slab_nodes) {
 		struct kmem_cache_node *n;
+		struct node_barn *barn = NULL;
 
 		if (slab_state == DOWN) {
 			early_kmem_cache_node_alloc(node);
 			continue;
 		}
+
+		if (s->cpu_sheaves) {
+			barn = kmalloc_node(sizeof(*barn), GFP_KERNEL, node);
+
+			if (!barn)
+				return 0;
+		}
+
 		n = kmem_cache_alloc_node(kmem_cache_node, GFP_KERNEL,
					  node);
-
 		if (!n) {
-			free_kmem_cache_nodes(s);
+			kfree(barn);
 			return 0;
 		}
 
-		init_kmem_cache_node(n);
+		init_kmem_cache_node(n, barn);
+
 		s->node[node] = n;
 	}
 	return 1;
@@ -5894,6 +6840,8 @@ int __kmem_cache_shutdown(struct kmem_cache *s)
 	flush_all_cpus_locked(s);
 	/* Attempt to free all objects */
 	for_each_kmem_cache_node(s, node, n) {
+		if (n->barn)
+			barn_shrink(s, n->barn);
 		free_partial(s, n);
 		if (n->nr_partial || node_nr_slabs(n))
 			return 1;
@@ -6097,6 +7045,9 @@ static int __kmem_cache_do_shrink(struct kmem_cache *s)
 	for (i = 0; i < SHRINK_PROMOTE_MAX; i++)
 		INIT_LIST_HEAD(promote + i);
 
+	if (n->barn)
+		barn_shrink(s, n->barn);
+
 	spin_lock_irqsave(&n->list_lock, flags);
 
 	/*
@@ -6209,12 +7160,24 @@ static int slab_mem_going_online_callback(void *arg)
 	 */
 	mutex_lock(&slab_mutex);
 	list_for_each_entry(s, &slab_caches, list) {
+		struct node_barn *barn = NULL;
+
 		/*
 		 * The structure may already exist if the node was previously
 		 * onlined and offlined.
 		 */
 		if (get_node(s, nid))
 			continue;
+
+		if (s->cpu_sheaves) {
+			barn = kmalloc_node(sizeof(*barn), GFP_KERNEL, nid);
+
+			if (!barn) {
+				ret = -ENOMEM;
+				goto out;
+			}
+		}
+
 		/*
 		 * XXX: kmem_cache_alloc_node will fallback to other nodes
 		 *      since memory is not yet available from the node that
@@ -6222,10 +7185,13 @@ static int slab_mem_going_online_callback(void *arg)
 		 */
 		n = kmem_cache_alloc(kmem_cache_node, GFP_KERNEL);
 		if (!n) {
+			kfree(barn);
 			ret = -ENOMEM;
 			goto out;
 		}
-		init_kmem_cache_node(n);
+
+		init_kmem_cache_node(n, barn);
+
 		s->node[nid] = n;
 	}
 	/*
@@ -6444,6 +7410,17 @@ int do_kmem_cache_create(struct kmem_cache *s, const char *name,
 
 	set_cpu_partial(s);
 
+	if (args->sheaf_capacity && !IS_ENABLED(CONFIG_SLUB_TINY)
+	    && !(s->flags & SLAB_DEBUG_FLAGS)) {
+		s->cpu_sheaves = alloc_percpu(struct slub_percpu_sheaves);
+		if (!s->cpu_sheaves) {
+			err = -ENOMEM;
+			goto out;
+		}
+		// TODO: increase capacity to grow slab_sheaf up to next kmalloc size?
+		s->sheaf_capacity = args->sheaf_capacity;
+	}
+
 #ifdef CONFIG_NUMA
 	s->remote_node_defrag_ratio = 1000;
 #endif
@@ -6460,6 +7437,12 @@ int do_kmem_cache_create(struct kmem_cache *s, const char *name,
 	if (!alloc_kmem_cache_cpus(s))
 		goto out;
 
+	if (s->cpu_sheaves) {
+		err = init_percpu_sheaves(s);
+		if (err)
+			goto out;
+	}
+
 	err = 0;
 
 	/* Mutex is not taken during early boot */
@@ -6481,7 +7464,6 @@ int do_kmem_cache_create(struct kmem_cache *s, const char *name,
 	__kmem_cache_release(s);
 	return err;
 }
-
 #ifdef SLAB_SUPPORTS_SYSFS
 static int count_inuse(struct slab *slab)
 {
@@ -6912,6 +7894,12 @@ static ssize_t order_show(struct kmem_cache *s, char *buf)
 }
 SLAB_ATTR_RO(order);
 
+static ssize_t sheaf_capacity_show(struct kmem_cache *s, char *buf)
+{
+	return sysfs_emit(buf, "%u\n", s->sheaf_capacity);
+}
+SLAB_ATTR_RO(sheaf_capacity);
+
 static ssize_t min_partial_show(struct kmem_cache *s, char *buf)
 {
 	return sysfs_emit(buf, "%lu\n", s->min_partial);
@@ -7259,8 +8247,10 @@ static ssize_t text##_store(struct kmem_cache *s,		\
 }									\
 SLAB_ATTR(text);							\
 
+STAT_ATTR(ALLOC_PCS, alloc_cpu_sheaf);
 STAT_ATTR(ALLOC_FASTPATH, alloc_fastpath);
 STAT_ATTR(ALLOC_SLOWPATH, alloc_slowpath);
+STAT_ATTR(FREE_PCS, free_cpu_sheaf);
 STAT_ATTR(FREE_FASTPATH, free_fastpath);
 STAT_ATTR(FREE_SLOWPATH, free_slowpath);
 STAT_ATTR(FREE_FROZEN, free_frozen);
@@ -7285,6 +8275,14 @@ STAT_ATTR(CPU_PARTIAL_ALLOC, cpu_partial_alloc);
 STAT_ATTR(CPU_PARTIAL_FREE, cpu_partial_free);
 STAT_ATTR(CPU_PARTIAL_NODE, cpu_partial_node);
 STAT_ATTR(CPU_PARTIAL_DRAIN, cpu_partial_drain);
+STAT_ATTR(SHEAF_FLUSH, sheaf_flush);
+STAT_ATTR(SHEAF_REFILL, sheaf_refill);
+STAT_ATTR(SHEAF_ALLOC, sheaf_alloc);
+STAT_ATTR(SHEAF_FREE, sheaf_free);
+STAT_ATTR(BARN_GET, barn_get);
+STAT_ATTR(BARN_GET_FAIL, barn_get_fail);
+STAT_ATTR(BARN_PUT, barn_put);
+STAT_ATTR(BARN_PUT_FAIL, barn_put_fail);
 #endif	/* CONFIG_SLUB_STATS */
 
 #ifdef CONFIG_KFENCE
@@ -7315,6 +8313,7 @@ static struct attribute *slab_attrs[] = {
 	&object_size_attr.attr,
 	&objs_per_slab_attr.attr,
 	&order_attr.attr,
+	&sheaf_capacity_attr.attr,
 	&min_partial_attr.attr,
 	&cpu_partial_attr.attr,
 	&objects_partial_attr.attr,
@@ -7346,8 +8345,10 @@ static struct attribute *slab_attrs[] = {
 	&remote_node_defrag_ratio_attr.attr,
 #endif
 #ifdef CONFIG_SLUB_STATS
+	&alloc_cpu_sheaf_attr.attr,
 	&alloc_fastpath_attr.attr,
 	&alloc_slowpath_attr.attr,
+	&free_cpu_sheaf_attr.attr,
 	&free_fastpath_attr.attr,
 	&free_slowpath_attr.attr,
 	&free_frozen_attr.attr,
@@ -7372,6 +8373,14 @@ static struct attribute *slab_attrs[] = {
 	&cpu_partial_free_attr.attr,
 	&cpu_partial_node_attr.attr,
 	&cpu_partial_drain_attr.attr,
+	&sheaf_flush_attr.attr,
+	&sheaf_refill_attr.attr,
+	&sheaf_alloc_attr.attr,
+	&sheaf_free_attr.attr,
+	&barn_get_attr.attr,
+	&barn_get_fail_attr.attr,
+	&barn_put_attr.attr,
+	&barn_put_fail_attr.attr,
 #endif
 #ifdef CONFIG_FAILSLAB
 	&failslab_attr.attr,
-- 
2.50.1

From nobody Mon Oct 6 08:30:19 2025
From: Vlastimil Babka
Date: Wed, 23 Jul 2025 15:34:35 +0200
Subject: [PATCH v5 02/14] slab: add sheaf support for batching kfree_rcu()
 operations
X-Mailing-List: linux-kernel@vger.kernel.org
MIME-Version: 1.0
Content-Type: text/plain; charset="utf-8"
Message-Id: <20250723-slub-percpu-caches-v5-2-b792cd830f5d@suse.cz>
References: <20250723-slub-percpu-caches-v5-0-b792cd830f5d@suse.cz>
In-Reply-To: <20250723-slub-percpu-caches-v5-0-b792cd830f5d@suse.cz>
To: Suren Baghdasaryan, "Liam R. Howlett", Christoph Lameter, David Rientjes
Cc: Roman Gushchin, Harry Yoo, Uladzislau Rezki, linux-mm@kvack.org,
 linux-kernel@vger.kernel.org, rcu@vger.kernel.org,
 maple-tree@lists.infradead.org, vbabka@suse.cz
X-Mailer: b4 0.14.2

Extend the sheaf infrastructure for more efficient kfree_rcu() handling.
For caches with sheaves, on each cpu maintain a rcu_free sheaf in
addition to the main and spare sheaves. kfree_rcu() operations will try
to put objects on this sheaf.
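For illustration (example only, not part of this patch; "struct foo" and
its release helper are made up), a kfree_rcu() caller needs no changes
to benefit; the batching is selected by the cache the object came from:

	struct foo {
		int data;
		struct rcu_head rcu;	/* used by the fallback path */
	};

	static void foo_release(struct foo *f)
	{
		/*
		 * If f was allocated from a cache created with a non-zero
		 * sheaf_capacity, this free is batched via the percpu
		 * rcu_free sheaf; otherwise it transparently takes the
		 * existing kvfree_rcu() batching path.
		 */
		kfree_rcu(f, rcu);
	}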
Once full, the sheaf is detached and submitted to call_rcu() with a
handler that will try to put it in the barn, or flush to slab pages
using bulk free, when the barn is full. Then a new empty sheaf must be
obtained to put more objects there.

It's possible that no free sheaves are available to use for a new
rcu_free sheaf, and the allocation in kfree_rcu() context can only use
GFP_NOWAIT and thus may fail. In that case, fall back to the existing
kfree_rcu() implementation.

Expected advantages:
- batching the kfree_rcu() operations, that could eventually replace the
  existing batching
- sheaves can be reused for allocations via barn instead of being
  flushed to slabs, which is more efficient
  - this includes cases where only some cpus are allowed to process rcu
    callbacks (Android)

Possible disadvantage:
- objects might be waiting for more than their grace period (it is
  determined by the last object freed into the sheaf), increasing memory
  usage - but the existing batching does that too.

Only implement this for CONFIG_KVFREE_RCU_BATCHED as the tiny
implementation favors smaller memory footprint over performance.

Add CONFIG_SLUB_STATS counters free_rcu_sheaf and free_rcu_sheaf_fail to
count how many kfree_rcu() calls used the rcu_free sheaf successfully
and how many had to fall back to the existing implementation.

Reviewed-by: Harry Yoo
Reviewed-by: Suren Baghdasaryan
Signed-off-by: Vlastimil Babka
---
 mm/slab.h        |   2 +
 mm/slab_common.c |  24 ++++++
 mm/slub.c        | 193 +++++++++++++++++++++++++++++++++++++++++++++++++--
 3 files changed, 214 insertions(+), 5 deletions(-)

diff --git a/mm/slab.h b/mm/slab.h
index 1980330c2fcb4a4613a7e4f7efc78b349993fd89..44c9b70eaabbd87c06fb39b79dfb791d515acbde 100644
--- a/mm/slab.h
+++ b/mm/slab.h
@@ -459,6 +459,8 @@ static inline bool is_kmalloc_normal(struct kmem_cache *s)
 	return !(s->flags & (SLAB_CACHE_DMA|SLAB_ACCOUNT|SLAB_RECLAIM_ACCOUNT));
 }
 
+bool __kfree_rcu_sheaf(struct kmem_cache *s, void *obj);
+
 #define SLAB_CORE_FLAGS (SLAB_HWCACHE_ALIGN | SLAB_CACHE_DMA | \
			 SLAB_CACHE_DMA32 | SLAB_PANIC | \
			 SLAB_TYPESAFE_BY_RCU | SLAB_DEBUG_OBJECTS | \
diff --git a/mm/slab_common.c b/mm/slab_common.c
index e2b197e47866c30acdbd1fee4159f262a751c5a7..2d806e02568532a1000fd3912db6978e945dcfa8 100644
--- a/mm/slab_common.c
+++ b/mm/slab_common.c
@@ -1608,6 +1608,27 @@ static void kfree_rcu_work(struct work_struct *work)
 		kvfree_rcu_list(head);
 }
 
+static bool kfree_rcu_sheaf(void *obj)
+{
+	struct kmem_cache *s;
+	struct folio *folio;
+	struct slab *slab;
+
+	if (is_vmalloc_addr(obj))
+		return false;
+
+	folio = virt_to_folio(obj);
+	if (unlikely(!folio_test_slab(folio)))
+		return false;
+
+	slab = folio_slab(folio);
+	s = slab->slab_cache;
+	if (s->cpu_sheaves)
+		return __kfree_rcu_sheaf(s, obj);
+
+	return false;
+}
+
 static bool
 need_offload_krc(struct kfree_rcu_cpu *krcp)
 {
@@ -1952,6 +1973,9 @@ void kvfree_call_rcu(struct rcu_head *head, void *ptr)
 	if (!head)
 		might_sleep();
 
+	if (kfree_rcu_sheaf(ptr))
+		return;
+
 	// Queue the object but don't yet schedule the batch.
 	if (debug_rcu_head_queue(ptr)) {
 		// Probable double kfree_rcu(), just leak.
Reviewed-by: Harry Yoo
Reviewed-by: Suren Baghdasaryan
Signed-off-by: Vlastimil Babka
---
 mm/slab.h        |   2 +
 mm/slab_common.c |  24 ++++++
 mm/slub.c        | 193 +++++++++++++++++++++++++++++++++++++++++++++++++++--
 3 files changed, 214 insertions(+), 5 deletions(-)

diff --git a/mm/slab.h b/mm/slab.h
index 1980330c2fcb4a4613a7e4f7efc78b349993fd89..44c9b70eaabbd87c06fb39b79dfb791d515acbde 100644
--- a/mm/slab.h
+++ b/mm/slab.h
@@ -459,6 +459,8 @@ static inline bool is_kmalloc_normal(struct kmem_cache *s)
 	return !(s->flags & (SLAB_CACHE_DMA|SLAB_ACCOUNT|SLAB_RECLAIM_ACCOUNT));
 }
 
+bool __kfree_rcu_sheaf(struct kmem_cache *s, void *obj);
+
 #define SLAB_CORE_FLAGS (SLAB_HWCACHE_ALIGN | SLAB_CACHE_DMA | \
 			 SLAB_CACHE_DMA32 | SLAB_PANIC | \
 			 SLAB_TYPESAFE_BY_RCU | SLAB_DEBUG_OBJECTS | \
diff --git a/mm/slab_common.c b/mm/slab_common.c
index e2b197e47866c30acdbd1fee4159f262a751c5a7..2d806e02568532a1000fd3912db6978e945dcfa8 100644
--- a/mm/slab_common.c
+++ b/mm/slab_common.c
@@ -1608,6 +1608,27 @@ static void kfree_rcu_work(struct work_struct *work)
 	kvfree_rcu_list(head);
 }
 
+static bool kfree_rcu_sheaf(void *obj)
+{
+	struct kmem_cache *s;
+	struct folio *folio;
+	struct slab *slab;
+
+	if (is_vmalloc_addr(obj))
+		return false;
+
+	folio = virt_to_folio(obj);
+	if (unlikely(!folio_test_slab(folio)))
+		return false;
+
+	slab = folio_slab(folio);
+	s = slab->slab_cache;
+	if (s->cpu_sheaves)
+		return __kfree_rcu_sheaf(s, obj);
+
+	return false;
+}
+
 static bool need_offload_krc(struct kfree_rcu_cpu *krcp)
 {
@@ -1952,6 +1973,9 @@ void kvfree_call_rcu(struct rcu_head *head, void *ptr)
 	if (!head)
 		might_sleep();
 
+	if (kfree_rcu_sheaf(ptr))
+		return;
+
 	// Queue the object but don't yet schedule the batch.
 	if (debug_rcu_head_queue(ptr)) {
 		// Probable double kfree_rcu(), just leak.
diff --git a/mm/slub.c b/mm/slub.c
index 6543aaade60b0adaab232b2256d65c1042c62e1c..f6d86cd3983533784583f1df6add186c4a74cd97 100644
--- a/mm/slub.c
+++ b/mm/slub.c
@@ -350,6 +350,8 @@ enum stat_item {
 	ALLOC_FASTPATH,		/* Allocation from cpu slab */
 	ALLOC_SLOWPATH,		/* Allocation by getting a new cpu slab */
 	FREE_PCS,		/* Free to percpu sheaf */
+	FREE_RCU_SHEAF,		/* Free to rcu_free sheaf */
+	FREE_RCU_SHEAF_FAIL,	/* Failed to free to a rcu_free sheaf */
 	FREE_FASTPATH,		/* Free to cpu slab */
 	FREE_SLOWPATH,		/* Freeing not to cpu slab */
 	FREE_FROZEN,		/* Freeing to frozen slab */
@@ -444,6 +446,7 @@ struct slab_sheaf {
 		struct rcu_head rcu_head;
 		struct list_head barn_list;
 	};
+	struct kmem_cache *cache;
 	unsigned int size;
 	void *objects[];
 };
@@ -452,6 +455,7 @@ struct slub_percpu_sheaves {
 	local_trylock_t lock;
 	struct slab_sheaf *main; /* never NULL when unlocked */
 	struct slab_sheaf *spare; /* empty or full, may be NULL */
+	struct slab_sheaf *rcu_free; /* for batching kfree_rcu() */
 	struct node_barn *barn;
 };
 
@@ -2490,6 +2494,8 @@ static struct slab_sheaf *alloc_empty_sheaf(struct kmem_cache *s, gfp_t gfp)
 	if (unlikely(!sheaf))
 		return NULL;
 
+	sheaf->cache = s;
+
 	stat(s, SHEAF_ALLOC);
 
 	return sheaf;
@@ -2614,6 +2620,43 @@ static void sheaf_flush_unused(struct kmem_cache *s, struct slab_sheaf *sheaf)
 	sheaf->size = 0;
 }
 
+static void __rcu_free_sheaf_prepare(struct kmem_cache *s,
+				     struct slab_sheaf *sheaf)
+{
+	bool init = slab_want_init_on_free(s);
+	void **p = &sheaf->objects[0];
+	unsigned int i = 0;
+
+	while (i < sheaf->size) {
+		struct slab *slab = virt_to_slab(p[i]);
+
+		memcg_slab_free_hook(s, slab, p + i, 1);
+		alloc_tagging_slab_free_hook(s, slab, p + i, 1);
+
+		if (unlikely(!slab_free_hook(s, p[i], init, true))) {
+			p[i] = p[--sheaf->size];
+			continue;
+		}
+
+		i++;
+	}
+}
+
+static void rcu_free_sheaf_nobarn(struct rcu_head *head)
+{
+	struct slab_sheaf *sheaf;
+	struct kmem_cache *s;
+
+	sheaf = container_of(head, struct slab_sheaf, rcu_head);
+	s = sheaf->cache;
+
+	__rcu_free_sheaf_prepare(s, sheaf);
+
+	sheaf_flush_unused(s, sheaf);
+
+	free_empty_sheaf(s, sheaf);
+}
+
 /*
  * Caller needs to make sure migration is disabled in order to fully flush
  * single cpu's sheaves
@@ -2626,7 +2669,7 @@ static void sheaf_flush_unused(struct kmem_cache *s, struct slab_sheaf *sheaf)
 static void pcs_flush_all(struct kmem_cache *s)
 {
 	struct slub_percpu_sheaves *pcs;
-	struct slab_sheaf *spare;
+	struct slab_sheaf *spare, *rcu_free;
 
 	local_lock(&s->cpu_sheaves->lock);
 	pcs = this_cpu_ptr(s->cpu_sheaves);
@@ -2634,6 +2677,9 @@ static void pcs_flush_all(struct kmem_cache *s)
 	spare = pcs->spare;
 	pcs->spare = NULL;
 
+	rcu_free = pcs->rcu_free;
+	pcs->rcu_free = NULL;
+
 	local_unlock(&s->cpu_sheaves->lock);
 
 	if (spare) {
@@ -2641,6 +2687,9 @@ static void pcs_flush_all(struct kmem_cache *s)
 		free_empty_sheaf(s, spare);
 	}
 
+	if (rcu_free)
+		call_rcu(&rcu_free->rcu_head, rcu_free_sheaf_nobarn);
+
 	sheaf_flush_main(s);
 }
 
@@ -2657,6 +2706,11 @@ static void __pcs_flush_all_cpu(struct kmem_cache *s, unsigned int cpu)
 		free_empty_sheaf(s, pcs->spare);
 		pcs->spare = NULL;
 	}
+
+	if (pcs->rcu_free) {
+		call_rcu(&pcs->rcu_free->rcu_head, rcu_free_sheaf_nobarn);
+		pcs->rcu_free = NULL;
+	}
 }
 
 static void pcs_destroy(struct kmem_cache *s)
@@ -2682,6 +2736,7 @@ static void pcs_destroy(struct kmem_cache *s)
 		 */
 
 		WARN_ON(pcs->spare);
+		WARN_ON(pcs->rcu_free);
 
 		if (!WARN_ON(pcs->main->size)) {
 			free_empty_sheaf(s, pcs->main);
@@ -3742,7 +3797,7 @@ static bool has_pcs_used(int cpu, struct kmem_cache *s)
 
 	pcs = per_cpu_ptr(s->cpu_sheaves, cpu);
 
-	return (pcs->spare || pcs->main->size);
+	return (pcs->spare || pcs->rcu_free || pcs->main->size);
 }
 
 static void pcs_flush_all(struct kmem_cache *s);
@@ -5339,6 +5394,127 @@ bool free_to_pcs(struct kmem_cache *s, void *object)
 	return true;
 }
 
+static void rcu_free_sheaf(struct rcu_head *head)
+{
+	struct slab_sheaf *sheaf;
+	struct node_barn *barn;
+	struct kmem_cache *s;
+
+	sheaf = container_of(head, struct slab_sheaf, rcu_head);
+
+	s = sheaf->cache;
+
+	/*
+	 * This may remove some objects due to slab_free_hook() returning false,
+	 * so that the sheaf might no longer be completely full. But it's easier
+	 * to handle it as full (unless it became completely empty), as the code
+	 * handles it fine. The only downside is that the sheaf will serve fewer
+	 * allocations when reused. It only happens due to debugging, which is a
+	 * performance hit anyway.
+	 */
+	__rcu_free_sheaf_prepare(s, sheaf);
+
+	barn = get_node(s, numa_mem_id())->barn;
+
+	/* due to slab_free_hook() */
+	if (unlikely(sheaf->size == 0))
+		goto empty;
+
+	/*
+	 * Checking nr_full/nr_empty outside lock avoids contention in case the
+	 * barn is at the respective limit. Due to the race we might go over the
+	 * limit but that should be rare and harmless.
+	 */
+
+	if (data_race(barn->nr_full) < MAX_FULL_SHEAVES) {
+		stat(s, BARN_PUT);
+		barn_put_full_sheaf(barn, sheaf);
+		return;
+	}
+
+	stat(s, BARN_PUT_FAIL);
+	sheaf_flush_unused(s, sheaf);
+
+empty:
+	if (data_race(barn->nr_empty) < MAX_EMPTY_SHEAVES) {
+		barn_put_empty_sheaf(barn, sheaf);
+		return;
+	}
+
+	free_empty_sheaf(s, sheaf);
+}
+
+bool __kfree_rcu_sheaf(struct kmem_cache *s, void *obj)
+{
+	struct slub_percpu_sheaves *pcs;
+	struct slab_sheaf *rcu_sheaf;
+
+	if (!local_trylock(&s->cpu_sheaves->lock))
+		goto fail;
+
+	pcs = this_cpu_ptr(s->cpu_sheaves);
+
+	if (unlikely(!pcs->rcu_free)) {
+
+		struct slab_sheaf *empty;
+
+		if (pcs->spare && pcs->spare->size == 0) {
+			pcs->rcu_free = pcs->spare;
+			pcs->spare = NULL;
+			goto do_free;
+		}
+
+		empty = barn_get_empty_sheaf(pcs->barn);
+
+		if (empty) {
+			pcs->rcu_free = empty;
+			goto do_free;
+		}
+
+		local_unlock(&s->cpu_sheaves->lock);
+
+		empty = alloc_empty_sheaf(s, GFP_NOWAIT);
+
+		if (!empty)
+			goto fail;
+
+		if (!local_trylock(&s->cpu_sheaves->lock)) {
+			barn_put_empty_sheaf(pcs->barn, empty);
+			goto fail;
+		}
+
+		pcs = this_cpu_ptr(s->cpu_sheaves);
+
+		if (unlikely(pcs->rcu_free))
+			barn_put_empty_sheaf(pcs->barn, empty);
+		else
+			pcs->rcu_free = empty;
+	}
+
+do_free:
+
+	rcu_sheaf = pcs->rcu_free;
+
+	rcu_sheaf->objects[rcu_sheaf->size++] = obj;
+
+	if (likely(rcu_sheaf->size < s->sheaf_capacity))
+		rcu_sheaf = NULL;
+	else
+		pcs->rcu_free = NULL;
+
+	local_unlock(&s->cpu_sheaves->lock);
+
+	if (rcu_sheaf)
+		call_rcu(&rcu_sheaf->rcu_head, rcu_free_sheaf);
+
+	stat(s, FREE_RCU_SHEAF);
+	return true;
+
+fail:
+	stat(s, FREE_RCU_SHEAF_FAIL);
+	return false;
+}
+
 /*
  * Bulk free objects to the percpu sheaves.
  * Unlike free_to_pcs() this includes the calls to all necessary hooks
@@ -5348,10 +5524,8 @@ static void free_to_pcs_bulk(struct kmem_cache *s, size_t size, void **p)
 {
 	struct slub_percpu_sheaves *pcs;
 	struct slab_sheaf *main, *empty;
+	bool init = slab_want_init_on_free(s);
 	unsigned int batch, i = 0;
-	bool init;
-
-	init = slab_want_init_on_free(s);
 
 	while (i < size) {
 		struct slab *slab = virt_to_slab(p[i]);
@@ -6838,6 +7012,11 @@ int __kmem_cache_shutdown(struct kmem_cache *s)
 	struct kmem_cache_node *n;
 
 	flush_all_cpus_locked(s);
+
+	/* we might have rcu sheaves in flight */
+	if (s->cpu_sheaves)
+		rcu_barrier();
+
 	/* Attempt to free all objects */
 	for_each_kmem_cache_node(s, node, n) {
 		if (n->barn)
@@ -8251,6 +8430,8 @@ STAT_ATTR(ALLOC_PCS, alloc_cpu_sheaf);
 STAT_ATTR(ALLOC_FASTPATH, alloc_fastpath);
 STAT_ATTR(ALLOC_SLOWPATH, alloc_slowpath);
 STAT_ATTR(FREE_PCS, free_cpu_sheaf);
+STAT_ATTR(FREE_RCU_SHEAF, free_rcu_sheaf);
+STAT_ATTR(FREE_RCU_SHEAF_FAIL, free_rcu_sheaf_fail);
 STAT_ATTR(FREE_FASTPATH, free_fastpath);
 STAT_ATTR(FREE_SLOWPATH, free_slowpath);
 STAT_ATTR(FREE_FROZEN, free_frozen);
@@ -8349,6 +8530,8 @@ static struct attribute *slab_attrs[] = {
 	&alloc_fastpath_attr.attr,
 	&alloc_slowpath_attr.attr,
 	&free_cpu_sheaf_attr.attr,
+	&free_rcu_sheaf_attr.attr,
+	&free_rcu_sheaf_fail_attr.attr,
 	&free_fastpath_attr.attr,
 	&free_slowpath_attr.attr,
 	&free_frozen_attr.attr,
-- 
2.50.1

From nobody Mon Oct 6 08:30:19 2025
From: Vlastimil Babka
Date: Wed, 23 Jul 2025 15:34:36 +0200
Subject: [PATCH v5 03/14] slab: sheaf prefilling for guaranteed allocations
Message-Id: <20250723-slub-percpu-caches-v5-3-b792cd830f5d@suse.cz>
References: <20250723-slub-percpu-caches-v5-0-b792cd830f5d@suse.cz>
In-Reply-To: <20250723-slub-percpu-caches-v5-0-b792cd830f5d@suse.cz>
To: Suren Baghdasaryan, "Liam R. Howlett", Christoph Lameter, David Rientjes
Cc: Roman Gushchin, Harry Yoo, Uladzislau Rezki, linux-mm@kvack.org,
 linux-kernel@vger.kernel.org, rcu@vger.kernel.org,
 maple-tree@lists.infradead.org, vbabka@suse.cz

Add functions for efficient guaranteed allocations, e.g. in a critical
section that cannot sleep, when the exact number of allocations is not
known beforehand but an upper limit can be calculated.

kmem_cache_prefill_sheaf() returns a sheaf containing at least the given
number of objects.

kmem_cache_alloc_from_sheaf() will allocate an object from the sheaf and
is guaranteed not to fail until the sheaf is depleted.

kmem_cache_return_sheaf() is for giving the sheaf back to the slab
allocator after the critical section. This will also attempt to refill
it to the cache's sheaf capacity for better efficiency of sheaves
handling, but it's not strictly necessary for that to succeed.

kmem_cache_refill_sheaf() can be used to refill a previously obtained
sheaf to the requested size. If the current size is sufficient, it does
nothing. If the requested size exceeds the cache's sheaf_capacity and
the sheaf's current capacity, the sheaf will be replaced with a new one,
hence the indirect pointer parameter.

kmem_cache_sheaf_size() can be used to query the current size.

The implementation supports requesting sizes that exceed the cache's
sheaf_capacity, but it is not efficient - such "oversize" sheaves are
allocated fresh in kmem_cache_prefill_sheaf() and flushed and freed
immediately by kmem_cache_return_sheaf(). kmem_cache_refill_sheaf()
might be especially ineffective when replacing a sheaf with a new one of
a larger capacity. It is therefore better to size the cache's
sheaf_capacity accordingly to make oversize sheaves exceptional.

CONFIG_SLUB_STATS counters are added for the sheaf prefill and return
operations. A prefill or return is considered _fast when it is able to
grab or return a percpu spare sheaf (even if the sheaf needs a refill to
satisfy the request, as those should amortize over time), and _slow
otherwise (when the barn or even sheaf allocation/freeing has to be
involved).
sheaf_prefill_oversize is provided to determine how many prefills were
oversize (a counter for oversize returns is not necessary, as all
oversize refills result in oversize returns).

When slub_debug is enabled for a cache with sheaves, no percpu sheaves
exist for it, but the prefill functionality is still provided simply by
all prefilled sheaves becoming oversize. If percpu sheaves are not
created for a cache due to not passing the sheaf_capacity argument on
cache creation, the prefills also work through oversize sheaves, but
there's a WARN_ON_ONCE() to indicate the omission.
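A minimal usage sketch of the API described above (the lock, the
max_objs bound and the error handling are illustrative, not taken from
the patch):

	struct slab_sheaf *sheaf;
	void *obj;

	/* max_objs is the precalculated upper bound for the section */
	sheaf = kmem_cache_prefill_sheaf(s, GFP_KERNEL, max_objs);
	if (!sheaf)
		return -ENOMEM;

	spin_lock(&lock);		/* no sleeping allowed from here */

	/* cannot fail until max_objs objects have been taken */
	obj = kmem_cache_alloc_from_sheaf(s, GFP_KERNEL, sheaf);
	/* ... use obj, allocate more as needed ... */

	spin_unlock(&lock);

	/* reattach as the percpu spare, or refill and put in the barn */
	kmem_cache_return_sheaf(s, GFP_KERNEL, sheaf);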
Reviewed-by: Suren Baghdasaryan
Reviewed-by: Harry Yoo
Signed-off-by: Vlastimil Babka
---
 include/linux/slab.h |  16 ++++
 mm/slub.c            | 265 +++++++++++++++++++++++++++++++++++++++++++++++++++
 2 files changed, 281 insertions(+)

diff --git a/include/linux/slab.h b/include/linux/slab.h
index 6cfd085907afb8fc6e502ff7a1a1830c52ff9125..3ff70547db49d0880b1b6cb100527936e88ca509 100644
--- a/include/linux/slab.h
+++ b/include/linux/slab.h
@@ -829,6 +829,22 @@ void *kmem_cache_alloc_node_noprof(struct kmem_cache *s, gfp_t flags, int node)
 					   __assume_slab_alignment __malloc;
 #define kmem_cache_alloc_node(...)	alloc_hooks(kmem_cache_alloc_node_noprof(__VA_ARGS__))
 
+struct slab_sheaf *
+kmem_cache_prefill_sheaf(struct kmem_cache *s, gfp_t gfp, unsigned int size);
+
+int kmem_cache_refill_sheaf(struct kmem_cache *s, gfp_t gfp,
+		struct slab_sheaf **sheafp, unsigned int size);
+
+void kmem_cache_return_sheaf(struct kmem_cache *s, gfp_t gfp,
+		struct slab_sheaf *sheaf);
+
+void *kmem_cache_alloc_from_sheaf_noprof(struct kmem_cache *cachep, gfp_t gfp,
+		struct slab_sheaf *sheaf) __assume_slab_alignment __malloc;
+#define kmem_cache_alloc_from_sheaf(...)	\
+		alloc_hooks(kmem_cache_alloc_from_sheaf_noprof(__VA_ARGS__))
+
+unsigned int kmem_cache_sheaf_size(struct slab_sheaf *sheaf);
+
 /*
  * These macros allow declaring a kmem_buckets * parameter alongside size, which
  * can be compiled out with CONFIG_SLAB_BUCKETS=n so that a large number of call
diff --git a/mm/slub.c b/mm/slub.c
index f6d86cd3983533784583f1df6add186c4a74cd97..8b3093ee2e02c9ff4e149ac54833db4972b414a3 100644
--- a/mm/slub.c
+++ b/mm/slub.c
@@ -384,6 +384,11 @@ enum stat_item {
 	BARN_GET_FAIL,		/* Failed to get full sheaf from barn */
 	BARN_PUT,		/* Put full sheaf to barn */
 	BARN_PUT_FAIL,		/* Failed to put full sheaf to barn */
+	SHEAF_PREFILL_FAST,	/* Sheaf prefill grabbed the spare sheaf */
+	SHEAF_PREFILL_SLOW,	/* Sheaf prefill found no spare sheaf */
+	SHEAF_PREFILL_OVERSIZE,	/* Allocation of oversize sheaf for prefill */
+	SHEAF_RETURN_FAST,	/* Sheaf return reattached spare sheaf */
+	SHEAF_RETURN_SLOW,	/* Sheaf return could not reattach spare */
 	NR_SLUB_STAT_ITEMS
 };
 
@@ -445,6 +450,8 @@ struct slab_sheaf {
 	union {
 		struct rcu_head rcu_head;
 		struct list_head barn_list;
+		/* only used for prefilled sheafs */
+		unsigned int capacity;
 	};
 	struct kmem_cache *cache;
 	unsigned int size;
@@ -2797,6 +2804,30 @@ static void barn_put_full_sheaf(struct node_barn *barn, struct slab_sheaf *sheaf)
 	spin_unlock_irqrestore(&barn->lock, flags);
 }
 
+static struct slab_sheaf *barn_get_full_or_empty_sheaf(struct node_barn *barn)
+{
+	struct slab_sheaf *sheaf = NULL;
+	unsigned long flags;
+
+	spin_lock_irqsave(&barn->lock, flags);
+
+	if (barn->nr_full) {
+		sheaf = list_first_entry(&barn->sheaves_full, struct slab_sheaf,
+					 barn_list);
+		list_del(&sheaf->barn_list);
+		barn->nr_full--;
+	} else if (barn->nr_empty) {
+		sheaf = list_first_entry(&barn->sheaves_empty,
+					 struct slab_sheaf, barn_list);
+		list_del(&sheaf->barn_list);
+		barn->nr_empty--;
+	}
+
+	spin_unlock_irqrestore(&barn->lock, flags);
+
+	return sheaf;
+}
+
 /*
  * If a full sheaf is available, return it and put the supplied empty one to
  * barn. We ignore the limit on empty sheaves as the number of sheaves doesn't
@@ -4919,6 +4950,230 @@ void *kmem_cache_alloc_node_noprof(struct kmem_cache *s, gfp_t gfpflags, int node)
 }
 EXPORT_SYMBOL(kmem_cache_alloc_node_noprof);
 
+/*
+ * returns a sheaf that has at least the requested size
+ * when prefilling is needed, do so with given gfp flags
+ *
+ * return NULL if sheaf allocation or prefilling failed
+ */
+struct slab_sheaf *
+kmem_cache_prefill_sheaf(struct kmem_cache *s, gfp_t gfp, unsigned int size)
+{
+	struct slub_percpu_sheaves *pcs;
+	struct slab_sheaf *sheaf = NULL;
+
+	if (unlikely(size > s->sheaf_capacity)) {
+
+		/*
+		 * slab_debug disables cpu sheaves intentionally so all
+		 * prefilled sheaves become "oversize" and we give up on
+		 * performance for the debugging. Same with SLUB_TINY.
+		 * Creating a cache without sheaves and then requesting a
+		 * prefilled sheaf is however not expected, so warn.
+		 */
+		WARN_ON_ONCE(s->sheaf_capacity == 0 &&
+			     !IS_ENABLED(CONFIG_SLUB_TINY) &&
+			     !(s->flags & SLAB_DEBUG_FLAGS));
+
+		sheaf = kzalloc(struct_size(sheaf, objects, size), gfp);
+		if (!sheaf)
+			return NULL;
+
+		stat(s, SHEAF_PREFILL_OVERSIZE);
+		sheaf->cache = s;
+		sheaf->capacity = size;
+
+		if (!__kmem_cache_alloc_bulk(s, gfp, size,
+					     &sheaf->objects[0])) {
+			kfree(sheaf);
+			return NULL;
+		}
+
+		sheaf->size = size;
+
+		return sheaf;
+	}
+
+	local_lock(&s->cpu_sheaves->lock);
+	pcs = this_cpu_ptr(s->cpu_sheaves);
+
+	if (pcs->spare) {
+		sheaf = pcs->spare;
+		pcs->spare = NULL;
+		stat(s, SHEAF_PREFILL_FAST);
+	} else {
+		stat(s, SHEAF_PREFILL_SLOW);
+		sheaf = barn_get_full_or_empty_sheaf(pcs->barn);
+		if (sheaf && sheaf->size)
+			stat(s, BARN_GET);
+		else
+			stat(s, BARN_GET_FAIL);
+	}
+
+	local_unlock(&s->cpu_sheaves->lock);
+
+
+	if (!sheaf)
+		sheaf = alloc_empty_sheaf(s, gfp);
+
+	if (sheaf && sheaf->size < size) {
+		if (refill_sheaf(s, sheaf, gfp)) {
+			sheaf_flush_unused(s, sheaf);
+			free_empty_sheaf(s, sheaf);
+			sheaf = NULL;
+		}
+	}
+
+	if (sheaf)
+		sheaf->capacity = s->sheaf_capacity;
+
+	return sheaf;
+}
+
+/*
+ * Use this to return a sheaf obtained by kmem_cache_prefill_sheaf()
+ *
+ * If the sheaf cannot simply become the percpu spare sheaf, but there's space
+ * for a full sheaf in the barn, we try to refill the sheaf back to the cache's
+ * sheaf_capacity to avoid handling partially full sheaves.
+ *
+ * If the refill fails because gfp is e.g. GFP_NOWAIT, or the barn is full, the
+ * sheaf is instead flushed and freed.
+ */
+void kmem_cache_return_sheaf(struct kmem_cache *s, gfp_t gfp,
+			     struct slab_sheaf *sheaf)
+{
+	struct slub_percpu_sheaves *pcs;
+	struct node_barn *barn;
+
+	if (unlikely(sheaf->capacity != s->sheaf_capacity)) {
+		sheaf_flush_unused(s, sheaf);
+		kfree(sheaf);
+		return;
+	}
+
+	local_lock(&s->cpu_sheaves->lock);
+	pcs = this_cpu_ptr(s->cpu_sheaves);
+
+	if (!pcs->spare) {
+		pcs->spare = sheaf;
+		sheaf = NULL;
+		stat(s, SHEAF_RETURN_FAST);
+	}
+
+	local_unlock(&s->cpu_sheaves->lock);
+
+	if (!sheaf)
+		return;
+
+	stat(s, SHEAF_RETURN_SLOW);
+
+	/* Accessing pcs->barn outside local_lock is safe. */
+	barn = pcs->barn;
+
+	/*
+	 * If the barn has too many full sheaves or we fail to refill the sheaf,
+	 * simply flush and free it.
+	 */
+	if (data_race(pcs->barn->nr_full) >= MAX_FULL_SHEAVES ||
+	    refill_sheaf(s, sheaf, gfp)) {
+		sheaf_flush_unused(s, sheaf);
+		free_empty_sheaf(s, sheaf);
+		return;
+	}
+
+	barn_put_full_sheaf(barn, sheaf);
+	stat(s, BARN_PUT);
+}
+
+/*
+ * refill a sheaf previously returned by kmem_cache_prefill_sheaf to at least
+ * the given size
+ *
+ * the sheaf might be replaced by a new one when requesting more than
+ * s->sheaf_capacity objects, if such replacement is necessary; if the refill
+ * fails (returning -ENOMEM), the existing sheaf is left intact
+ *
+ * In practice we always refill to the full sheaf's capacity.
+ */
+int kmem_cache_refill_sheaf(struct kmem_cache *s, gfp_t gfp,
+			    struct slab_sheaf **sheafp, unsigned int size)
+{
+	struct slab_sheaf *sheaf;
+
+	/*
+	 * TODO: do we want to support *sheaf == NULL to be equivalent of
+	 * kmem_cache_prefill_sheaf() ?
+	 */
+	if (!sheafp || !(*sheafp))
+		return -EINVAL;
+
+	sheaf = *sheafp;
+	if (sheaf->size >= size)
+		return 0;
+
+	if (likely(sheaf->capacity >= size)) {
+		if (likely(sheaf->capacity == s->sheaf_capacity))
+			return refill_sheaf(s, sheaf, gfp);
+
+		if (!__kmem_cache_alloc_bulk(s, gfp, sheaf->capacity - sheaf->size,
+					     &sheaf->objects[sheaf->size])) {
+			return -ENOMEM;
+		}
+		sheaf->size = sheaf->capacity;
+
+		return 0;
+	}
+
+	/*
+	 * We had a regular sized sheaf and need an oversize one, or we had an
+	 * oversize one already but need a larger one now.
+	 * This should be a very rare path so let's not complicate it.
+	 */
+	sheaf = kmem_cache_prefill_sheaf(s, gfp, size);
+	if (!sheaf)
+		return -ENOMEM;
+
+	kmem_cache_return_sheaf(s, gfp, *sheafp);
+	*sheafp = sheaf;
+	return 0;
+}
+
+/*
+ * Allocate from a sheaf obtained by kmem_cache_prefill_sheaf()
+ *
+ * Guaranteed not to fail as many allocations as was the requested size.
+ * After the sheaf is emptied, it fails - no fallback to the slab cache itself.
+ *
+ * The gfp parameter is meant only to specify __GFP_ZERO or __GFP_ACCOUNT;
+ * memcg charging is forced over limit if necessary, to avoid failure.
+ */
+void *
+kmem_cache_alloc_from_sheaf_noprof(struct kmem_cache *s, gfp_t gfp,
+				   struct slab_sheaf *sheaf)
+{
+	void *ret = NULL;
+	bool init;
+
+	if (sheaf->size == 0)
+		goto out;
+
+	ret = sheaf->objects[--sheaf->size];
+
+	init = slab_want_init_on_alloc(gfp, s);
+
+	/* add __GFP_NOFAIL to force successful memcg charging */
+	slab_post_alloc_hook(s, NULL, gfp | __GFP_NOFAIL, 1, &ret, init, s->object_size);
+out:
+	trace_kmem_cache_alloc(_RET_IP_, ret, s, gfp, NUMA_NO_NODE);
+
+	return ret;
+}
+
+unsigned int kmem_cache_sheaf_size(struct slab_sheaf *sheaf)
+{
+	return sheaf->size;
+}
 /*
  * To avoid unnecessary overhead, we pass through large allocation requests
  * directly to the page allocator. We use __GFP_COMP, because we will need to
@@ -8464,6 +8719,11 @@ STAT_ATTR(BARN_GET, barn_get);
 STAT_ATTR(BARN_GET_FAIL, barn_get_fail);
 STAT_ATTR(BARN_PUT, barn_put);
 STAT_ATTR(BARN_PUT_FAIL, barn_put_fail);
+STAT_ATTR(SHEAF_PREFILL_FAST, sheaf_prefill_fast);
+STAT_ATTR(SHEAF_PREFILL_SLOW, sheaf_prefill_slow);
+STAT_ATTR(SHEAF_PREFILL_OVERSIZE, sheaf_prefill_oversize);
+STAT_ATTR(SHEAF_RETURN_FAST, sheaf_return_fast);
+STAT_ATTR(SHEAF_RETURN_SLOW, sheaf_return_slow);
 #endif	/* CONFIG_SLUB_STATS */
 
 #ifdef CONFIG_KFENCE
@@ -8564,6 +8824,11 @@ static struct attribute *slab_attrs[] = {
 	&barn_get_fail_attr.attr,
 	&barn_put_attr.attr,
 	&barn_put_fail_attr.attr,
+	&sheaf_prefill_fast_attr.attr,
+	&sheaf_prefill_slow_attr.attr,
+	&sheaf_prefill_oversize_attr.attr,
+	&sheaf_return_fast_attr.attr,
+	&sheaf_return_slow_attr.attr,
 #endif
 #ifdef CONFIG_FAILSLAB
 	&failslab_attr.attr,
-- 
2.50.1

From nobody Mon Oct 6 08:30:19 2025
From: Vlastimil Babka
Date: Wed, 23 Jul 2025 15:34:37 +0200
Subject: [PATCH v5 04/14] slab: determine barn status racily outside of lock
Message-Id: <20250723-slub-percpu-caches-v5-4-b792cd830f5d@suse.cz>
References: <20250723-slub-percpu-caches-v5-0-b792cd830f5d@suse.cz>
In-Reply-To: <20250723-slub-percpu-caches-v5-0-b792cd830f5d@suse.cz>
To: Suren Baghdasaryan, "Liam R. Howlett", Christoph Lameter, David Rientjes
Cc: Roman Gushchin, Harry Yoo, Uladzislau Rezki, linux-mm@kvack.org,
 linux-kernel@vger.kernel.org, rcu@vger.kernel.org,
 maple-tree@lists.infradead.org, vbabka@suse.cz
Howlett" , Christoph Lameter , David Rientjes Cc: Roman Gushchin , Harry Yoo , Uladzislau Rezki , linux-mm@kvack.org, linux-kernel@vger.kernel.org, rcu@vger.kernel.org, maple-tree@lists.infradead.org, vbabka@suse.cz X-Mailer: b4 0.14.2 X-Spamd-Result: default: False [-4.51 / 50.00]; BAYES_HAM(-3.00)[100.00%]; NEURAL_HAM_LONG(-1.00)[-1.000]; R_DKIM_ALLOW(-0.20)[suse.cz:s=susede2_rsa,suse.cz:s=susede2_ed25519]; NEURAL_HAM_SHORT(-0.20)[-1.000]; MIME_GOOD(-0.10)[text/plain]; MX_GOOD(-0.01)[]; RBL_SPAMHAUS_BLOCKED_OPENRESOLVER(0.00)[2a07:de40:b281:104:10:150:64:97:from]; RCVD_VIA_SMTP_AUTH(0.00)[]; ARC_NA(0.00)[]; MIME_TRACE(0.00)[0:+]; FUZZY_RATELIMITED(0.00)[rspamd.com]; TO_DN_SOME(0.00)[]; RCPT_COUNT_TWELVE(0.00)[12]; RCVD_TLS_ALL(0.00)[]; FREEMAIL_ENVRCPT(0.00)[gmail.com]; MID_RHS_MATCH_FROM(0.00)[]; FROM_EQ_ENVFROM(0.00)[]; FROM_HAS_DN(0.00)[]; FREEMAIL_CC(0.00)[linux.dev,oracle.com,gmail.com,kvack.org,vger.kernel.org,lists.infradead.org,suse.cz]; R_RATELIMIT(0.00)[to_ip_from(RLfsjnp7neds983g95ihcnuzgq)]; RCVD_COUNT_TWO(0.00)[2]; TO_MATCH_ENVRCPT_ALL(0.00)[]; DBL_BLOCKED_OPENRESOLVER(0.00)[imap1.dmz-prg2.suse.org:helo,imap1.dmz-prg2.suse.org:rdns]; DKIM_SIGNED(0.00)[suse.cz:s=susede2_rsa,suse.cz:s=susede2_ed25519]; DKIM_TRACE(0.00)[suse.cz:+] X-Spam-Flag: NO X-Spam-Level: X-Rspamd-Queue-Id: 2F3B01F790 X-Rspamd-Server: rspamd2.dmz-prg2.suse.org X-Rspamd-Action: no action X-Spam-Score: -4.51 The possibility of many barn operations is determined by the current number of full or empty sheaves. Taking the barn->lock just to find out that e.g. there are no empty sheaves results in unnecessary overhead and lock contention. Thus perform these checks outside of the lock with a data_race() annotated variable read and fail quickly without taking the lock. Checks for sheaf availability that racily succeed have to be obviously repeated under the lock for correctness, but we can skip repeating checks if there are too many sheaves on the given list as the limits don't need to be strict. 
Signed-off-by: Vlastimil Babka
Reviewed-by: Suren Baghdasaryan
Reviewed-by: Harry Yoo
---
 mm/slub.c | 27 ++++++++++++++++++++-------
 1 file changed, 20 insertions(+), 7 deletions(-)

diff --git a/mm/slub.c b/mm/slub.c
index 8b3093ee2e02c9ff4e149ac54833db4972b414a3..339d91c6ea29be99a14a8914117fab0e3e6ed26b 100644
--- a/mm/slub.c
+++ b/mm/slub.c
@@ -2760,9 +2760,12 @@ static struct slab_sheaf *barn_get_empty_sheaf(struct node_barn *barn)
 	struct slab_sheaf *empty = NULL;
 	unsigned long flags;
 
+	if (!data_race(barn->nr_empty))
+		return NULL;
+
 	spin_lock_irqsave(&barn->lock, flags);
 
-	if (barn->nr_empty) {
+	if (likely(barn->nr_empty)) {
 		empty = list_first_entry(&barn->sheaves_empty,
 					 struct slab_sheaf, barn_list);
 		list_del(&empty->barn_list);
@@ -2809,6 +2812,9 @@ static struct slab_sheaf *barn_get_full_or_empty_sheaf(struct node_barn *barn)
 	struct slab_sheaf *sheaf = NULL;
 	unsigned long flags;
 
+	if (!data_race(barn->nr_full) && !data_race(barn->nr_empty))
+		return NULL;
+
 	spin_lock_irqsave(&barn->lock, flags);
 
 	if (barn->nr_full) {
@@ -2839,9 +2845,12 @@ barn_replace_empty_sheaf(struct node_barn *barn, struct slab_sheaf *empty)
 	struct slab_sheaf *full = NULL;
 	unsigned long flags;
 
+	if (!data_race(barn->nr_full))
+		return NULL;
+
 	spin_lock_irqsave(&barn->lock, flags);
 
-	if (barn->nr_full) {
+	if (likely(barn->nr_full)) {
 		full = list_first_entry(&barn->sheaves_full, struct slab_sheaf,
 					barn_list);
 		list_del(&full->barn_list);
@@ -2864,19 +2873,23 @@ barn_replace_full_sheaf(struct node_barn *barn, struct slab_sheaf *full)
 	struct slab_sheaf *empty;
 	unsigned long flags;
 
+	/* we don't repeat this check under barn->lock as it's not critical */
+	if (data_race(barn->nr_full) >= MAX_FULL_SHEAVES)
+		return ERR_PTR(-E2BIG);
+
+	if (!data_race(barn->nr_empty))
+		return ERR_PTR(-ENOMEM);
+
 	spin_lock_irqsave(&barn->lock, flags);
 
-	if (barn->nr_full >= MAX_FULL_SHEAVES) {
-		empty = ERR_PTR(-E2BIG);
-	} else if (!barn->nr_empty) {
-		empty = ERR_PTR(-ENOMEM);
-	} else {
+	if (likely(barn->nr_empty)) {
 		empty = list_first_entry(&barn->sheaves_empty,
 					 struct slab_sheaf, barn_list);
 		list_del(&empty->barn_list);
 		list_add(&full->barn_list, &barn->sheaves_full);
 		barn->nr_empty--;
 		barn->nr_full++;
+	} else {
+		empty = ERR_PTR(-ENOMEM);
 	}
 
 	spin_unlock_irqrestore(&barn->lock, flags);
-- 
2.50.1

From nobody Mon Oct 6 08:30:19 2025
From: Vlastimil Babka
Date: Wed, 23 Jul 2025 15:34:38 +0200
Subject: [PATCH v5 05/14] tools: Add testing support for changes to rcu and slab for sheaves
Message-Id: <20250723-slub-percpu-caches-v5-5-b792cd830f5d@suse.cz>
References: <20250723-slub-percpu-caches-v5-0-b792cd830f5d@suse.cz>
In-Reply-To: <20250723-slub-percpu-caches-v5-0-b792cd830f5d@suse.cz>
To: Suren Baghdasaryan, "Liam R. Howlett", Christoph Lameter, David Rientjes
Cc: Roman Gushchin, Harry Yoo, Uladzislau Rezki, linux-mm@kvack.org,
 linux-kernel@vger.kernel.org, rcu@vger.kernel.org,
 maple-tree@lists.infradead.org, vbabka@suse.cz, "Liam R. Howlett"

From: "Liam R. Howlett"

Make testing work for the slab and rcu changes that have come in with
the sheaves work.

This only works with one kmem_cache, and only the first one used.
Subsequent settings of a kmem_cache will not update the active
kmem_cache and will be silently dropped, because there are other tests
which happen after the kmem_cache of interest is set.

The saved active kmem_cache is used in the rcu callback, which is passed
the object to be freed. The rcu call takes the rcu_head, which is the
designated field in the struct (in this case rcu in the maple tree
node); the object pointer is then recovered by pointer math. The offset
of that field is saved (in a global variable) so that the node pointer
can be restored in the callback after the rcu grace period expires.

Don't use any of this outside of testing, please.
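For example (the cache, struct and variable names here are illustrative,
not from the patch), a test built on this shim behaves as follows:

	/* the first cache created becomes the "active" one */
	cache = kmem_cache_create("test_node", sizeof(struct test_node),
				  &args, 0);

	/*
	 * On first use this records offsetof(struct test_node, rcu) in
	 * kfree_cb_offset; kfree_rcu_cb() later subtracts it from the
	 * rcu_head pointer to recover the object and frees it into the
	 * active cache once the (simulated) grace period expires.
	 */
	kfree_rcu(node, rcu);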
Signed-off-by: Liam R. Howlett
Signed-off-by: Vlastimil Babka
Reviewed-by: Suren Baghdasaryan
---
 tools/include/linux/slab.h            | 41 +++++++++++++++++++++++++++++++---
 tools/testing/shared/linux.c          | 24 ++++++++++++++++----
 tools/testing/shared/linux/rcupdate.h | 22 +++++++++++++++++++
 3 files changed, 80 insertions(+), 7 deletions(-)

diff --git a/tools/include/linux/slab.h b/tools/include/linux/slab.h
index c87051e2b26f5a7fee0362697fae067076b8e84d..d1444e79f2685edb828adbce8b3fbb500c0f8844 100644
--- a/tools/include/linux/slab.h
+++ b/tools/include/linux/slab.h
@@ -23,6 +23,12 @@ enum slab_state {
 	FULL
 };
 
+struct kmem_cache_args {
+	unsigned int align;
+	unsigned int sheaf_capacity;
+	void (*ctor)(void *);
+};
+
 static inline void *kzalloc(size_t size, gfp_t gfp)
 {
 	return kmalloc(size, gfp | __GFP_ZERO);
@@ -37,9 +43,38 @@ static inline void *kmem_cache_alloc(struct kmem_cache *cachep, int flags)
 }
 void kmem_cache_free(struct kmem_cache *cachep, void *objp);
 
-struct kmem_cache *kmem_cache_create(const char *name, unsigned int size,
-			unsigned int align, unsigned int flags,
-			void (*ctor)(void *));
+
+struct kmem_cache *
+__kmem_cache_create_args(const char *name, unsigned int size,
+			 struct kmem_cache_args *args, unsigned int flags);
+
+/* If NULL is passed for @args, use this variant with default arguments. */
+static inline struct kmem_cache *
+__kmem_cache_default_args(const char *name, unsigned int size,
+			  struct kmem_cache_args *args, unsigned int flags)
+{
+	struct kmem_cache_args kmem_default_args = {};
+
+	return __kmem_cache_create_args(name, size, &kmem_default_args, flags);
+}
+
+static inline struct kmem_cache *
+__kmem_cache_create(const char *name, unsigned int size, unsigned int align,
+		    unsigned int flags, void (*ctor)(void *))
+{
+	struct kmem_cache_args kmem_args = {
+		.align	= align,
+		.ctor	= ctor,
+	};
+
+	return __kmem_cache_create_args(name, size, &kmem_args, flags);
+}
+
+#define kmem_cache_create(__name, __object_size, __args, ...)		\
+	_Generic((__args),						\
+		struct kmem_cache_args *: __kmem_cache_create_args,	\
+		void *: __kmem_cache_default_args,			\
+		default: __kmem_cache_create)(__name, __object_size, __args, __VA_ARGS__)
 
 void kmem_cache_free_bulk(struct kmem_cache *cachep, size_t size, void **list);
 int kmem_cache_alloc_bulk(struct kmem_cache *cachep, gfp_t gfp, size_t size,
diff --git a/tools/testing/shared/linux.c b/tools/testing/shared/linux.c
index 0f97fb0d19e19c327aa4843a35b45cc086f4f366..f998555a1b2af4a899a468a652b04622df459ed3 100644
--- a/tools/testing/shared/linux.c
+++ b/tools/testing/shared/linux.c
@@ -20,6 +20,7 @@ struct kmem_cache {
 	pthread_mutex_t lock;
 	unsigned int size;
 	unsigned int align;
+	unsigned int sheaf_capacity;
 	int nr_objs;
 	void *objs;
 	void (*ctor)(void *);
@@ -31,6 +32,8 @@ struct kmem_cache {
 	void *private;
 };
 
+static struct kmem_cache *kmem_active = NULL;
+
 void kmem_cache_set_callback(struct kmem_cache *cachep, void (*callback)(void *))
 {
 	cachep->callback = callback;
@@ -147,6 +150,14 @@ void kmem_cache_free(struct kmem_cache *cachep, void *objp)
 	pthread_mutex_unlock(&cachep->lock);
 }
 
+void kmem_cache_free_active(void *objp)
+{
+	if (!kmem_active)
+		printf("WARNING: No active kmem_cache\n");
+
+	kmem_cache_free(kmem_active, objp);
+}
+
 void kmem_cache_free_bulk(struct kmem_cache *cachep, size_t size, void **list)
 {
 	if (kmalloc_verbose)
@@ -234,23 +245,28 @@ int kmem_cache_alloc_bulk(struct kmem_cache *cachep, gfp_t gfp, size_t size,
 }
 
 struct kmem_cache *
-kmem_cache_create(const char *name, unsigned int size, unsigned int align,
-		unsigned int flags, void (*ctor)(void *))
+__kmem_cache_create_args(const char *name, unsigned int size,
+			 struct kmem_cache_args *args,
+			 unsigned int flags)
 {
 	struct kmem_cache *ret = malloc(sizeof(*ret));
 
 	pthread_mutex_init(&ret->lock, NULL);
 	ret->size = size;
-	ret->align = align;
+	ret->align = args->align;
+	ret->sheaf_capacity = args->sheaf_capacity;
 	ret->nr_objs = 0;
 	ret->nr_allocated = 0;
 	ret->nr_tallocated = 0;
 	ret->objs = NULL;
-	ret->ctor = ctor;
+	ret->ctor = args->ctor;
 	ret->non_kernel = 0;
 	ret->exec_callback = false;
 	ret->callback = NULL;
 	ret->private = NULL;
+	if (!kmem_active)
+		kmem_active = ret;
+
 	return ret;
 }
 
diff --git a/tools/testing/shared/linux/rcupdate.h b/tools/testing/shared/linux/rcupdate.h
index fed468fb0c78db6f33fb1900c7110ab5f3c19c65..c95e2f0bbd93798e544d7d34e0823ed68414f924 100644
--- a/tools/testing/shared/linux/rcupdate.h
+++ b/tools/testing/shared/linux/rcupdate.h
@@ -9,4 +9,26 @@
 #define rcu_dereference_check(p, cond) rcu_dereference(p)
 #define RCU_INIT_POINTER(p, v)	do { (p) = (v); } while (0)
 
+void kmem_cache_free_active(void *objp);
+static unsigned long kfree_cb_offset = 0;
+
+static inline void kfree_rcu_cb(struct rcu_head *head)
+{
+	void *objp = (void *) ((unsigned long)head - kfree_cb_offset);
+
+	kmem_cache_free_active(objp);
+}
+
+#ifndef offsetof
+#define offsetof(TYPE, MEMBER) __builtin_offsetof(TYPE, MEMBER)
+#endif
+
+#define kfree_rcu(ptr, rhv)						\
+do {									\
+	if (!kfree_cb_offset)						\
+		kfree_cb_offset = offsetof(typeof(*(ptr)), rhv);	\
+									\
+	call_rcu(&ptr->rhv, kfree_rcu_cb);				\
+} while (0)
+
 #endif
-- 
2.50.1

From nobody Mon Oct 6 08:30:19 2025
From: Vlastimil Babka
Date: Wed, 23 Jul 2025 15:34:39 +0200
Subject: [PATCH v5 06/14] tools: Add sheaves support to testing infrastructure
Message-Id: <20250723-slub-percpu-caches-v5-6-b792cd830f5d@suse.cz>
References: <20250723-slub-percpu-caches-v5-0-b792cd830f5d@suse.cz>
In-Reply-To: <20250723-slub-percpu-caches-v5-0-b792cd830f5d@suse.cz>
To: Suren Baghdasaryan, "Liam R. Howlett", Christoph Lameter, David Rientjes
Cc: Roman Gushchin, Harry Yoo, Uladzislau Rezki, linux-mm@kvack.org,
 linux-kernel@vger.kernel.org, rcu@vger.kernel.org,
 maple-tree@lists.infradead.org, vbabka@suse.cz, "Liam R. Howlett"

From: "Liam R. Howlett"

Allocate a sheaf and fill it to the requested count. The sheaf is
deliberately not filled to its full capacity, so that incorrect
allocation requests can be detected.
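A sketch of the intended use in a test (the cache, the count of 5 and
the assertion style are illustrative, not from the patch):

	struct slab_sheaf *sheaf;
	void *obj;

	/* filled to the requested count of 5, not to sheaf_capacity */
	sheaf = kmem_cache_prefill_sheaf(cache, GFP_KERNEL, 5);
	assert(sheaf && kmem_cache_sheaf_size(sheaf) == 5);

	obj = kmem_cache_alloc_from_sheaf(cache, GFP_KERNEL, sheaf);
	assert(obj && kmem_cache_sheaf_size(sheaf) == 4);

	/* bulk-frees the remaining objects and releases the sheaf */
	kmem_cache_return_sheaf(cache, GFP_KERNEL, sheaf);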
Does not fill to the sheaf limit to detect incorrect allocation requests.

Signed-off-by: Liam R. Howlett
Signed-off-by: Vlastimil Babka
---
 tools/include/linux/slab.h   | 24 +++++++++++++
 tools/testing/shared/linux.c | 84 ++++++++++++++++++++++++++++++++++++++++++++
 2 files changed, 108 insertions(+)

diff --git a/tools/include/linux/slab.h b/tools/include/linux/slab.h
index d1444e79f2685edb828adbce8b3fbb500c0f8844..1962d7f1abee154e1cda5dba28aef213088dd198 100644
--- a/tools/include/linux/slab.h
+++ b/tools/include/linux/slab.h
@@ -23,6 +23,13 @@ enum slab_state {
 	FULL
 };
 
+struct slab_sheaf {
+	struct kmem_cache *cache;
+	unsigned int size;
+	unsigned int capacity;
+	void *objects[];
+};
+
 struct kmem_cache_args {
 	unsigned int align;
 	unsigned int sheaf_capacity;
@@ -80,4 +87,21 @@ void kmem_cache_free_bulk(struct kmem_cache *cachep, size_t size, void **list);
 int kmem_cache_alloc_bulk(struct kmem_cache *cachep, gfp_t gfp, size_t size, void **list);
 
+struct slab_sheaf *
+kmem_cache_prefill_sheaf(struct kmem_cache *s, gfp_t gfp, unsigned int size);
+
+void *
+kmem_cache_alloc_from_sheaf(struct kmem_cache *s, gfp_t gfp,
+			struct slab_sheaf *sheaf);
+
+void kmem_cache_return_sheaf(struct kmem_cache *s, gfp_t gfp,
+			struct slab_sheaf *sheaf);
+int kmem_cache_refill_sheaf(struct kmem_cache *s, gfp_t gfp,
+			struct slab_sheaf **sheafp, unsigned int size);
+
+static inline unsigned int kmem_cache_sheaf_size(struct slab_sheaf *sheaf)
+{
+	return sheaf->size;
+}
+
 #endif /* _TOOLS_SLAB_H */
diff --git a/tools/testing/shared/linux.c b/tools/testing/shared/linux.c
index f998555a1b2af4a899a468a652b04622df459ed3..e0255f53159bd3a1325d49192283dd6790a5e3b8 100644
--- a/tools/testing/shared/linux.c
+++ b/tools/testing/shared/linux.c
@@ -181,6 +181,12 @@ int kmem_cache_alloc_bulk(struct kmem_cache *cachep, gfp_t gfp, size_t size,
 	if (kmalloc_verbose)
 		pr_debug("Bulk alloc %zu\n", size);
 
+	if (cachep->exec_callback) {
+		if (cachep->callback)
+			cachep->callback(cachep->private);
+		cachep->exec_callback = false;
+	}
+
 	pthread_mutex_lock(&cachep->lock);
 	if (cachep->nr_objs >= size) {
 		struct radix_tree_node *node;
@@ -270,6 +276,84 @@ __kmem_cache_create_args(const char *name, unsigned int size,
 	return ret;
 }
 
+struct slab_sheaf *
+kmem_cache_prefill_sheaf(struct kmem_cache *s, gfp_t gfp, unsigned int size)
+{
+	struct slab_sheaf *sheaf;
+	unsigned int capacity;
+
+	if (size > s->sheaf_capacity)
+		capacity = size;
+	else
+		capacity = s->sheaf_capacity;
+
+	/* the header plus one pointer per object the sheaf can hold */
+	sheaf = malloc(sizeof(*sheaf) + sizeof(void *) * capacity);
+	if (!sheaf)
+		return NULL;
+
+	memset(sheaf, 0, sizeof(*sheaf));
+	sheaf->cache = s;
+	sheaf->capacity = capacity;
+	sheaf->size = kmem_cache_alloc_bulk(s, gfp, size, sheaf->objects);
+	if (!sheaf->size) {
+		free(sheaf);
+		return NULL;
+	}
+
+	return sheaf;
+}
+
+int kmem_cache_refill_sheaf(struct kmem_cache *s, gfp_t gfp,
+		struct slab_sheaf **sheafp, unsigned int size)
+{
+	struct slab_sheaf *sheaf = *sheafp;
+	int refill;
+
+	if (sheaf->size >= size)
+		return 0;
+
+	/* a request beyond the capacity needs a fresh, larger sheaf */
+	if (size > sheaf->capacity) {
+		sheaf = kmem_cache_prefill_sheaf(s, gfp, size);
+		if (!sheaf)
+			return -ENOMEM;
+
+		kmem_cache_return_sheaf(s, gfp, *sheafp);
+		*sheafp = sheaf;
+		return 0;
+	}
+
+	refill = kmem_cache_alloc_bulk(s, gfp, size - sheaf->size,
+				       &sheaf->objects[sheaf->size]);
+	if (!refill)
+		return -ENOMEM;
+
+	sheaf->size += refill;
+	return 0;
+}
+
+void kmem_cache_return_sheaf(struct kmem_cache *s, gfp_t gfp,
+		struct slab_sheaf *sheaf)
+{
+	if (sheaf->size) {
+		//s->non_kernel += sheaf->size;
+		kmem_cache_free_bulk(s, sheaf->size, &sheaf->objects[0]);
+	}
+	free(sheaf);
+}
+
+void *
+kmem_cache_alloc_from_sheaf(struct kmem_cache *s, gfp_t gfp,
+		struct slab_sheaf *sheaf)
+{
+	if (sheaf->size == 0) {
+		printf("Nothing left in sheaf!\n");
+		return NULL;
+	}
+
+	return sheaf->objects[--sheaf->size];
+}
+
 /*
  * Test the test infrastructure for kem_cache_alloc/free and bulk counterparts.
  */
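As a usage sketch (editorial; sheaf_usage_example() and "cache" are
illustrative, and the cache is assumed to have been created with a
nonzero sheaf_capacity), a test would drive the new API like this:

static int sheaf_usage_example(struct kmem_cache *cache)
{
	struct slab_sheaf *sheaf;
	void *obj;

	/* prefill a sheaf with at least 10 objects */
	sheaf = kmem_cache_prefill_sheaf(cache, GFP_KERNEL, 10);
	if (!sheaf)
		return -ENOMEM;

	/* pop one object from the sheaf and use it */
	obj = kmem_cache_alloc_from_sheaf(cache, GFP_KERNEL, sheaf);
	if (obj)
		kmem_cache_free(cache, obj);

	/* top the sheaf back up to at least 10 objects */
	if (kmem_cache_refill_sheaf(cache, GFP_KERNEL, &sheaf, 10)) {
		kmem_cache_return_sheaf(cache, GFP_KERNEL, sheaf);
		return -ENOMEM;
	}

	/* hand the sheaf back, freeing any objects still in it */
	kmem_cache_return_sheaf(cache, GFP_KERNEL, sheaf);
	return 0;
}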
-- 
2.50.1

From nobody Mon Oct 6 08:30:19 2025
From: Vlastimil Babka
Date: Wed, 23 Jul 2025 15:34:40 +0200
Subject: [PATCH v5 07/14] maple_tree: use percpu sheaves for maple_node_cache
Message-Id: <20250723-slub-percpu-caches-v5-7-b792cd830f5d@suse.cz>
References: <20250723-slub-percpu-caches-v5-0-b792cd830f5d@suse.cz>
In-Reply-To: <20250723-slub-percpu-caches-v5-0-b792cd830f5d@suse.cz>
To: Suren Baghdasaryan, "Liam R. Howlett", Christoph Lameter, David Rientjes
Cc: Roman Gushchin, Harry Yoo, Uladzislau Rezki, linux-mm@kvack.org, linux-kernel@vger.kernel.org, rcu@vger.kernel.org, maple-tree@lists.infradead.org, vbabka@suse.cz
Set up the maple_node_cache with percpu sheaves of size 32 to hopefully
improve its performance.

Change the single-node rcu freeing in ma_free_rcu() to use kfree_rcu()
instead of the custom callback, which allows the rcu_free sheaf
batching to be used.

Note that there are other users of mt_free_rcu() where larger parts of
the maple tree are submitted to call_rcu() as a whole, and those cannot
use the rcu_free sheaf. But it's still possible for maple nodes freed
this way to be reused via the barn, even if only some cpus are allowed
to process rcu callbacks.

Signed-off-by: Vlastimil Babka
Reviewed-by: Suren Baghdasaryan
---
 lib/maple_tree.c | 11 ++++++++---
 1 file changed, 8 insertions(+), 3 deletions(-)

diff --git a/lib/maple_tree.c b/lib/maple_tree.c
index affe979bd14d30b96f8e012ff03dfd2fda6eec0b..82f39fe29a462aa3c779789a28efdd6cdef64c79 100644
--- a/lib/maple_tree.c
+++ b/lib/maple_tree.c
@@ -208,7 +208,7 @@ static void mt_free_rcu(struct rcu_head *head)
 static void ma_free_rcu(struct maple_node *node)
 {
 	WARN_ON(node->parent != ma_parent_ptr(node));
-	call_rcu(&node->rcu, mt_free_rcu);
+	kfree_rcu(node, rcu);
 }
 
 static void mt_set_height(struct maple_tree *mt, unsigned char height)
@@ -6285,9 +6285,14 @@ bool mas_nomem(struct ma_state *mas, gfp_t gfp)
 
 void __init maple_tree_init(void)
 {
+	struct kmem_cache_args args = {
+		.align = sizeof(struct maple_node),
+		.sheaf_capacity = 32,
+	};
+
 	maple_node_cache = kmem_cache_create("maple_node",
-			sizeof(struct maple_node), sizeof(struct maple_node),
-			SLAB_PANIC, NULL);
+			sizeof(struct maple_node), &args,
+			SLAB_PANIC);
 }
 
 /**
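The same opt-in pattern works for any cache; a minimal sketch (struct
my_obj and my_obj_cache_create() are hypothetical, not part of this
patch):

static struct kmem_cache *my_obj_cache_create(void)
{
	struct kmem_cache_args args = {
		/* a sheaf_capacity of 0 (the default) leaves sheaves disabled */
		.sheaf_capacity = 32,
	};

	return kmem_cache_create("my_obj", sizeof(struct my_obj), &args, 0);
}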
-- 
2.50.1

From nobody Mon Oct 6 08:30:19 2025
From: Vlastimil Babka
Date: Wed, 23 Jul 2025 15:34:41 +0200
Subject: [PATCH v5 08/14] mm, vma: use percpu sheaves for vm_area_struct cache
Message-Id: <20250723-slub-percpu-caches-v5-8-b792cd830f5d@suse.cz>
References: <20250723-slub-percpu-caches-v5-0-b792cd830f5d@suse.cz>
In-Reply-To: <20250723-slub-percpu-caches-v5-0-b792cd830f5d@suse.cz>
To: Suren Baghdasaryan, "Liam R. Howlett", Christoph Lameter, David Rientjes
Cc: Roman Gushchin, Harry Yoo, Uladzislau Rezki, linux-mm@kvack.org, linux-kernel@vger.kernel.org, rcu@vger.kernel.org, maple-tree@lists.infradead.org, vbabka@suse.cz

Create the vm_area_struct cache with percpu sheaves of size 32 to
improve its performance.
Reviewed-by: Suren Baghdasaryan
Signed-off-by: Vlastimil Babka
---
 mm/vma_init.c | 1 +
 1 file changed, 1 insertion(+)

diff --git a/mm/vma_init.c b/mm/vma_init.c
index 8e53c7943561e7324e7992946b4065dec1149b82..52c6b55fac4519e0da39ca75ad018e14449d1d95 100644
--- a/mm/vma_init.c
+++ b/mm/vma_init.c
@@ -16,6 +16,7 @@ void __init vma_state_init(void)
 	struct kmem_cache_args args = {
 		.use_freeptr_offset = true,
 		.freeptr_offset = offsetof(struct vm_area_struct, vm_freeptr),
+		.sheaf_capacity = 32,
 	};
 
 	vm_area_cachep = kmem_cache_create("vm_area_struct",
-- 
2.50.1

From nobody Mon Oct 6 08:30:19 2025
From: Vlastimil Babka
Date: Wed, 23 Jul 2025 15:34:42 +0200
Subject: [PATCH v5 09/14] mm, slub: skip percpu sheaves for remote object freeing
Message-Id: <20250723-slub-percpu-caches-v5-9-b792cd830f5d@suse.cz>
References: <20250723-slub-percpu-caches-v5-0-b792cd830f5d@suse.cz>
In-Reply-To: <20250723-slub-percpu-caches-v5-0-b792cd830f5d@suse.cz>
To: Suren Baghdasaryan, "Liam R. Howlett", Christoph Lameter, David Rientjes
Cc: Roman Gushchin, Harry Yoo, Uladzislau Rezki, linux-mm@kvack.org, linux-kernel@vger.kernel.org, rcu@vger.kernel.org, maple-tree@lists.infradead.org, vbabka@suse.cz
Since we don't control the NUMA locality of objects in percpu sheaves,
allocations with node restrictions bypass them. Allocations without
restrictions may however still expect to get local objects with high
probability, and the introduction of sheaves can decrease it due to
freed objects from a remote node ending up in percpu sheaves.

The fraction of such remote frees seems low (5% on an 8-node machine),
but it can be expected that some cache- or workload-specific corner
cases exist. We can either conclude that this is not a problem due to
the low fraction, or we can make remote frees bypass percpu sheaves and
go directly to their slabs. This will make the remote frees more
expensive, but if it's only a small fraction, most frees will still
benefit from the lower overhead of percpu sheaves.

This patch thus makes remote object freeing bypass percpu sheaves,
including bulk freeing, and kfree_rcu() via the rcu_free sheaf.
However, it's not intended to be a 100% guarantee that percpu sheaves
will only contain local objects. The refill from slabs does not provide
that guarantee in the first place, and there might be cpu migrations
happening when we need to unlock the local_lock. Avoiding all that
could be possible but complicated, so we can leave it for later
investigation whether it would be worth it.

It can be expected that the more selective freeing will itself prevent
accumulation of remote objects in percpu sheaves, so any such
violations would have only short-term effects.
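Condensed, the freeing fast path in slab_free() after this patch looks
as follows (an editorial summary of the hunk below, not additional
code):

	/* free via the percpu sheaves only if the object is node-local */
	if (s->cpu_sheaves && likely(!IS_ENABLED(CONFIG_NUMA) ||
				     slab_nid(slab) == numa_mem_id())) {
		if (likely(free_to_pcs(s, object)))
			return;
	}

	/* remote object, no sheaves, or the sheaf path failed */
	do_slab_free(s, slab, object, object, 1, addr);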
Signed-off-by: Vlastimil Babka
Reviewed-by: Harry Yoo
---
 mm/slab_common.c |  7 +++++--
 mm/slub.c        | 42 ++++++++++++++++++++++++++++++++++++------
 2 files changed, 41 insertions(+), 8 deletions(-)

diff --git a/mm/slab_common.c b/mm/slab_common.c
index 2d806e02568532a1000fd3912db6978e945dcfa8..f466f68a5bd82030a987baf849a98154cd48ef23 100644
--- a/mm/slab_common.c
+++ b/mm/slab_common.c
@@ -1623,8 +1623,11 @@ static bool kfree_rcu_sheaf(void *obj)
 
 	slab = folio_slab(folio);
 	s = slab->slab_cache;
-	if (s->cpu_sheaves)
-		return __kfree_rcu_sheaf(s, obj);
+	if (s->cpu_sheaves) {
+		if (likely(!IS_ENABLED(CONFIG_NUMA) ||
+			   slab_nid(slab) == numa_node_id()))
+			return __kfree_rcu_sheaf(s, obj);
+	}
 
 	return false;
 }
diff --git a/mm/slub.c b/mm/slub.c
index 339d91c6ea29be99a14a8914117fab0e3e6ed26b..50fc35b8fc9b3101821c338e9469c134677ded51 100644
--- a/mm/slub.c
+++ b/mm/slub.c
@@ -455,6 +455,7 @@ struct slab_sheaf {
 	};
 	struct kmem_cache *cache;
 	unsigned int size;
+	int node; /* only used for rcu_sheaf */
 	void *objects[];
 };
 
@@ -5682,7 +5683,7 @@ static void rcu_free_sheaf(struct rcu_head *head)
 	 */
 	__rcu_free_sheaf_prepare(s, sheaf);
 
-	barn = get_node(s, numa_mem_id())->barn;
+	barn = get_node(s, sheaf->node)->barn;
 
 	/* due to slab_free_hook() */
 	if (unlikely(sheaf->size == 0))
@@ -5765,10 +5766,12 @@ bool __kfree_rcu_sheaf(struct kmem_cache *s, void *obj)
 
 	rcu_sheaf->objects[rcu_sheaf->size++] = obj;
 
-	if (likely(rcu_sheaf->size < s->sheaf_capacity))
+	if (likely(rcu_sheaf->size < s->sheaf_capacity)) {
 		rcu_sheaf = NULL;
-	else
+	} else {
 		pcs->rcu_free = NULL;
+		rcu_sheaf->node = numa_mem_id();
+	}
 
 	local_unlock(&s->cpu_sheaves->lock);
 
@@ -5794,7 +5797,11 @@ static void free_to_pcs_bulk(struct kmem_cache *s, size_t size, void **p)
 	struct slab_sheaf *main, *empty;
 	bool init = slab_want_init_on_free(s);
 	unsigned int batch, i = 0;
+	void *remote_objects[PCS_BATCH_MAX];
+	unsigned int remote_nr = 0;
+	int node = numa_mem_id();
 
+next_remote_batch:
 	while (i < size) {
 		struct slab *slab = virt_to_slab(p[i]);
 
@@ -5804,7 +5811,15 @@ static void free_to_pcs_bulk(struct kmem_cache *s, size_t size, void **p)
 		if (unlikely(!slab_free_hook(s, p[i], init, false))) {
 			p[i] = p[--size];
 			if (!size)
-				return;
+				goto flush_remote;
+			continue;
+		}
+
+		if (unlikely(IS_ENABLED(CONFIG_NUMA) && slab_nid(slab) != node)) {
+			remote_objects[remote_nr] = p[i];
+			p[i] = p[--size];
+			if (++remote_nr >= PCS_BATCH_MAX)
+				goto flush_remote;
 			continue;
 		}
 
@@ -5872,6 +5887,15 @@ static void free_to_pcs_bulk(struct kmem_cache *s, size_t size, void **p)
 	 */
 fallback:
 	__kmem_cache_free_bulk(s, size, p);
+
+flush_remote:
+	if (remote_nr) {
+		__kmem_cache_free_bulk(s, remote_nr, &remote_objects[0]);
+		if (i < size) {
+			remote_nr = 0;
+			goto next_remote_batch;
+		}
+	}
 }
 
 #ifndef CONFIG_SLUB_TINY
@@ -5963,8 +5987,14 @@ void slab_free(struct kmem_cache *s, struct slab *slab, void *object,
 	if (unlikely(!slab_free_hook(s, object, slab_want_init_on_free(s), false)))
 		return;
 
-	if (!s->cpu_sheaves || !free_to_pcs(s, object))
-		do_slab_free(s, slab, object, object, 1, addr);
+	if (s->cpu_sheaves && likely(!IS_ENABLED(CONFIG_NUMA) ||
+				     slab_nid(slab) == numa_mem_id())) {
+		if (likely(free_to_pcs(s, object)))
+			return;
+	}
+
+	do_slab_free(s, slab, object, object, 1, addr);
 }
 
 #ifdef CONFIG_MEMCG
-- 
2.50.1
From nobody Mon Oct 6 08:30:19 2025
From: Vlastimil Babka
Date: Wed, 23 Jul 2025 15:34:43 +0200
Subject: [PATCH v5 10/14] mm, slab: allow NUMA restricted allocations to use percpu sheaves
Message-Id: <20250723-slub-percpu-caches-v5-10-b792cd830f5d@suse.cz>
References: <20250723-slub-percpu-caches-v5-0-b792cd830f5d@suse.cz>
In-Reply-To: <20250723-slub-percpu-caches-v5-0-b792cd830f5d@suse.cz>
To: Suren Baghdasaryan, "Liam R. Howlett", Christoph Lameter, David Rientjes
Cc: Roman Gushchin, Harry Yoo, Uladzislau Rezki, linux-mm@kvack.org, linux-kernel@vger.kernel.org, rcu@vger.kernel.org, maple-tree@lists.infradead.org, vbabka@suse.cz
Currently, allocations asking for a specific node explicitly or via
mempolicy in strict_numa mode bypass percpu sheaves. Since sheaves
contain mostly local objects, we can try allocating from them if the
local node happens to be the requested node or is allowed by the
mempolicy. If we find the object from percpu sheaves is not from the
expected node, we skip the sheaves - this should be rare.

Signed-off-by: Vlastimil Babka
Reviewed-by: Harry Yoo
---
 mm/slub.c | 52 +++++++++++++++++++++++++++++++++++++++++++++-------
 1 file changed, 45 insertions(+), 7 deletions(-)

diff --git a/mm/slub.c b/mm/slub.c
index 50fc35b8fc9b3101821c338e9469c134677ded51..b98983b8d2e3e04ea256d91efcf0215ff0ae7e38 100644
--- a/mm/slub.c
+++ b/mm/slub.c
@@ -4765,18 +4765,42 @@ __pcs_handle_empty(struct kmem_cache *s, struct slub_percpu_sheaves *pcs, gfp_t
 }
 
 static __fastpath_inline
-void *alloc_from_pcs(struct kmem_cache *s, gfp_t gfp)
+void *alloc_from_pcs(struct kmem_cache *s, gfp_t gfp, int node)
 {
 	struct slub_percpu_sheaves *pcs;
 	void *object;
 
 #ifdef CONFIG_NUMA
-	if (static_branch_unlikely(&strict_numa)) {
-		if (current->mempolicy)
-			return NULL;
+	if (static_branch_unlikely(&strict_numa) &&
+	    node == NUMA_NO_NODE) {
+
+		struct mempolicy *mpol = current->mempolicy;
+
+		if (mpol) {
+			/*
+			 * Special BIND rule support. If the local node
+			 * is in the permitted set then do not redirect
+			 * to a particular node.
+			 * Otherwise we apply the memory policy to get
+			 * the node we need to allocate on.
+			 */
+			if (mpol->mode != MPOL_BIND ||
+			    !node_isset(numa_mem_id(), mpol->nodes))
+
+				node = mempolicy_slab_node();
+		}
 	}
 #endif
 
+	if (unlikely(node != NUMA_NO_NODE)) {
+		/*
+		 * We assume the percpu sheaves contain only local objects
+		 * although it's not completely guaranteed, so we verify later.
+		 */
+		if (node != numa_mem_id())
+			return NULL;
+	}
+
 	if (!local_trylock(&s->cpu_sheaves->lock))
 		return NULL;
 
@@ -4788,7 +4812,21 @@ void *alloc_from_pcs(struct kmem_cache *s, gfp_t gfp)
 		return NULL;
 	}
 
-	object = pcs->main->objects[--pcs->main->size];
+	object = pcs->main->objects[pcs->main->size - 1];
+
+	if (unlikely(node != NUMA_NO_NODE)) {
+		/*
+		 * Verify that the object was from the node we want. This could
+		 * be false because of cpu migration during an unlocked part of
+		 * the current allocation or previous freeing process.
+		 */
+		if (folio_nid(virt_to_folio(object)) != node) {
+			local_unlock(&s->cpu_sheaves->lock);
+			return NULL;
+		}
+	}
+
+	pcs->main->size--;
 
 	local_unlock(&s->cpu_sheaves->lock);
 
@@ -4888,8 +4926,8 @@ static __fastpath_inline void *slab_alloc_node(struct kmem_cache *s, struct list
 	if (unlikely(object))
 		goto out;
 
-	if (s->cpu_sheaves && node == NUMA_NO_NODE)
-		object = alloc_from_pcs(s, gfpflags);
+	if (s->cpu_sheaves)
+		object = alloc_from_pcs(s, gfpflags, node);
 
 	if (!object)
 		object = __slab_alloc_node(s, gfpflags, node, addr, orig_size);
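For illustration (alloc_on_node() is a hypothetical caller, not from
the patch): a node-restricted allocation used to bypass the percpu
sheaves unconditionally, while with this change it can be served from
the local sheaf:

static void *alloc_on_node(struct kmem_cache *cache, int nid)
{
	/*
	 * Served from the local percpu sheaf when nid == numa_mem_id();
	 * a node mismatch detected after peeking at the sheaf just falls
	 * back to the regular allocation path.
	 */
	return kmem_cache_alloc_node(cache, GFP_KERNEL, nid);
}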
-- 
2.50.1

From nobody Mon Oct 6 08:30:19 2025
From: Vlastimil Babka
Date: Wed, 23 Jul 2025 15:34:44 +0200
Subject: [PATCH v5 11/14] testing/radix-tree/maple: Increase readers and reduce delay for faster machines
Message-Id: <20250723-slub-percpu-caches-v5-11-b792cd830f5d@suse.cz>
References: <20250723-slub-percpu-caches-v5-0-b792cd830f5d@suse.cz>
In-Reply-To: <20250723-slub-percpu-caches-v5-0-b792cd830f5d@suse.cz>
To: Suren Baghdasaryan, "Liam R. Howlett", Christoph Lameter, David Rientjes
Cc: Roman Gushchin, Harry Yoo, Uladzislau Rezki, linux-mm@kvack.org, linux-kernel@vger.kernel.org, rcu@vger.kernel.org, maple-tree@lists.infradead.org, vbabka@suse.cz, "Liam R. Howlett"
From: "Liam R. Howlett"

Add more threads and shorten the reader delays to increase the
possibility of catching the rcu changes. The test does not pass unless
the reader is seen.

Signed-off-by: Liam R. Howlett
Signed-off-by: Vlastimil Babka
---
 tools/testing/radix-tree/maple.c | 7 ++++---
 1 file changed, 4 insertions(+), 3 deletions(-)

diff --git a/tools/testing/radix-tree/maple.c b/tools/testing/radix-tree/maple.c
index 2c0b3830125336af760768597d39ed07a2f8e92b..f6f923c9dc1039997953a94ec184c560b225c2d4 100644
--- a/tools/testing/radix-tree/maple.c
+++ b/tools/testing/radix-tree/maple.c
@@ -35062,7 +35062,7 @@ void run_check_rcu_slowread(struct maple_tree *mt, struct rcu_test_struct *vals)
 
 	int i;
 	void *(*function)(void *);
-	pthread_t readers[20];
+	pthread_t readers[30];
 	unsigned int index = vals->index;
 
 	mt_set_in_rcu(mt);
@@ -35080,14 +35080,14 @@ void run_check_rcu_slowread(struct maple_tree *mt, struct rcu_test_struct *vals)
 		}
 	}
 
-	usleep(5); /* small yield to ensure all threads are at least started. */
+	usleep(3); /* small yield to ensure all threads are at least started. */
 
 	while (index <= vals->last) {
 		mtree_store(mt, index,
			    (index % 2 ? vals->entry2 : vals->entry3),
			    GFP_KERNEL);
 		index++;
-		usleep(5);
+		usleep(2);
 	}
 
 	while (i--)
@@ -35098,6 +35098,7 @@ void run_check_rcu_slowread(struct maple_tree *mt, struct rcu_test_struct *vals)
 	MT_BUG_ON(mt, !vals->seen_entry3);
 	MT_BUG_ON(mt, !vals->seen_both);
 }
+
 static noinline void __init check_rcu_simulated(struct maple_tree *mt)
 {
 	unsigned long i, nr_entries = 1000;
-- 
2.50.1

From nobody Mon Oct 6 08:30:19 2025
From: Vlastimil Babka
Date: Wed, 23 Jul 2025 15:34:45 +0200
Subject: [PATCH v5 12/14] maple_tree: Sheaf conversion
Message-Id: <20250723-slub-percpu-caches-v5-12-b792cd830f5d@suse.cz>
References: <20250723-slub-percpu-caches-v5-0-b792cd830f5d@suse.cz>
In-Reply-To: <20250723-slub-percpu-caches-v5-0-b792cd830f5d@suse.cz>
To: Suren Baghdasaryan, "Liam R. Howlett", Christoph Lameter, David Rientjes
Cc: Roman Gushchin, Harry Yoo, Uladzislau Rezki, linux-mm@kvack.org, linux-kernel@vger.kernel.org, rcu@vger.kernel.org, maple-tree@lists.infradead.org, vbabka@suse.cz
From: "Liam R. Howlett"

Use sheaves instead of bulk allocations. This should speed up the
allocations and the return path of unused allocations.

Remove the push/pop of nodes from the maple state.

Remove unnecessary testing; ifdef out other testing that will probably
be deleted. Fix the testcase for testing the race. Move some testing
around in the same commit.

Signed-off-by: Liam R. Howlett
Signed-off-by: Vlastimil Babka
---
 include/linux/maple_tree.h       |   6 +-
 lib/maple_tree.c                 | 331 ++++----------------
 lib/test_maple_tree.c            |   8 +
 tools/testing/radix-tree/maple.c | 632 +++++++--------------------------------
 tools/testing/shared/linux.c     |   8 +-
 5 files changed, 185 insertions(+), 800 deletions(-)

diff --git a/include/linux/maple_tree.h b/include/linux/maple_tree.h
index 9ef1290382249462d73ae72435dada7ce4b0622c..3cf1ae9dde7ce43fa20ae400c01fefad048c302e 100644
--- a/include/linux/maple_tree.h
+++ b/include/linux/maple_tree.h
@@ -442,7 +442,8 @@ struct ma_state {
 	struct maple_enode *node;	/* The node containing this entry */
 	unsigned long min;		/* The minimum index of this node - implied pivot min */
 	unsigned long max;		/* The maximum index of this node - implied pivot max */
-	struct maple_alloc *alloc;	/* Allocated nodes for this operation */
+	struct slab_sheaf *sheaf;	/* Allocated nodes for this operation */
+	unsigned long node_request;
 	enum maple_status status;	/* The status of the state (active, start, none, etc) */
 	unsigned char depth;		/* depth of tree descent during write */
 	unsigned char offset;
@@ -490,7 +491,8 @@ struct ma_wr_state {
 		.status = ma_start,					\
 		.min = 0,						\
 		.max = ULONG_MAX,					\
-		.alloc = NULL,						\
+		.node_request = 0,					\
+		.sheaf = NULL,						\
 		.mas_flags = 0,						\
 		.store_type = wr_invalid,				\
 	}
diff --git a/lib/maple_tree.c b/lib/maple_tree.c
index 82f39fe29a462aa3c779789a28efdd6cdef64c79..3c3c14a76d98ded3b619c178d64099b464a2ca23 100644
--- a/lib/maple_tree.c
+++ b/lib/maple_tree.c
@@ -198,6 +198,22 @@ static void mt_free_rcu(struct rcu_head *head)
 	kmem_cache_free(maple_node_cache, node);
 }
 
+static void mt_return_sheaf(struct slab_sheaf *sheaf)
+{
+	kmem_cache_return_sheaf(maple_node_cache, GFP_KERNEL, sheaf);
+}
+
+static struct slab_sheaf *mt_get_sheaf(gfp_t gfp, int count)
+{
+	return kmem_cache_prefill_sheaf(maple_node_cache, gfp, count);
+}
+
+static int mt_refill_sheaf(gfp_t gfp, struct slab_sheaf **sheaf,
+		unsigned int size)
+{
+	return kmem_cache_refill_sheaf(maple_node_cache, gfp, sheaf, size);
+}
+
/* * ma_free_rcu() - Use rcu callback to free a maple node * @node: The node to free @@ -590,67 +606,6 @@ static __always_inline bool mte_dead_node(const struct= maple_enode *enode) return ma_dead_node(node); } =20 -/* - * mas_allocated() - Get the number of nodes allocated in a maple state. - * @mas: The maple state - * - * The ma_state alloc member is overloaded to hold a pointer to the first - * allocated node or to the number of requested nodes to allocate. If bit= 0 is - * set, then the alloc contains the number of requested nodes. If there i= s an - * allocated node, then the total allocated nodes is in that node. - * - * Return: The total number of nodes allocated - */ -static inline unsigned long mas_allocated(const struct ma_state *mas) -{ - if (!mas->alloc || ((unsigned long)mas->alloc & 0x1)) - return 0; - - return mas->alloc->total; -} - -/* - * mas_set_alloc_req() - Set the requested number of allocations. - * @mas: the maple state - * @count: the number of allocations. - * - * The requested number of allocations is either in the first allocated no= de, - * located in @mas->alloc->request_count, or directly in @mas->alloc if th= ere is - * no allocated node. Set the request either in the node or do the necess= ary - * encoding to store in @mas->alloc directly. - */ -static inline void mas_set_alloc_req(struct ma_state *mas, unsigned long c= ount) -{ - if (!mas->alloc || ((unsigned long)mas->alloc & 0x1)) { - if (!count) - mas->alloc =3D NULL; - else - mas->alloc =3D (struct maple_alloc *)(((count) << 1U) | 1U); - return; - } - - mas->alloc->request_count =3D count; -} - -/* - * mas_alloc_req() - get the requested number of allocations. - * @mas: The maple state - * - * The alloc count is either stored directly in @mas, or in - * @mas->alloc->request_count if there is at least one node allocated. De= code - * the request count if it's stored directly in @mas->alloc. - * - * Return: The allocation request count. - */ -static inline unsigned int mas_alloc_req(const struct ma_state *mas) -{ - if ((unsigned long)mas->alloc & 0x1) - return (unsigned long)(mas->alloc) >> 1; - else if (mas->alloc) - return mas->alloc->request_count; - return 0; -} - /* * ma_pivots() - Get a pointer to the maple node pivots. * @node: the maple node @@ -1148,77 +1103,15 @@ static int mas_ascend(struct ma_state *mas) */ static inline struct maple_node *mas_pop_node(struct ma_state *mas) { - struct maple_alloc *ret, *node =3D mas->alloc; - unsigned long total =3D mas_allocated(mas); - unsigned int req =3D mas_alloc_req(mas); + struct maple_node *ret; =20 - /* nothing or a request pending. */ - if (WARN_ON(!total)) + if (WARN_ON_ONCE(!mas->sheaf)) return NULL; =20 - if (total =3D=3D 1) { - /* single allocation in this ma_state */ - mas->alloc =3D NULL; - ret =3D node; - goto single_node; - } - - if (node->node_count =3D=3D 1) { - /* Single allocation in this node. */ - mas->alloc =3D node->slot[0]; - mas->alloc->total =3D node->total - 1; - ret =3D node; - goto new_head; - } - node->total--; - ret =3D node->slot[--node->node_count]; - node->slot[node->node_count] =3D NULL; - -single_node: -new_head: - if (req) { - req++; - mas_set_alloc_req(mas, req); - } - + ret =3D kmem_cache_alloc_from_sheaf(maple_node_cache, GFP_NOWAIT, mas->sh= eaf); memset(ret, 0, sizeof(*ret)); - return (struct maple_node *)ret; -} - -/* - * mas_push_node() - Push a node back on the maple state allocation. - * @mas: The maple state - * @used: The used maple node - * - * Stores the maple node back into @mas->alloc for reuse. 
Updates allocat= ed and - * requested node count as necessary. - */ -static inline void mas_push_node(struct ma_state *mas, struct maple_node *= used) -{ - struct maple_alloc *reuse =3D (struct maple_alloc *)used; - struct maple_alloc *head =3D mas->alloc; - unsigned long count; - unsigned int requested =3D mas_alloc_req(mas); - - count =3D mas_allocated(mas); =20 - reuse->request_count =3D 0; - reuse->node_count =3D 0; - if (count) { - if (head->node_count < MAPLE_ALLOC_SLOTS) { - head->slot[head->node_count++] =3D reuse; - head->total++; - goto done; - } - reuse->slot[0] =3D head; - reuse->node_count =3D 1; - } - - reuse->total =3D count + 1; - mas->alloc =3D reuse; -done: - if (requested > 1) - mas_set_alloc_req(mas, requested - 1); + return ret; } =20 /* @@ -1228,75 +1121,32 @@ static inline void mas_push_node(struct ma_state *m= as, struct maple_node *used) */ static inline void mas_alloc_nodes(struct ma_state *mas, gfp_t gfp) { - struct maple_alloc *node; - unsigned long allocated =3D mas_allocated(mas); - unsigned int requested =3D mas_alloc_req(mas); - unsigned int count; - void **slots =3D NULL; - unsigned int max_req =3D 0; - - if (!requested) - return; + if (unlikely(mas->sheaf)) { + unsigned long refill =3D mas->node_request; =20 - mas_set_alloc_req(mas, 0); - if (mas->mas_flags & MA_STATE_PREALLOC) { - if (allocated) + if(kmem_cache_sheaf_size(mas->sheaf) >=3D refill) { + mas->node_request =3D 0; return; - WARN_ON(!allocated); - } - - if (!allocated || mas->alloc->node_count =3D=3D MAPLE_ALLOC_SLOTS) { - node =3D (struct maple_alloc *)mt_alloc_one(gfp); - if (!node) - goto nomem_one; - - if (allocated) { - node->slot[0] =3D mas->alloc; - node->node_count =3D 1; - } else { - node->node_count =3D 0; } =20 - mas->alloc =3D node; - node->total =3D ++allocated; - node->request_count =3D 0; - requested--; - } + if (mt_refill_sheaf(gfp, &mas->sheaf, refill)) + goto error; =20 - node =3D mas->alloc; - while (requested) { - max_req =3D MAPLE_ALLOC_SLOTS - node->node_count; - slots =3D (void **)&node->slot[node->node_count]; - max_req =3D min(requested, max_req); - count =3D mt_alloc_bulk(gfp, max_req, slots); - if (!count) - goto nomem_bulk; - - if (node->node_count =3D=3D 0) { - node->slot[0]->node_count =3D 0; - node->slot[0]->request_count =3D 0; - } + mas->node_request =3D 0; + return; + } =20 - node->node_count +=3D count; - allocated +=3D count; - /* find a non-full node*/ - do { - node =3D node->slot[0]; - } while (unlikely(node->node_count =3D=3D MAPLE_ALLOC_SLOTS)); - requested -=3D count; + mas->sheaf =3D mt_get_sheaf(gfp, mas->node_request); + if (likely(mas->sheaf)) { + mas->node_request =3D 0; + return; } - mas->alloc->total =3D allocated; - return; =20 -nomem_bulk: - /* Clean up potential freed allocations on bulk failure */ - memset(slots, 0, max_req * sizeof(unsigned long)); - mas->alloc->total =3D allocated; -nomem_one: - mas_set_alloc_req(mas, requested); +error: =20 mas_set_err(mas, -ENOMEM); } =20 + /* * mas_free() - Free an encoded maple node * @mas: The maple state @@ -1307,42 +1157,7 @@ static inline void mas_alloc_nodes(struct ma_state *= mas, gfp_t gfp) */ static inline void mas_free(struct ma_state *mas, struct maple_enode *used) { - struct maple_node *tmp =3D mte_to_node(used); - - if (mt_in_rcu(mas->tree)) - ma_free_rcu(tmp); - else - mas_push_node(mas, tmp); -} - -/* - * mas_node_count_gfp() - Check if enough nodes are allocated and request = more - * if there is not enough nodes. 
- * @mas: The maple state - * @count: The number of nodes needed - * @gfp: the gfp flags - */ -static void mas_node_count_gfp(struct ma_state *mas, int count, gfp_t gfp) -{ - unsigned long allocated =3D mas_allocated(mas); - - if (allocated < count) { - mas_set_alloc_req(mas, count - allocated); - mas_alloc_nodes(mas, gfp); - } -} - -/* - * mas_node_count() - Check if enough nodes are allocated and request more= if - * there is not enough nodes. - * @mas: The maple state - * @count: The number of nodes needed - * - * Note: Uses GFP_NOWAIT | __GFP_NOWARN for gfp flags. - */ -static void mas_node_count(struct ma_state *mas, int count) -{ - return mas_node_count_gfp(mas, count, GFP_NOWAIT | __GFP_NOWARN); + ma_free_rcu(mte_to_node(used)); } =20 /* @@ -2517,10 +2332,7 @@ static inline void mas_topiary_node(struct ma_state = *mas, enode =3D tmp_mas->node; tmp =3D mte_to_node(enode); mte_set_node_dead(enode); - if (in_rcu) - ma_free_rcu(tmp); - else - mas_push_node(mas, tmp); + ma_free_rcu(tmp); } =20 /* @@ -4168,7 +3980,7 @@ static inline void mas_wr_prealloc_setup(struct ma_wr= _state *wr_mas) * * Return: Number of nodes required for preallocation. */ -static inline int mas_prealloc_calc(struct ma_wr_state *wr_mas, void *entr= y) +static inline void mas_prealloc_calc(struct ma_wr_state *wr_mas, void *ent= ry) { struct ma_state *mas =3D wr_mas->mas; unsigned char height =3D mas_mt_height(mas); @@ -4214,7 +4026,7 @@ static inline int mas_prealloc_calc(struct ma_wr_stat= e *wr_mas, void *entry) WARN_ON_ONCE(1); } =20 - return ret; + mas->node_request =3D ret; } =20 /* @@ -4275,15 +4087,15 @@ static inline enum store_type mas_wr_store_type(str= uct ma_wr_state *wr_mas) */ static inline void mas_wr_preallocate(struct ma_wr_state *wr_mas, void *en= try) { - int request; + struct ma_state *mas =3D wr_mas->mas; =20 mas_wr_prealloc_setup(wr_mas); - wr_mas->mas->store_type =3D mas_wr_store_type(wr_mas); - request =3D mas_prealloc_calc(wr_mas, entry); - if (!request) + mas->store_type =3D mas_wr_store_type(wr_mas); + mas_prealloc_calc(wr_mas, entry); + if (!mas->node_request) return; =20 - mas_node_count(wr_mas->mas, request); + mas_alloc_nodes(mas, GFP_NOWAIT | __GFP_NOWARN); } =20 /** @@ -5398,7 +5210,6 @@ static inline void mte_destroy_walk(struct maple_enod= e *enode, */ void *mas_store(struct ma_state *mas, void *entry) { - int request; MA_WR_STATE(wr_mas, mas, entry); =20 trace_ma_write(__func__, mas, 0, entry); @@ -5428,11 +5239,11 @@ void *mas_store(struct ma_state *mas, void *entry) return wr_mas.content; } =20 - request =3D mas_prealloc_calc(&wr_mas, entry); - if (!request) + mas_prealloc_calc(&wr_mas, entry); + if (!mas->node_request) goto store; =20 - mas_node_count(mas, request); + mas_alloc_nodes(mas, GFP_NOWAIT | __GFP_NOWARN); if (mas_is_err(mas)) return NULL; =20 @@ -5520,26 +5331,25 @@ EXPORT_SYMBOL_GPL(mas_store_prealloc); int mas_preallocate(struct ma_state *mas, void *entry, gfp_t gfp) { MA_WR_STATE(wr_mas, mas, entry); - int ret =3D 0; - int request; =20 mas_wr_prealloc_setup(&wr_mas); mas->store_type =3D mas_wr_store_type(&wr_mas); - request =3D mas_prealloc_calc(&wr_mas, entry); - if (!request) - return ret; + mas_prealloc_calc(&wr_mas, entry); + if (!mas->node_request) + return 0; =20 - mas_node_count_gfp(mas, request, gfp); + mas_alloc_nodes(mas, gfp); if (mas_is_err(mas)) { - mas_set_alloc_req(mas, 0); - ret =3D xa_err(mas->node); + int ret =3D xa_err(mas->node); + + mas->node_request =3D 0; mas_destroy(mas); mas_reset(mas); return ret; } =20 mas->mas_flags |=3D 
MA_STATE_PREALLOC; - return ret; + return 0; } EXPORT_SYMBOL_GPL(mas_preallocate); =20 @@ -5553,9 +5363,6 @@ EXPORT_SYMBOL_GPL(mas_preallocate); */ void mas_destroy(struct ma_state *mas) { - struct maple_alloc *node; - unsigned long total; - /* * When using mas_for_each() to insert an expected number of elements, * it is possible that the number inserted is less than the expected @@ -5576,21 +5383,11 @@ void mas_destroy(struct ma_state *mas) } mas->mas_flags &=3D ~(MA_STATE_BULK|MA_STATE_PREALLOC); =20 - total =3D mas_allocated(mas); - while (total) { - node =3D mas->alloc; - mas->alloc =3D node->slot[0]; - if (node->node_count > 1) { - size_t count =3D node->node_count - 1; - - mt_free_bulk(count, (void __rcu **)&node->slot[1]); - total -=3D count; - } - mt_free_one(ma_mnode_ptr(node)); - total--; - } + mas->node_request =3D 0; + if (mas->sheaf) + mt_return_sheaf(mas->sheaf); =20 - mas->alloc =3D NULL; + mas->sheaf =3D NULL; } EXPORT_SYMBOL_GPL(mas_destroy); =20 @@ -5640,7 +5437,8 @@ int mas_expected_entries(struct ma_state *mas, unsign= ed long nr_entries) /* Internal nodes */ nr_nodes +=3D DIV_ROUND_UP(nr_nodes, nonleaf_cap); /* Add working room for split (2 nodes) + new parents */ - mas_node_count_gfp(mas, nr_nodes + 3, GFP_KERNEL); + mas->node_request =3D nr_nodes + 3; + mas_alloc_nodes(mas, GFP_KERNEL); =20 /* Detect if allocations run out */ mas->mas_flags |=3D MA_STATE_PREALLOC; @@ -6276,7 +6074,7 @@ bool mas_nomem(struct ma_state *mas, gfp_t gfp) mas_alloc_nodes(mas, gfp); } =20 - if (!mas_allocated(mas)) + if (!mas->sheaf) return false; =20 mas->status =3D ma_start; @@ -7671,8 +7469,9 @@ void mas_dump(const struct ma_state *mas) =20 pr_err("[%u/%u] index=3D%lx last=3D%lx\n", mas->offset, mas->end, mas->index, mas->last); - pr_err(" min=3D%lx max=3D%lx alloc=3D" PTR_FMT ", depth=3D%u, flags= =3D%x\n", - mas->min, mas->max, mas->alloc, mas->depth, mas->mas_flags); + pr_err(" min=3D%lx max=3D%lx sheaf=3D" PTR_FMT ", request %lu depth= =3D%u, flags=3D%x\n", + mas->min, mas->max, mas->sheaf, mas->node_request, mas->depth, + mas->mas_flags); if (mas->index > mas->last) pr_err("Check index & last\n"); } diff --git a/lib/test_maple_tree.c b/lib/test_maple_tree.c index 13e2a10d7554d6b1de5ffbda59f3a5bc4039a8c8..5549eb4200c7974e3bb457e0fd0= 54c434e4b85da 100644 --- a/lib/test_maple_tree.c +++ b/lib/test_maple_tree.c @@ -2746,6 +2746,7 @@ static noinline void __init check_fuzzer(struct maple= _tree *mt) mtree_test_erase(mt, ULONG_MAX - 10); } =20 +#if 0 /* duplicate the tree with a specific gap */ static noinline void __init check_dup_gaps(struct maple_tree *mt, unsigned long nr_entries, bool zero_start, @@ -2770,6 +2771,7 @@ static noinline void __init check_dup_gaps(struct map= le_tree *mt, mtree_store_range(mt, i*10, (i+1)*10 - gap, xa_mk_value(i), GFP_KERNEL); =20 + mt_dump(mt, mt_dump_dec); mt_init_flags(&newmt, MT_FLAGS_ALLOC_RANGE | MT_FLAGS_LOCK_EXTERN); mt_set_non_kernel(99999); down_write(&newmt_lock); @@ -2779,9 +2781,12 @@ static noinline void __init check_dup_gaps(struct ma= ple_tree *mt, =20 rcu_read_lock(); mas_for_each(&mas, tmp, ULONG_MAX) { + printk("%lu nodes %lu\n", mas.index, + kmem_cache_sheaf_count(newmas.sheaf)); newmas.index =3D mas.index; newmas.last =3D mas.last; mas_store(&newmas, tmp); + mt_dump(&newmt, mt_dump_dec); } rcu_read_unlock(); mas_destroy(&newmas); @@ -2878,6 +2883,7 @@ static noinline void __init check_dup(struct maple_tr= ee *mt) cond_resched(); } } +#endif =20 static noinline void __init check_bnode_min_spanning(struct maple_tree *mt) { @@ -4045,9 
+4051,11 @@ static int __init maple_tree_seed(void) check_fuzzer(&tree); mtree_destroy(&tree); =20 +#if 0 mt_init_flags(&tree, MT_FLAGS_ALLOC_RANGE); check_dup(&tree); mtree_destroy(&tree); +#endif =20 mt_init_flags(&tree, MT_FLAGS_ALLOC_RANGE); check_bnode_min_spanning(&tree); diff --git a/tools/testing/radix-tree/maple.c b/tools/testing/radix-tree/ma= ple.c index f6f923c9dc1039997953a94ec184c560b225c2d4..1bd789191f232385d69f2dd3e90= 0bac99d8919ff 100644 --- a/tools/testing/radix-tree/maple.c +++ b/tools/testing/radix-tree/maple.c @@ -63,430 +63,6 @@ struct rcu_reader_struct { struct rcu_test_struct2 *test; }; =20 -static int get_alloc_node_count(struct ma_state *mas) -{ - int count =3D 1; - struct maple_alloc *node =3D mas->alloc; - - if (!node || ((unsigned long)node & 0x1)) - return 0; - while (node->node_count) { - count +=3D node->node_count; - node =3D node->slot[0]; - } - return count; -} - -static void check_mas_alloc_node_count(struct ma_state *mas) -{ - mas_node_count_gfp(mas, MAPLE_ALLOC_SLOTS + 1, GFP_KERNEL); - mas_node_count_gfp(mas, MAPLE_ALLOC_SLOTS + 3, GFP_KERNEL); - MT_BUG_ON(mas->tree, get_alloc_node_count(mas) !=3D mas->alloc->total); - mas_destroy(mas); -} - -/* - * check_new_node() - Check the creation of new nodes and error path - * verification. - */ -static noinline void __init check_new_node(struct maple_tree *mt) -{ - - struct maple_node *mn, *mn2, *mn3; - struct maple_alloc *smn; - struct maple_node *nodes[100]; - int i, j, total; - - MA_STATE(mas, mt, 0, 0); - - check_mas_alloc_node_count(&mas); - - /* Try allocating 3 nodes */ - mtree_lock(mt); - mt_set_non_kernel(0); - /* request 3 nodes to be allocated. */ - mas_node_count(&mas, 3); - /* Allocation request of 3. */ - MT_BUG_ON(mt, mas_alloc_req(&mas) !=3D 3); - /* Allocate failed. */ - MT_BUG_ON(mt, mas.node !=3D MA_ERROR(-ENOMEM)); - MT_BUG_ON(mt, !mas_nomem(&mas, GFP_KERNEL)); - - MT_BUG_ON(mt, mas_allocated(&mas) !=3D 3); - mn =3D mas_pop_node(&mas); - MT_BUG_ON(mt, not_empty(mn)); - MT_BUG_ON(mt, mn =3D=3D NULL); - MT_BUG_ON(mt, mas.alloc =3D=3D NULL); - MT_BUG_ON(mt, mas.alloc->slot[0] =3D=3D NULL); - mas_push_node(&mas, mn); - mas_reset(&mas); - mas_destroy(&mas); - mtree_unlock(mt); - - - /* Try allocating 1 node, then 2 more */ - mtree_lock(mt); - /* Set allocation request to 1. */ - mas_set_alloc_req(&mas, 1); - /* Check Allocation request of 1. */ - MT_BUG_ON(mt, mas_alloc_req(&mas) !=3D 1); - mas_set_err(&mas, -ENOMEM); - /* Validate allocation request. */ - MT_BUG_ON(mt, !mas_nomem(&mas, GFP_KERNEL)); - /* Eat the requested node. */ - mn =3D mas_pop_node(&mas); - MT_BUG_ON(mt, not_empty(mn)); - MT_BUG_ON(mt, mn =3D=3D NULL); - MT_BUG_ON(mt, mn->slot[0] !=3D NULL); - MT_BUG_ON(mt, mn->slot[1] !=3D NULL); - MT_BUG_ON(mt, mas_allocated(&mas) !=3D 0); - - mn->parent =3D ma_parent_ptr(mn); - ma_free_rcu(mn); - mas.status =3D ma_start; - mas_destroy(&mas); - /* Allocate 3 nodes, will fail. */ - mas_node_count(&mas, 3); - /* Drop the lock and allocate 3 nodes. */ - mas_nomem(&mas, GFP_KERNEL); - /* Ensure 3 are allocated. */ - MT_BUG_ON(mt, mas_allocated(&mas) !=3D 3); - /* Allocation request of 0. */ - MT_BUG_ON(mt, mas_alloc_req(&mas) !=3D 0); - - MT_BUG_ON(mt, mas.alloc =3D=3D NULL); - MT_BUG_ON(mt, mas.alloc->slot[0] =3D=3D NULL); - MT_BUG_ON(mt, mas.alloc->slot[1] =3D=3D NULL); - /* Ensure we counted 3. */ - MT_BUG_ON(mt, mas_allocated(&mas) !=3D 3); - /* Free. */ - mas_reset(&mas); - mas_destroy(&mas); - - /* Set allocation request to 1. 
*/ - mas_set_alloc_req(&mas, 1); - MT_BUG_ON(mt, mas_alloc_req(&mas) !=3D 1); - mas_set_err(&mas, -ENOMEM); - /* Validate allocation request. */ - MT_BUG_ON(mt, !mas_nomem(&mas, GFP_KERNEL)); - MT_BUG_ON(mt, mas_allocated(&mas) !=3D 1); - /* Check the node is only one node. */ - mn =3D mas_pop_node(&mas); - MT_BUG_ON(mt, not_empty(mn)); - MT_BUG_ON(mt, mas_allocated(&mas) !=3D 0); - MT_BUG_ON(mt, mn =3D=3D NULL); - MT_BUG_ON(mt, mn->slot[0] !=3D NULL); - MT_BUG_ON(mt, mn->slot[1] !=3D NULL); - MT_BUG_ON(mt, mas_allocated(&mas) !=3D 0); - mas_push_node(&mas, mn); - MT_BUG_ON(mt, mas_allocated(&mas) !=3D 1); - MT_BUG_ON(mt, mas.alloc->node_count); - - mas_set_alloc_req(&mas, 2); /* request 2 more. */ - MT_BUG_ON(mt, mas_alloc_req(&mas) !=3D 2); - mas_set_err(&mas, -ENOMEM); - MT_BUG_ON(mt, !mas_nomem(&mas, GFP_KERNEL)); - MT_BUG_ON(mt, mas_allocated(&mas) !=3D 3); - MT_BUG_ON(mt, mas.alloc =3D=3D NULL); - MT_BUG_ON(mt, mas.alloc->slot[0] =3D=3D NULL); - MT_BUG_ON(mt, mas.alloc->slot[1] =3D=3D NULL); - for (i =3D 2; i >=3D 0; i--) { - mn =3D mas_pop_node(&mas); - MT_BUG_ON(mt, mas_allocated(&mas) !=3D i); - MT_BUG_ON(mt, !mn); - MT_BUG_ON(mt, not_empty(mn)); - mn->parent =3D ma_parent_ptr(mn); - ma_free_rcu(mn); - } - - total =3D 64; - mas_set_alloc_req(&mas, total); /* request 2 more. */ - MT_BUG_ON(mt, mas_alloc_req(&mas) !=3D total); - mas_set_err(&mas, -ENOMEM); - MT_BUG_ON(mt, !mas_nomem(&mas, GFP_KERNEL)); - for (i =3D total; i > 0; i--) { - unsigned int e =3D 0; /* expected node_count */ - - if (!MAPLE_32BIT) { - if (i >=3D 35) - e =3D i - 34; - else if (i >=3D 5) - e =3D i - 4; - else if (i >=3D 2) - e =3D i - 1; - } else { - if (i >=3D 4) - e =3D i - 3; - else if (i >=3D 1) - e =3D i - 1; - else - e =3D 0; - } - - MT_BUG_ON(mt, mas.alloc->node_count !=3D e); - mn =3D mas_pop_node(&mas); - MT_BUG_ON(mt, not_empty(mn)); - MT_BUG_ON(mt, mas_allocated(&mas) !=3D i - 1); - MT_BUG_ON(mt, !mn); - mn->parent =3D ma_parent_ptr(mn); - ma_free_rcu(mn); - } - - total =3D 100; - for (i =3D 1; i < total; i++) { - mas_set_alloc_req(&mas, i); - mas_set_err(&mas, -ENOMEM); - MT_BUG_ON(mt, !mas_nomem(&mas, GFP_KERNEL)); - for (j =3D i; j > 0; j--) { - mn =3D mas_pop_node(&mas); - MT_BUG_ON(mt, mas_allocated(&mas) !=3D j - 1); - MT_BUG_ON(mt, !mn); - MT_BUG_ON(mt, not_empty(mn)); - mas_push_node(&mas, mn); - MT_BUG_ON(mt, mas_allocated(&mas) !=3D j); - mn =3D mas_pop_node(&mas); - MT_BUG_ON(mt, not_empty(mn)); - MT_BUG_ON(mt, mas_allocated(&mas) !=3D j - 1); - mn->parent =3D ma_parent_ptr(mn); - ma_free_rcu(mn); - } - MT_BUG_ON(mt, mas_allocated(&mas) !=3D 0); - - mas_set_alloc_req(&mas, i); - mas_set_err(&mas, -ENOMEM); - MT_BUG_ON(mt, !mas_nomem(&mas, GFP_KERNEL)); - for (j =3D 0; j <=3D i/2; j++) { - MT_BUG_ON(mt, mas_allocated(&mas) !=3D i - j); - nodes[j] =3D mas_pop_node(&mas); - MT_BUG_ON(mt, mas_allocated(&mas) !=3D i - j - 1); - } - - while (j) { - j--; - mas_push_node(&mas, nodes[j]); - MT_BUG_ON(mt, mas_allocated(&mas) !=3D i - j); - } - MT_BUG_ON(mt, mas_allocated(&mas) !=3D i); - for (j =3D 0; j <=3D i/2; j++) { - MT_BUG_ON(mt, mas_allocated(&mas) !=3D i - j); - mn =3D mas_pop_node(&mas); - MT_BUG_ON(mt, not_empty(mn)); - mn->parent =3D ma_parent_ptr(mn); - ma_free_rcu(mn); - MT_BUG_ON(mt, mas_allocated(&mas) !=3D i - j - 1); - } - mas_reset(&mas); - MT_BUG_ON(mt, mas_nomem(&mas, GFP_KERNEL)); - mas_destroy(&mas); - - } - - /* Set allocation request. */ - total =3D 500; - mas_node_count(&mas, total); - /* Drop the lock and allocate the nodes. 
*/ - mas_nomem(&mas, GFP_KERNEL); - MT_BUG_ON(mt, !mas.alloc); - i =3D 1; - smn =3D mas.alloc; - while (i < total) { - for (j =3D 0; j < MAPLE_ALLOC_SLOTS; j++) { - i++; - MT_BUG_ON(mt, !smn->slot[j]); - if (i =3D=3D total) - break; - } - smn =3D smn->slot[0]; /* next. */ - } - MT_BUG_ON(mt, mas_allocated(&mas) !=3D total); - mas_reset(&mas); - mas_destroy(&mas); /* Free. */ - - MT_BUG_ON(mt, mas_allocated(&mas) !=3D 0); - for (i =3D 1; i < 128; i++) { - mas_node_count(&mas, i); /* Request */ - mas_nomem(&mas, GFP_KERNEL); /* Fill request */ - MT_BUG_ON(mt, mas_allocated(&mas) !=3D i); /* check request filled */ - for (j =3D i; j > 0; j--) { /*Free the requests */ - mn =3D mas_pop_node(&mas); /* get the next node. */ - MT_BUG_ON(mt, mn =3D=3D NULL); - MT_BUG_ON(mt, not_empty(mn)); - mn->parent =3D ma_parent_ptr(mn); - ma_free_rcu(mn); - } - MT_BUG_ON(mt, mas_allocated(&mas) !=3D 0); - } - - for (i =3D 1; i < MAPLE_NODE_MASK + 1; i++) { - MA_STATE(mas2, mt, 0, 0); - mas_node_count(&mas, i); /* Request */ - mas_nomem(&mas, GFP_KERNEL); /* Fill request */ - MT_BUG_ON(mt, mas_allocated(&mas) !=3D i); /* check request filled */ - for (j =3D 1; j <=3D i; j++) { /* Move the allocations to mas2 */ - mn =3D mas_pop_node(&mas); /* get the next node. */ - MT_BUG_ON(mt, mn =3D=3D NULL); - MT_BUG_ON(mt, not_empty(mn)); - mas_push_node(&mas2, mn); - MT_BUG_ON(mt, mas_allocated(&mas2) !=3D j); - } - MT_BUG_ON(mt, mas_allocated(&mas) !=3D 0); - MT_BUG_ON(mt, mas_allocated(&mas2) !=3D i); - - for (j =3D i; j > 0; j--) { /*Free the requests */ - MT_BUG_ON(mt, mas_allocated(&mas2) !=3D j); - mn =3D mas_pop_node(&mas2); /* get the next node. */ - MT_BUG_ON(mt, mn =3D=3D NULL); - MT_BUG_ON(mt, not_empty(mn)); - mn->parent =3D ma_parent_ptr(mn); - ma_free_rcu(mn); - } - MT_BUG_ON(mt, mas_allocated(&mas2) !=3D 0); - } - - - MT_BUG_ON(mt, mas_allocated(&mas) !=3D 0); - mas_node_count(&mas, MAPLE_ALLOC_SLOTS + 1); /* Request */ - MT_BUG_ON(mt, mas.node !=3D MA_ERROR(-ENOMEM)); - MT_BUG_ON(mt, !mas_nomem(&mas, GFP_KERNEL)); - MT_BUG_ON(mt, mas_allocated(&mas) !=3D MAPLE_ALLOC_SLOTS + 1); - MT_BUG_ON(mt, mas.alloc->node_count !=3D MAPLE_ALLOC_SLOTS); - - mn =3D mas_pop_node(&mas); /* get the next node. 
*/ - MT_BUG_ON(mt, mn =3D=3D NULL); - MT_BUG_ON(mt, not_empty(mn)); - MT_BUG_ON(mt, mas_allocated(&mas) !=3D MAPLE_ALLOC_SLOTS); - MT_BUG_ON(mt, mas.alloc->node_count !=3D MAPLE_ALLOC_SLOTS - 1); - - mas_push_node(&mas, mn); - MT_BUG_ON(mt, mas_allocated(&mas) !=3D MAPLE_ALLOC_SLOTS + 1); - MT_BUG_ON(mt, mas.alloc->node_count !=3D MAPLE_ALLOC_SLOTS); - - /* Check the limit of pop/push/pop */ - mas_node_count(&mas, MAPLE_ALLOC_SLOTS + 2); /* Request */ - MT_BUG_ON(mt, mas_alloc_req(&mas) !=3D 1); - MT_BUG_ON(mt, mas.node !=3D MA_ERROR(-ENOMEM)); - MT_BUG_ON(mt, !mas_nomem(&mas, GFP_KERNEL)); - MT_BUG_ON(mt, mas_alloc_req(&mas)); - MT_BUG_ON(mt, mas.alloc->node_count !=3D 1); - MT_BUG_ON(mt, mas_allocated(&mas) !=3D MAPLE_ALLOC_SLOTS + 2); - mn =3D mas_pop_node(&mas); - MT_BUG_ON(mt, not_empty(mn)); - MT_BUG_ON(mt, mas_allocated(&mas) !=3D MAPLE_ALLOC_SLOTS + 1); - MT_BUG_ON(mt, mas.alloc->node_count !=3D MAPLE_ALLOC_SLOTS); - mas_push_node(&mas, mn); - MT_BUG_ON(mt, mas.alloc->node_count !=3D 1); - MT_BUG_ON(mt, mas_allocated(&mas) !=3D MAPLE_ALLOC_SLOTS + 2); - mn =3D mas_pop_node(&mas); - MT_BUG_ON(mt, not_empty(mn)); - mn->parent =3D ma_parent_ptr(mn); - ma_free_rcu(mn); - for (i =3D 1; i <=3D MAPLE_ALLOC_SLOTS + 1; i++) { - mn =3D mas_pop_node(&mas); - MT_BUG_ON(mt, not_empty(mn)); - mn->parent =3D ma_parent_ptr(mn); - ma_free_rcu(mn); - } - MT_BUG_ON(mt, mas_allocated(&mas) !=3D 0); - - - for (i =3D 3; i < MAPLE_NODE_MASK * 3; i++) { - mas.node =3D MA_ERROR(-ENOMEM); - mas_node_count(&mas, i); /* Request */ - mas_nomem(&mas, GFP_KERNEL); /* Fill request */ - mn =3D mas_pop_node(&mas); /* get the next node. */ - mas_push_node(&mas, mn); /* put it back */ - mas_destroy(&mas); - - mas.node =3D MA_ERROR(-ENOMEM); - mas_node_count(&mas, i); /* Request */ - mas_nomem(&mas, GFP_KERNEL); /* Fill request */ - mn =3D mas_pop_node(&mas); /* get the next node. */ - mn2 =3D mas_pop_node(&mas); /* get the next node. */ - mas_push_node(&mas, mn); /* put them back */ - mas_push_node(&mas, mn2); - mas_destroy(&mas); - - mas.node =3D MA_ERROR(-ENOMEM); - mas_node_count(&mas, i); /* Request */ - mas_nomem(&mas, GFP_KERNEL); /* Fill request */ - mn =3D mas_pop_node(&mas); /* get the next node. */ - mn2 =3D mas_pop_node(&mas); /* get the next node. */ - mn3 =3D mas_pop_node(&mas); /* get the next node. */ - mas_push_node(&mas, mn); /* put them back */ - mas_push_node(&mas, mn2); - mas_push_node(&mas, mn3); - mas_destroy(&mas); - - mas.node =3D MA_ERROR(-ENOMEM); - mas_node_count(&mas, i); /* Request */ - mas_nomem(&mas, GFP_KERNEL); /* Fill request */ - mn =3D mas_pop_node(&mas); /* get the next node. */ - mn->parent =3D ma_parent_ptr(mn); - ma_free_rcu(mn); - mas_destroy(&mas); - - mas.node =3D MA_ERROR(-ENOMEM); - mas_node_count(&mas, i); /* Request */ - mas_nomem(&mas, GFP_KERNEL); /* Fill request */ - mn =3D mas_pop_node(&mas); /* get the next node. */ - mn->parent =3D ma_parent_ptr(mn); - ma_free_rcu(mn); - mn =3D mas_pop_node(&mas); /* get the next node. */ - mn->parent =3D ma_parent_ptr(mn); - ma_free_rcu(mn); - mn =3D mas_pop_node(&mas); /* get the next node. 
*/ - mn->parent =3D ma_parent_ptr(mn); - ma_free_rcu(mn); - mas_destroy(&mas); - } - - mas.node =3D MA_ERROR(-ENOMEM); - mas_node_count(&mas, 5); /* Request */ - mas_nomem(&mas, GFP_KERNEL); /* Fill request */ - MT_BUG_ON(mt, mas_allocated(&mas) !=3D 5); - mas.node =3D MA_ERROR(-ENOMEM); - mas_node_count(&mas, 10); /* Request */ - mas_nomem(&mas, GFP_KERNEL); /* Fill request */ - mas.status =3D ma_start; - MT_BUG_ON(mt, mas_allocated(&mas) !=3D 10); - mas_destroy(&mas); - - mas.node =3D MA_ERROR(-ENOMEM); - mas_node_count(&mas, MAPLE_ALLOC_SLOTS - 1); /* Request */ - mas_nomem(&mas, GFP_KERNEL); /* Fill request */ - MT_BUG_ON(mt, mas_allocated(&mas) !=3D MAPLE_ALLOC_SLOTS - 1); - mas.node =3D MA_ERROR(-ENOMEM); - mas_node_count(&mas, 10 + MAPLE_ALLOC_SLOTS - 1); /* Request */ - mas_nomem(&mas, GFP_KERNEL); /* Fill request */ - mas.status =3D ma_start; - MT_BUG_ON(mt, mas_allocated(&mas) !=3D 10 + MAPLE_ALLOC_SLOTS - 1); - mas_destroy(&mas); - - mas.node =3D MA_ERROR(-ENOMEM); - mas_node_count(&mas, MAPLE_ALLOC_SLOTS + 1); /* Request */ - mas_nomem(&mas, GFP_KERNEL); /* Fill request */ - MT_BUG_ON(mt, mas_allocated(&mas) !=3D MAPLE_ALLOC_SLOTS + 1); - mas.node =3D MA_ERROR(-ENOMEM); - mas_node_count(&mas, MAPLE_ALLOC_SLOTS * 2 + 2); /* Request */ - mas_nomem(&mas, GFP_KERNEL); /* Fill request */ - mas.status =3D ma_start; - MT_BUG_ON(mt, mas_allocated(&mas) !=3D MAPLE_ALLOC_SLOTS * 2 + 2); - mas_destroy(&mas); - - mas.node =3D MA_ERROR(-ENOMEM); - mas_node_count(&mas, MAPLE_ALLOC_SLOTS * 2 + 1); /* Request */ - mas_nomem(&mas, GFP_KERNEL); /* Fill request */ - MT_BUG_ON(mt, mas_allocated(&mas) !=3D MAPLE_ALLOC_SLOTS * 2 + 1); - mas.node =3D MA_ERROR(-ENOMEM); - mas_node_count(&mas, MAPLE_ALLOC_SLOTS * 3 + 2); /* Request */ - mas_nomem(&mas, GFP_KERNEL); /* Fill request */ - mas.status =3D ma_start; - MT_BUG_ON(mt, mas_allocated(&mas) !=3D MAPLE_ALLOC_SLOTS * 3 + 2); - mas_destroy(&mas); - - mtree_unlock(mt); -} - /* * Check erasing including RCU. 
*/ @@ -35458,8 +35034,7 @@ static void check_dfs_preorder(struct maple_tree *m= t) mt_init_flags(mt, MT_FLAGS_ALLOC_RANGE); mas_reset(&mas); mt_zero_nr_tallocated(); - mt_set_non_kernel(200); - mas_expected_entries(&mas, max); + mt_set_non_kernel(1000); for (count =3D 0; count <=3D max; count++) { mas.index =3D mas.last =3D count; mas_store(&mas, xa_mk_value(count)); @@ -35524,6 +35099,13 @@ static unsigned char get_vacant_height(struct ma_w= r_state *wr_mas, void *entry) return vacant_height; } =20 +static int mas_allocated(struct ma_state *mas) +{ + if (mas->sheaf) + return kmem_cache_sheaf_size(mas->sheaf); + + return 0; +} /* Preallocation testing */ static noinline void __init check_prealloc(struct maple_tree *mt) { @@ -35533,8 +35115,8 @@ static noinline void __init check_prealloc(struct m= aple_tree *mt) unsigned char vacant_height; struct maple_node *mn; void *ptr =3D check_prealloc; + struct ma_wr_state wr_mas; MA_STATE(mas, mt, 10, 20); - MA_WR_STATE(wr_mas, &mas, ptr); =20 mt_set_non_kernel(1000); for (i =3D 0; i <=3D max; i++) @@ -35542,7 +35124,11 @@ static noinline void __init check_prealloc(struct = maple_tree *mt) =20 /* Spanning store */ mas_set_range(&mas, 470, 500); - MT_BUG_ON(mt, mas_preallocate(&mas, ptr, GFP_KERNEL) !=3D 0); + wr_mas.mas =3D &mas; + + mas_wr_preallocate(&wr_mas, ptr); + MT_BUG_ON(mt, mas.store_type !=3D wr_spanning_store); + MT_BUG_ON(mt, mas_is_err(&mas)); allocated =3D mas_allocated(&mas); height =3D mas_mt_height(&mas); vacant_height =3D get_vacant_height(&wr_mas, ptr); @@ -35552,6 +35138,7 @@ static noinline void __init check_prealloc(struct m= aple_tree *mt) allocated =3D mas_allocated(&mas); MT_BUG_ON(mt, allocated !=3D 0); =20 + mas_wr_preallocate(&wr_mas, ptr); MT_BUG_ON(mt, mas_preallocate(&mas, ptr, GFP_KERNEL) !=3D 0); allocated =3D mas_allocated(&mas); height =3D mas_mt_height(&mas); @@ -35592,20 +35179,6 @@ static noinline void __init check_prealloc(struct = maple_tree *mt) mn->parent =3D ma_parent_ptr(mn); ma_free_rcu(mn); =20 - MT_BUG_ON(mt, mas_preallocate(&mas, ptr, GFP_KERNEL) !=3D 0); - allocated =3D mas_allocated(&mas); - height =3D mas_mt_height(&mas); - vacant_height =3D get_vacant_height(&wr_mas, ptr); - MT_BUG_ON(mt, allocated !=3D 1 + (height - vacant_height) * 3); - mn =3D mas_pop_node(&mas); - MT_BUG_ON(mt, mas_allocated(&mas) !=3D allocated - 1); - mas_push_node(&mas, mn); - MT_BUG_ON(mt, mas_allocated(&mas) !=3D allocated); - MT_BUG_ON(mt, mas_preallocate(&mas, ptr, GFP_KERNEL) !=3D 0); - mas_destroy(&mas); - allocated =3D mas_allocated(&mas); - MT_BUG_ON(mt, allocated !=3D 0); - MT_BUG_ON(mt, mas_preallocate(&mas, ptr, GFP_KERNEL) !=3D 0); allocated =3D mas_allocated(&mas); height =3D mas_mt_height(&mas); @@ -36394,11 +35967,17 @@ static void check_nomem_writer_race(struct maple_= tree *mt) check_load(mt, 6, xa_mk_value(0xC)); mtree_unlock(mt); =20 + mt_set_non_kernel(0); /* test for the same race but with mas_store_gfp() */ mtree_store_range(mt, 0, 5, xa_mk_value(0xA), GFP_KERNEL); mtree_store_range(mt, 6, 10, NULL, GFP_KERNEL); =20 mas_set_range(&mas, 0, 5); + + /* setup writer 2 that will trigger the race condition */ + mt_set_private(mt); + mt_set_callback(writer2); + mtree_lock(mt); mas_store_gfp(&mas, NULL, GFP_KERNEL); =20 @@ -36435,7 +36014,6 @@ static inline int check_vma_modification(struct map= le_tree *mt) __mas_set_range(&mas, 0x7ffde4ca2000, 0x7ffffffff000 - 1); mas_preallocate(&mas, NULL, GFP_KERNEL); mas_store_prealloc(&mas, NULL); - mt_dump(mt, mt_dump_hex); =20 mas_destroy(&mas); mtree_unlock(mt); @@ 
-36453,6 +36031,8 @@ static inline void check_bulk_rebalance(struct mapl= e_tree *mt) =20 build_full_tree(mt, 0, 2); =20 + + mtree_lock(mt); /* erase every entry in the tree */ do { /* set up bulk store mode */ @@ -36462,6 +36042,85 @@ static inline void check_bulk_rebalance(struct map= le_tree *mt) } while (mas_prev(&mas, 0) !=3D NULL); =20 mas_destroy(&mas); + mtree_unlock(mt); +} + +static unsigned long get_last_index(struct ma_state *mas) +{ + struct maple_node *node =3D mas_mn(mas); + enum maple_type mt =3D mte_node_type(mas->node); + unsigned long *pivots =3D ma_pivots(node, mt); + unsigned long last_index =3D mas_data_end(mas); + + BUG_ON(last_index =3D=3D 0); + + return pivots[last_index - 1] + 1; +} + +/* + * Assert that we handle spanning stores that consume the entirety of the = right + * leaf node correctly. + */ +static void test_spanning_store_regression(void) +{ + unsigned long from =3D 0, to =3D 0; + DEFINE_MTREE(tree); + MA_STATE(mas, &tree, 0, 0); + + /* + * Build a 3-level tree. We require a parent node below the root node + * and 2 leaf nodes under it, so we can span the entirety of the right + * hand node. + */ + build_full_tree(&tree, 0, 3); + + /* Descend into position at depth 2. */ + mas_reset(&mas); + mas_start(&mas); + mas_descend(&mas); + mas_descend(&mas); + + /* + * We need to establish a tree like the below. + * + * Then we can try a store in [from, to] which results in a spanned + * store across nodes B and C, with the maple state at the time of the + * write being such that only the subtree at A and below is considered. + * + * Height + * 0 Root Node + * / \ + * pivot =3D to / \ pivot =3D ULONG_MAX + * / \ + * 1 A [-----] ... + * / \ + * pivot =3D from / \ pivot =3D to + * / \ + * 2 (LEAVES) B [-----] [-----] C + * ^--- Last pivot to. + */ + while (true) { + unsigned long tmp =3D get_last_index(&mas); + + if (mas_next_sibling(&mas)) { + from =3D tmp; + to =3D mas.max; + } else { + break; + } + } + + BUG_ON(from =3D=3D 0 && to =3D=3D 0); + + /* Perform the store. */ + mas_set_range(&mas, from, to); + mas_store_gfp(&mas, xa_mk_value(0xdead), GFP_KERNEL); + + /* If the regression occurs, the validation will fail. */ + mt_validate(&tree); + + /* Cleanup. */ + __mt_destroy(&tree); } =20 void farmer_tests(void) @@ -36525,6 +36184,7 @@ void farmer_tests(void) check_collapsing_rebalance(&tree); mtree_destroy(&tree); =20 + mt_init_flags(&tree, MT_FLAGS_ALLOC_RANGE); check_null_expand(&tree); mtree_destroy(&tree); @@ -36538,10 +36198,6 @@ void farmer_tests(void) check_erase_testset(&tree); mtree_destroy(&tree); =20 - mt_init_flags(&tree, 0); - check_new_node(&tree); - mtree_destroy(&tree); - if (!MAPLE_32BIT) { mt_init_flags(&tree, MT_FLAGS_ALLOC_RANGE); check_rcu_simulated(&tree); @@ -36563,95 +36219,13 @@ void farmer_tests(void) =20 /* No memory handling */ check_nomem(&tree); -} - -static unsigned long get_last_index(struct ma_state *mas) -{ - struct maple_node *node =3D mas_mn(mas); - enum maple_type mt =3D mte_node_type(mas->node); - unsigned long *pivots =3D ma_pivots(node, mt); - unsigned long last_index =3D mas_data_end(mas); - - BUG_ON(last_index =3D=3D 0); =20 - return pivots[last_index - 1] + 1; -} - -/* - * Assert that we handle spanning stores that consume the entirety of the = right - * leaf node correctly. - */ -static void test_spanning_store_regression(void) -{ - unsigned long from =3D 0, to =3D 0; - DEFINE_MTREE(tree); - MA_STATE(mas, &tree, 0, 0); - - /* - * Build a 3-level tree. 
We require a parent node below the root node - * and 2 leaf nodes under it, so we can span the entirety of the right - * hand node. - */ - build_full_tree(&tree, 0, 3); - - /* Descend into position at depth 2. */ - mas_reset(&mas); - mas_start(&mas); - mas_descend(&mas); - mas_descend(&mas); - - /* - * We need to establish a tree like the below. - * - * Then we can try a store in [from, to] which results in a spanned - * store across nodes B and C, with the maple state at the time of the - * write being such that only the subtree at A and below is considered. - * - * Height - * 0 Root Node - * / \ - * pivot =3D to / \ pivot =3D ULONG_MAX - * / \ - * 1 A [-----] ... - * / \ - * pivot =3D from / \ pivot =3D to - * / \ - * 2 (LEAVES) B [-----] [-----] C - * ^--- Last pivot to. - */ - while (true) { - unsigned long tmp =3D get_last_index(&mas); - - if (mas_next_sibling(&mas)) { - from =3D tmp; - to =3D mas.max; - } else { - break; - } - } - - BUG_ON(from =3D=3D 0 && to =3D=3D 0); - - /* Perform the store. */ - mas_set_range(&mas, from, to); - mas_store_gfp(&mas, xa_mk_value(0xdead), GFP_KERNEL); - - /* If the regression occurs, the validation will fail. */ - mt_validate(&tree); - - /* Cleanup. */ - __mt_destroy(&tree); -} - -static void regression_tests(void) -{ test_spanning_store_regression(); } =20 void maple_tree_tests(void) { #if !defined(BENCH) - regression_tests(); farmer_tests(); #endif maple_tree_seed(); diff --git a/tools/testing/shared/linux.c b/tools/testing/shared/linux.c index e0255f53159bd3a1325d49192283dd6790a5e3b8..6a15665fc8315168c718e6810c7= deaeed13a3a6a 100644 --- a/tools/testing/shared/linux.c +++ b/tools/testing/shared/linux.c @@ -82,7 +82,8 @@ void *kmem_cache_alloc_lru(struct kmem_cache *cachep, str= uct list_lru *lru, =20 if (!(gfp & __GFP_DIRECT_RECLAIM)) { if (!cachep->non_kernel) { - cachep->exec_callback =3D true; + if (cachep->callback) + cachep->exec_callback =3D true; return NULL; } =20 @@ -236,6 +237,8 @@ int kmem_cache_alloc_bulk(struct kmem_cache *cachep, gf= p_t gfp, size_t size, for (i =3D 0; i < size; i++) __kmem_cache_free_locked(cachep, p[i]); pthread_mutex_unlock(&cachep->lock); + if (cachep->callback) + cachep->exec_callback =3D true; return 0; } =20 @@ -288,9 +291,8 @@ kmem_cache_prefill_sheaf(struct kmem_cache *s, gfp_t gf= p, unsigned int size) capacity =3D s->sheaf_capacity; =20 sheaf =3D malloc(sizeof(*sheaf) + sizeof(void *) * s->sheaf_capacity * ca= pacity); - if (!sheaf) { + if (!sheaf) return NULL; - } =20 memset(sheaf, 0, size); sheaf->cache =3D s; --=20 2.50.1 From nobody Mon Oct 6 08:30:19 2025 Received: from smtp-out1.suse.de (smtp-out1.suse.de [195.135.223.130]) (using TLSv1.2 with cipher ECDHE-RSA-AES128-GCM-SHA256 (128/128 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id 0AB282E7BDD for ; Wed, 23 Jul 2025 13:35:39 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; arc=none smtp.client-ip=195.135.223.130 ARC-Seal: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1753277741; cv=none; b=GPwZ6QJJiy9052XDkwYlydrTkG+J3eJY9qmSgOe2QLCCfUnZ3tSuacIJelABqBycqcFqN82TWm6r/sHPupCtR4MFVTn+5++BifcengzcTDTHfs6Q54Zs/JyIEDTgVP7xJ+8KzaS6I0EsL1KQ/3E5K4mq4xq6nDAreiJLCYQzhVg= ARC-Message-Signature: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1753277741; c=relaxed/simple; bh=C3p2FLShVQd4jgjmy06VA+dAMp6lqjx2IO/RKHlrvTg=; h=From:Date:Subject:MIME-Version:Content-Type:Message-Id:References: In-Reply-To:To:Cc; 
b=t1IOUn+ndWHwTIprwWGmLnKB2S2FdBI+5IcYAw7IvSp/Zd85H2Ye4BYhwfHc+nrvliQnuYu9KxpCBZC8LMuvjMpAsm5EAub1tn05Oa+xEhfgFcbOT5WMtisdLVp91aMK9IsVaGs13CGoeStKb/dFnK2g7OiNLitHJAo+/r2iKWo= ARC-Authentication-Results: i=1; smtp.subspace.kernel.org; dmarc=none (p=none dis=none) header.from=suse.cz; spf=pass smtp.mailfrom=suse.cz; dkim=pass (1024-bit key) header.d=suse.cz header.i=@suse.cz header.b=D7xnFeFJ; dkim=permerror (0-bit key) header.d=suse.cz header.i=@suse.cz header.b=W0T4MPEY; dkim=pass (1024-bit key) header.d=suse.cz header.i=@suse.cz header.b=D7xnFeFJ; dkim=permerror (0-bit key) header.d=suse.cz header.i=@suse.cz header.b=W0T4MPEY; arc=none smtp.client-ip=195.135.223.130 Authentication-Results: smtp.subspace.kernel.org; dmarc=none (p=none dis=none) header.from=suse.cz Authentication-Results: smtp.subspace.kernel.org; spf=pass smtp.mailfrom=suse.cz Authentication-Results: smtp.subspace.kernel.org; dkim=pass (1024-bit key) header.d=suse.cz header.i=@suse.cz header.b="D7xnFeFJ"; dkim=permerror (0-bit key) header.d=suse.cz header.i=@suse.cz header.b="W0T4MPEY"; dkim=pass (1024-bit key) header.d=suse.cz header.i=@suse.cz header.b="D7xnFeFJ"; dkim=permerror (0-bit key) header.d=suse.cz header.i=@suse.cz header.b="W0T4MPEY" Received: from imap1.dmz-prg2.suse.org (unknown [10.150.64.97]) (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits) key-exchange X25519 server-signature RSA-PSS (4096 bits) server-digest SHA256) (No client certificate requested) by smtp-out1.suse.de (Postfix) with ESMTPS id DF8B6218FD; Wed, 23 Jul 2025 13:35:04 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.cz; s=susede2_rsa; t=1753277704; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc: mime-version:mime-version:content-type:content-type: content-transfer-encoding:content-transfer-encoding: in-reply-to:in-reply-to:references:references; bh=GMN8ss+JM7ZT/0b+qlebdOQL8dTktv7rSpXSzwpKjiM=; b=D7xnFeFJAkpHugq7K+vv5+xSMSSqv3LoNlPVCWeCFQbPAPPqcioyTHfY/fAuH0ah3hc1qm xwLQCk3W5hSFk7qYtX4Dy23J2nmo5zwA00GEuK33qaoGBeHyH77mviLv80FrOEwd1nz8gW JcFlnOtoJyZkUh2CfE0Vp/5gazF0rLQ= DKIM-Signature: v=1; a=ed25519-sha256; c=relaxed/relaxed; d=suse.cz; s=susede2_ed25519; t=1753277704; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc: mime-version:mime-version:content-type:content-type: content-transfer-encoding:content-transfer-encoding: in-reply-to:in-reply-to:references:references; bh=GMN8ss+JM7ZT/0b+qlebdOQL8dTktv7rSpXSzwpKjiM=; b=W0T4MPEYaYr+YrCIKUaN1zHTbbptplOPMW3ImnnR4cspWkdh/NgMwtL+HWczth9piHUT82 i4k4BTrXlL7fnDCg== Authentication-Results: smtp-out1.suse.de; none DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.cz; s=susede2_rsa; t=1753277704; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc: mime-version:mime-version:content-type:content-type: content-transfer-encoding:content-transfer-encoding: in-reply-to:in-reply-to:references:references; bh=GMN8ss+JM7ZT/0b+qlebdOQL8dTktv7rSpXSzwpKjiM=; b=D7xnFeFJAkpHugq7K+vv5+xSMSSqv3LoNlPVCWeCFQbPAPPqcioyTHfY/fAuH0ah3hc1qm xwLQCk3W5hSFk7qYtX4Dy23J2nmo5zwA00GEuK33qaoGBeHyH77mviLv80FrOEwd1nz8gW JcFlnOtoJyZkUh2CfE0Vp/5gazF0rLQ= DKIM-Signature: v=1; a=ed25519-sha256; c=relaxed/relaxed; d=suse.cz; s=susede2_ed25519; t=1753277704; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc: mime-version:mime-version:content-type:content-type: content-transfer-encoding:content-transfer-encoding: in-reply-to:in-reply-to:references:references; bh=GMN8ss+JM7ZT/0b+qlebdOQL8dTktv7rSpXSzwpKjiM=; 
b=W0T4MPEYaYr+YrCIKUaN1zHTbbptplOPMW3ImnnR4cspWkdh/NgMwtL+HWczth9piHUT82 i4k4BTrXlL7fnDCg== Received: from imap1.dmz-prg2.suse.org (localhost [127.0.0.1]) (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits) key-exchange X25519 server-signature RSA-PSS (4096 bits) server-digest SHA256) (No client certificate requested) by imap1.dmz-prg2.suse.org (Postfix) with ESMTPS id C72B713AF2; Wed, 23 Jul 2025 13:35:04 +0000 (UTC) Received: from dovecot-director2.suse.de ([2a07:de40:b281:106:10:150:64:167]) by imap1.dmz-prg2.suse.org with ESMTPSA id OCxcMAjlgGh0IwAAD6G6ig (envelope-from ); Wed, 23 Jul 2025 13:35:04 +0000 From: Vlastimil Babka Date: Wed, 23 Jul 2025 15:34:46 +0200 Subject: [PATCH v5 13/14] maple_tree: Add single node allocation support to maple state Precedence: bulk X-Mailing-List: linux-kernel@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 Content-Type: text/plain; charset="utf-8" Content-Transfer-Encoding: quoted-printable Message-Id: <20250723-slub-percpu-caches-v5-13-b792cd830f5d@suse.cz> References: <20250723-slub-percpu-caches-v5-0-b792cd830f5d@suse.cz> In-Reply-To: <20250723-slub-percpu-caches-v5-0-b792cd830f5d@suse.cz> To: Suren Baghdasaryan , "Liam R. Howlett" , Christoph Lameter , David Rientjes Cc: Roman Gushchin , Harry Yoo , Uladzislau Rezki , linux-mm@kvack.org, linux-kernel@vger.kernel.org, rcu@vger.kernel.org, maple-tree@lists.infradead.org, vbabka@suse.cz, "Liam R. Howlett" X-Mailer: b4 0.14.2 X-Spamd-Result: default: False [-4.30 / 50.00]; BAYES_HAM(-3.00)[100.00%]; NEURAL_HAM_LONG(-1.00)[-1.000]; NEURAL_HAM_SHORT(-0.20)[-0.999]; MIME_GOOD(-0.10)[text/plain]; RCVD_VIA_SMTP_AUTH(0.00)[]; ARC_NA(0.00)[]; FREEMAIL_ENVRCPT(0.00)[gmail.com]; MIME_TRACE(0.00)[0:+]; FUZZY_RATELIMITED(0.00)[rspamd.com]; TO_DN_SOME(0.00)[]; RCPT_COUNT_TWELVE(0.00)[13]; RCVD_TLS_ALL(0.00)[]; MID_RHS_MATCH_FROM(0.00)[]; DKIM_SIGNED(0.00)[suse.cz:s=susede2_rsa,suse.cz:s=susede2_ed25519]; FROM_HAS_DN(0.00)[]; FREEMAIL_CC(0.00)[linux.dev,oracle.com,gmail.com,kvack.org,vger.kernel.org,lists.infradead.org,suse.cz,Oracle.com]; R_RATELIMIT(0.00)[to_ip_from(RLwn5r54y1cp81no5tmbbew5oc)]; FROM_EQ_ENVFROM(0.00)[]; TO_MATCH_ENVRCPT_ALL(0.00)[]; RCVD_COUNT_TWO(0.00)[2]; DBL_BLOCKED_OPENRESOLVER(0.00)[imap1.dmz-prg2.suse.org:helo] X-Spam-Flag: NO X-Spam-Level: X-Spam-Score: -4.30 From: "Liam R. Howlett" The fast path through a write will require replacing a single node in the tree. Using a sheaf (32 nodes) is too heavy for the fast path, so special case the node store operation by just allocating one node in the maple state. Signed-off-by: Liam R. 
Howlett Signed-off-by: Vlastimil Babka --- include/linux/maple_tree.h | 4 +++- lib/maple_tree.c | 47 ++++++++++++++++++++++++++++++++++++++++--= ---- 2 files changed, 44 insertions(+), 7 deletions(-) diff --git a/include/linux/maple_tree.h b/include/linux/maple_tree.h index 3cf1ae9dde7ce43fa20ae400c01fefad048c302e..61eb5e7d09ad0133978e3ac4b2a= f66710421e769 100644 --- a/include/linux/maple_tree.h +++ b/include/linux/maple_tree.h @@ -443,6 +443,7 @@ struct ma_state { unsigned long min; /* The minimum index of this node - implied pivot min= */ unsigned long max; /* The maximum index of this node - implied pivot max= */ struct slab_sheaf *sheaf; /* Allocated nodes for this operation */ + struct maple_node *alloc; /* allocated nodes */ unsigned long node_request; enum maple_status status; /* The status of the state (active, start, none= , etc) */ unsigned char depth; /* depth of tree descent during write */ @@ -491,8 +492,9 @@ struct ma_wr_state { .status =3D ma_start, \ .min =3D 0, \ .max =3D ULONG_MAX, \ - .node_request=3D 0, \ .sheaf =3D NULL, \ + .alloc =3D NULL, \ + .node_request=3D 0, \ .mas_flags =3D 0, \ .store_type =3D wr_invalid, \ } diff --git a/lib/maple_tree.c b/lib/maple_tree.c index 3c3c14a76d98ded3b619c178d64099b464a2ca23..9aa782b1497f224e7366ebbd65f= 997523ee0c8ab 100644 --- a/lib/maple_tree.c +++ b/lib/maple_tree.c @@ -1101,16 +1101,23 @@ static int mas_ascend(struct ma_state *mas) * * Return: A pointer to a maple node. */ -static inline struct maple_node *mas_pop_node(struct ma_state *mas) +static __always_inline struct maple_node *mas_pop_node(struct ma_state *ma= s) { struct maple_node *ret; =20 + if (mas->alloc) { + ret =3D mas->alloc; + mas->alloc =3D NULL; + goto out; + } + if (WARN_ON_ONCE(!mas->sheaf)) return NULL; =20 ret =3D kmem_cache_alloc_from_sheaf(maple_node_cache, GFP_NOWAIT, mas->sh= eaf); - memset(ret, 0, sizeof(*ret)); =20 +out: + memset(ret, 0, sizeof(*ret)); return ret; } =20 @@ -1121,9 +1128,34 @@ static inline struct maple_node *mas_pop_node(struct= ma_state *mas) */ static inline void mas_alloc_nodes(struct ma_state *mas, gfp_t gfp) { - if (unlikely(mas->sheaf)) { - unsigned long refill =3D mas->node_request; + if (!mas->node_request) + return; + + if (mas->node_request =3D=3D 1) { + if (mas->sheaf) + goto use_sheaf; + + if (mas->alloc) + return; =20 + mas->alloc =3D mt_alloc_one(gfp); + if (!mas->alloc) + goto error; + + mas->node_request =3D 0; + return; + } + +use_sheaf: + if (unlikely(mas->alloc)) { + mt_free_one(mas->alloc); + mas->alloc =3D NULL; + } + + if (mas->sheaf) { + unsigned long refill; + + refill =3D mas->node_request; if(kmem_cache_sheaf_size(mas->sheaf) >=3D refill) { mas->node_request =3D 0; return; @@ -5386,8 +5418,11 @@ void mas_destroy(struct ma_state *mas) mas->node_request =3D 0; if (mas->sheaf) mt_return_sheaf(mas->sheaf); - mas->sheaf =3D NULL; + + if (mas->alloc) + mt_free_one(mas->alloc); + mas->alloc =3D NULL; } EXPORT_SYMBOL_GPL(mas_destroy); =20 @@ -6074,7 +6109,7 @@ bool mas_nomem(struct ma_state *mas, gfp_t gfp) mas_alloc_nodes(mas, gfp); } =20 - if (!mas->sheaf) + if (!mas->sheaf && !mas->alloc) return false; =20 mas->status =3D ma_start; --=20 2.50.1 From nobody Mon Oct 6 08:30:19 2025 Received: from smtp-out2.suse.de (smtp-out2.suse.de [195.135.223.131]) (using TLSv1.2 with cipher ECDHE-RSA-AES128-GCM-SHA256 (128/128 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id 041D12E92C0 for ; Wed, 23 Jul 2025 13:35:56 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; 
arc=none smtp.client-ip=195.135.223.131 ARC-Seal: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1753277759; cv=none; b=XrGy0zkBw96fmOi2kgvlYMZR5dkNcMG9B2/DU57/NTDn1PDKTG5wjjXp3sP17ELXwbRwPOel+TNBslHC3pAaG6OEGA+s28LPlaFJhM6XjRvjv2xHITLGeKKPibgn+JdDaNLDm84SNBtYGEQ+I2M3o2C9GoTXF2UtF06y7u+kZbY= ARC-Message-Signature: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1753277759; c=relaxed/simple; bh=z5pUnyw8hUonIK9+AnJweN/AmTdJLkkwvhC18stu32w=; h=From:Date:Subject:MIME-Version:Content-Type:Message-Id:References: In-Reply-To:To:Cc; b=Ckzl1DFHRHfvzJhizj9RrPe7iotiplp+4Vl5Z6Lmpgy2zFj6OsawlViUJHTU7LrqClxdzCgCWZD8wNH2376v8jj4QHMTHtIJuGTW9grEd3O2PumCjFUepKIjHr/ltQ2nIFkwvxHAFH4DBWaqbY/8QwjHQ1ARzY2ZfTfJRNKm6lY= ARC-Authentication-Results: i=1; smtp.subspace.kernel.org; dmarc=none (p=none dis=none) header.from=suse.cz; spf=pass smtp.mailfrom=suse.cz; dkim=pass (1024-bit key) header.d=suse.cz header.i=@suse.cz header.b=sn6d6Wfc; dkim=permerror (0-bit key) header.d=suse.cz header.i=@suse.cz header.b=3DXGttR4; dkim=pass (1024-bit key) header.d=suse.cz header.i=@suse.cz header.b=sn6d6Wfc; dkim=permerror (0-bit key) header.d=suse.cz header.i=@suse.cz header.b=3DXGttR4; arc=none smtp.client-ip=195.135.223.131 Authentication-Results: smtp.subspace.kernel.org; dmarc=none (p=none dis=none) header.from=suse.cz Authentication-Results: smtp.subspace.kernel.org; spf=pass smtp.mailfrom=suse.cz Authentication-Results: smtp.subspace.kernel.org; dkim=pass (1024-bit key) header.d=suse.cz header.i=@suse.cz header.b="sn6d6Wfc"; dkim=permerror (0-bit key) header.d=suse.cz header.i=@suse.cz header.b="3DXGttR4"; dkim=pass (1024-bit key) header.d=suse.cz header.i=@suse.cz header.b="sn6d6Wfc"; dkim=permerror (0-bit key) header.d=suse.cz header.i=@suse.cz header.b="3DXGttR4" Received: from imap1.dmz-prg2.suse.org (imap1.dmz-prg2.suse.org [IPv6:2a07:de40:b281:104:10:150:64:97]) (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits) key-exchange X25519 server-signature RSA-PSS (4096 bits) server-digest SHA256) (No client certificate requested) by smtp-out2.suse.de (Postfix) with ESMTPS id 0AF611F794; Wed, 23 Jul 2025 13:35:05 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.cz; s=susede2_rsa; t=1753277705; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc: mime-version:mime-version:content-type:content-type: content-transfer-encoding:content-transfer-encoding: in-reply-to:in-reply-to:references:references; bh=tzLrbkIiOuwOvdyzKWDx4NnuEyUkSbgCzU8mbhcB3rk=; b=sn6d6WfcnZz0DB5GiAC6bKFdOor9xAUmtBOjzv7UyoUJi3Brgkx5qyprggCv0Jz+3zOEBw VAtyXImFe99kv+r3YDGZME13MuC05nGNUHzdZ4oneFTxPl9nqQx/Ouv962zpAgZk2qzTC4 9imwhDfigzZKEOTRlrH+cY08NQ+0Uro= DKIM-Signature: v=1; a=ed25519-sha256; c=relaxed/relaxed; d=suse.cz; s=susede2_ed25519; t=1753277705; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc: mime-version:mime-version:content-type:content-type: content-transfer-encoding:content-transfer-encoding: in-reply-to:in-reply-to:references:references; bh=tzLrbkIiOuwOvdyzKWDx4NnuEyUkSbgCzU8mbhcB3rk=; b=3DXGttR4IxzgIWDb2FW7GeODLV2hRB1+1SJUyMLWvyXK6xe4UaC+7BfWGz+b3dUeL4MiHq BeWT8hkanTBQ29CA== Authentication-Results: smtp-out2.suse.de; dkim=pass header.d=suse.cz header.s=susede2_rsa header.b=sn6d6Wfc; dkim=pass header.d=suse.cz header.s=susede2_ed25519 header.b=3DXGttR4 DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.cz; s=susede2_rsa; t=1753277705; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc: 
mime-version:mime-version:content-type:content-type: content-transfer-encoding:content-transfer-encoding: in-reply-to:in-reply-to:references:references; bh=tzLrbkIiOuwOvdyzKWDx4NnuEyUkSbgCzU8mbhcB3rk=; b=sn6d6WfcnZz0DB5GiAC6bKFdOor9xAUmtBOjzv7UyoUJi3Brgkx5qyprggCv0Jz+3zOEBw VAtyXImFe99kv+r3YDGZME13MuC05nGNUHzdZ4oneFTxPl9nqQx/Ouv962zpAgZk2qzTC4 9imwhDfigzZKEOTRlrH+cY08NQ+0Uro= DKIM-Signature: v=1; a=ed25519-sha256; c=relaxed/relaxed; d=suse.cz; s=susede2_ed25519; t=1753277705; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc: mime-version:mime-version:content-type:content-type: content-transfer-encoding:content-transfer-encoding: in-reply-to:in-reply-to:references:references; bh=tzLrbkIiOuwOvdyzKWDx4NnuEyUkSbgCzU8mbhcB3rk=; b=3DXGttR4IxzgIWDb2FW7GeODLV2hRB1+1SJUyMLWvyXK6xe4UaC+7BfWGz+b3dUeL4MiHq BeWT8hkanTBQ29CA== Received: from imap1.dmz-prg2.suse.org (localhost [127.0.0.1]) (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits) key-exchange X25519 server-signature RSA-PSS (4096 bits) server-digest SHA256) (No client certificate requested) by imap1.dmz-prg2.suse.org (Postfix) with ESMTPS id DD4CB13ADD; Wed, 23 Jul 2025 13:35:04 +0000 (UTC) Received: from dovecot-director2.suse.de ([2a07:de40:b281:106:10:150:64:167]) by imap1.dmz-prg2.suse.org with ESMTPSA id gCa6NQjlgGh0IwAAD6G6ig (envelope-from ); Wed, 23 Jul 2025 13:35:04 +0000 From: Vlastimil Babka Date: Wed, 23 Jul 2025 15:34:47 +0200 Subject: [PATCH v5 14/14] maple_tree: Convert forking to use the sheaf interface Precedence: bulk X-Mailing-List: linux-kernel@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 Content-Type: text/plain; charset="utf-8" Content-Transfer-Encoding: quoted-printable Message-Id: <20250723-slub-percpu-caches-v5-14-b792cd830f5d@suse.cz> References: <20250723-slub-percpu-caches-v5-0-b792cd830f5d@suse.cz> In-Reply-To: <20250723-slub-percpu-caches-v5-0-b792cd830f5d@suse.cz> To: Suren Baghdasaryan , "Liam R. Howlett" , Christoph Lameter , David Rientjes Cc: Roman Gushchin , Harry Yoo , Uladzislau Rezki , linux-mm@kvack.org, linux-kernel@vger.kernel.org, rcu@vger.kernel.org, maple-tree@lists.infradead.org, vbabka@suse.cz, "Liam R. Howlett" X-Mailer: b4 0.14.2 X-Spamd-Result: default: False [-4.51 / 50.00]; BAYES_HAM(-3.00)[100.00%]; NEURAL_HAM_LONG(-1.00)[-1.000]; R_DKIM_ALLOW(-0.20)[suse.cz:s=susede2_rsa,suse.cz:s=susede2_ed25519]; NEURAL_HAM_SHORT(-0.20)[-1.000]; MIME_GOOD(-0.10)[text/plain]; MX_GOOD(-0.01)[]; RBL_SPAMHAUS_BLOCKED_OPENRESOLVER(0.00)[2a07:de40:b281:104:10:150:64:97:from]; RCVD_VIA_SMTP_AUTH(0.00)[]; ARC_NA(0.00)[]; MIME_TRACE(0.00)[0:+]; FUZZY_RATELIMITED(0.00)[rspamd.com]; TO_DN_SOME(0.00)[]; RCPT_COUNT_TWELVE(0.00)[13]; RCVD_TLS_ALL(0.00)[]; FREEMAIL_ENVRCPT(0.00)[gmail.com]; MID_RHS_MATCH_FROM(0.00)[]; FROM_EQ_ENVFROM(0.00)[]; FROM_HAS_DN(0.00)[]; FREEMAIL_CC(0.00)[linux.dev,oracle.com,gmail.com,kvack.org,vger.kernel.org,lists.infradead.org,suse.cz,Oracle.com]; R_RATELIMIT(0.00)[to_ip_from(RLfsjnp7neds983g95ihcnuzgq)]; RCVD_COUNT_TWO(0.00)[2]; TO_MATCH_ENVRCPT_ALL(0.00)[]; DBL_BLOCKED_OPENRESOLVER(0.00)[imap1.dmz-prg2.suse.org:helo,imap1.dmz-prg2.suse.org:rdns]; DKIM_SIGNED(0.00)[suse.cz:s=susede2_rsa,suse.cz:s=susede2_ed25519]; DKIM_TRACE(0.00)[suse.cz:+] X-Spam-Flag: NO X-Spam-Level: X-Rspamd-Queue-Id: 0AF611F794 X-Rspamd-Server: rspamd2.dmz-prg2.suse.org X-Rspamd-Action: no action X-Spam-Score: -4.51 From: "Liam R. Howlett" Use the generic interface which should result in less bulk allocations during a forking. 
A part of this is to abstract the freeing of the sheaf or maple state allocations into its own function so mas_destroy() and the tree duplication code can use the same functionality to return any unused resources. Signed-off-by: Liam R. Howlett Signed-off-by: Vlastimil Babka Reviewed-by: Suren Baghdasaryan --- lib/maple_tree.c | 42 +++++++++++++++++++++++------------------- 1 file changed, 23 insertions(+), 19 deletions(-) diff --git a/lib/maple_tree.c b/lib/maple_tree.c index 9aa782b1497f224e7366ebbd65f997523ee0c8ab..180d5e2ea49440248aaae04a066= 276406b2537ed 100644 --- a/lib/maple_tree.c +++ b/lib/maple_tree.c @@ -1178,6 +1178,19 @@ static inline void mas_alloc_nodes(struct ma_state *= mas, gfp_t gfp) mas_set_err(mas, -ENOMEM); } =20 +static inline void mas_empty_nodes(struct ma_state *mas) +{ + mas->node_request =3D 0; + if (mas->sheaf) { + mt_return_sheaf(mas->sheaf); + mas->sheaf =3D NULL; + } + + if (mas->alloc) { + mt_free_one(mas->alloc); + mas->alloc =3D NULL; + } +} =20 /* * mas_free() - Free an encoded maple node @@ -5414,15 +5427,7 @@ void mas_destroy(struct ma_state *mas) mas->mas_flags &=3D ~MA_STATE_REBALANCE; } mas->mas_flags &=3D ~(MA_STATE_BULK|MA_STATE_PREALLOC); - - mas->node_request =3D 0; - if (mas->sheaf) - mt_return_sheaf(mas->sheaf); - mas->sheaf =3D NULL; - - if (mas->alloc) - mt_free_one(mas->alloc); - mas->alloc =3D NULL; + mas_empty_nodes(mas); } EXPORT_SYMBOL_GPL(mas_destroy); =20 @@ -6499,7 +6504,7 @@ static inline void mas_dup_alloc(struct ma_state *mas= , struct ma_state *new_mas, struct maple_node *node =3D mte_to_node(mas->node); struct maple_node *new_node =3D mte_to_node(new_mas->node); enum maple_type type; - unsigned char request, count, i; + unsigned char count, i; void __rcu **slots; void __rcu **new_slots; unsigned long val; @@ -6507,20 +6512,17 @@ static inline void mas_dup_alloc(struct ma_state *m= as, struct ma_state *new_mas, /* Allocate memory for child nodes. */ type =3D mte_node_type(mas->node); new_slots =3D ma_slots(new_node, type); - request =3D mas_data_end(mas) + 1; - count =3D mt_alloc_bulk(gfp, request, (void **)new_slots); - if (unlikely(count < request)) { - memset(new_slots, 0, request * sizeof(void *)); - mas_set_err(mas, -ENOMEM); + count =3D mas->node_request =3D mas_data_end(mas) + 1; + mas_alloc_nodes(mas, gfp); + if (unlikely(mas_is_err(mas))) return; - } =20 - /* Restore node type information in slots. */ slots =3D ma_slots(node, type); for (i =3D 0; i < count; i++) { val =3D (unsigned long)mt_slot_locked(mas->tree, slots, i); val &=3D MAPLE_NODE_MASK; - ((unsigned long *)new_slots)[i] |=3D val; + new_slots[i] =3D ma_mnode_ptr((unsigned long)mas_pop_node(mas) | + val); } } =20 @@ -6574,7 +6576,7 @@ static inline void mas_dup_build(struct ma_state *mas= , struct ma_state *new_mas, /* Only allocate child nodes for non-leaf nodes. */ mas_dup_alloc(mas, new_mas, gfp); if (unlikely(mas_is_err(mas))) - return; + goto empty_mas; } else { /* * This is the last leaf node and duplication is @@ -6607,6 +6609,8 @@ static inline void mas_dup_build(struct ma_state *mas= , struct ma_state *new_mas, /* Make them the same height */ new_mas->tree->ma_flags =3D mas->tree->ma_flags; rcu_assign_pointer(new_mas->tree->ma_root, root); +empty_mas: + mas_empty_nodes(mas); } =20 /** --=20 2.50.1
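Taken together, the last two patches leave the maple tree with one uniform batching pattern: state the need in mas->node_request, let mas_alloc_nodes() satisfy it from a single spare node (the fast path added in patch 13) or a prefilled sheaf, draw nodes with mas_pop_node(), and release leftovers with mas_empty_nodes(). Below is a minimal sketch of that pattern, not part of the series; the function name example_batch and the loop body are hypothetical, while the mas_*() calls are the ones added or changed above.

static void example_batch(struct ma_state *mas, unsigned char count,
			  gfp_t gfp)
{
	unsigned char i;

	/* State up front how many nodes the operation needs. */
	mas->node_request = count;

	/* Fills mas->alloc (count == 1) or mas->sheaf (count > 1). */
	mas_alloc_nodes(mas, gfp);
	if (mas_is_err(mas))
		return;

	for (i = 0; i < count; i++) {
		struct maple_node *node;

		node = mas_pop_node(mas);
		if (WARN_ON_ONCE(!node))
			break;
		/* ... link the node into the tree being built ... */
	}

	/* Return the sheaf and/or the spare node, reset the request. */
	mas_empty_nodes(mas);
}

Because the sheaf was prefilled for the full request, the pops inside the loop do not hit the page allocator; mas_nomem() only comes into play when the up-front request itself fails.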