From: Vlastimil Babka
Date: Wed, 10 Sep 2025 10:01:03 +0200
Subject: [PATCH v8 01/23] locking/local_lock: Expose dep_map in local_trylock_t.
Message-Id: <20250910-slub-percpu-caches-v8-1-ca3099d8352c@suse.cz>
References: <20250910-slub-percpu-caches-v8-0-ca3099d8352c@suse.cz>
In-Reply-To: <20250910-slub-percpu-caches-v8-0-ca3099d8352c@suse.cz>
To: Suren Baghdasaryan, "Liam R. Howlett", Christoph Lameter, David Rientjes
Cc: Roman Gushchin, Harry Yoo, Uladzislau Rezki, Sidhartha Kumar,
    linux-mm@kvack.org, linux-kernel@vger.kernel.org, rcu@vger.kernel.org,
    maple-tree@lists.infradead.org, vbabka@suse.cz, Alexei Starovoitov,
    Sebastian Andrzej Siewior

From: Alexei Starovoitov

The lockdep_is_held() macro assumes that "struct lockdep_map dep_map;" is a
top-level field of any lock that participates in LOCKDEP. Make it so for
local_trylock_t.
Reviewed-by: Sebastian Andrzej Siewior
Signed-off-by: Alexei Starovoitov
Reviewed-by: Harry Yoo
Signed-off-by: Vlastimil Babka
Reviewed-by: Suren Baghdasaryan
---
 include/linux/local_lock_internal.h | 9 ++++++---
 1 file changed, 6 insertions(+), 3 deletions(-)

diff --git a/include/linux/local_lock_internal.h b/include/linux/local_lock_internal.h
index d80b5306a2c0ccf95a3405b6b947b5f1f9a3bd38..949de37700dbc10feafc06d0b52382cf2e00c694 100644
--- a/include/linux/local_lock_internal.h
+++ b/include/linux/local_lock_internal.h
@@ -17,7 +17,10 @@ typedef struct {
 
 /* local_trylock() and local_trylock_irqsave() only work with local_trylock_t */
 typedef struct {
-	local_lock_t llock;
+#ifdef CONFIG_DEBUG_LOCK_ALLOC
+	struct lockdep_map dep_map;
+	struct task_struct *owner;
+#endif
 	u8 acquired;
 } local_trylock_t;
 
@@ -31,7 +34,7 @@ typedef struct {
 	.owner = NULL,
 
 # define LOCAL_TRYLOCK_DEBUG_INIT(lockname)		\
-	.llock = { LOCAL_LOCK_DEBUG_INIT((lockname).llock) },
+	LOCAL_LOCK_DEBUG_INIT(lockname)
 
 static inline void local_lock_acquire(local_lock_t *l)
 {
@@ -81,7 +84,7 @@ do {								\
 	local_lock_debug_init(lock);				\
 } while (0)
 
-#define __local_trylock_init(lock) __local_lock_init(lock.llock)
+#define __local_trylock_init(lock) __local_lock_init((local_lock_t *)lock)
 
 #define __spinlock_nested_bh_init(lock)		\
 do {								\
-- 
2.51.0

From: Vlastimil Babka
Date: Wed, 10 Sep 2025 10:01:04 +0200
Subject: [PATCH v8 02/23] slab: simplify init_kmem_cache_nodes() error handling
Message-Id: <20250910-slub-percpu-caches-v8-2-ca3099d8352c@suse.cz>
References: <20250910-slub-percpu-caches-v8-0-ca3099d8352c@suse.cz>
In-Reply-To: <20250910-slub-percpu-caches-v8-0-ca3099d8352c@suse.cz>
To: Suren Baghdasaryan, "Liam R. Howlett", Christoph Lameter, David Rientjes
Cc: Roman Gushchin, Harry Yoo, Uladzislau Rezki, Sidhartha Kumar,
    linux-mm@kvack.org, linux-kernel@vger.kernel.org, rcu@vger.kernel.org,
    maple-tree@lists.infradead.org, vbabka@suse.cz

We don't need to call free_kmem_cache_nodes() immediately when failing to
allocate a kmem_cache_node, because when we return 0, do_kmem_cache_create()
calls __kmem_cache_release(), which also performs free_kmem_cache_nodes().
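
To make the reasoning concrete, here is a self-contained userspace sketch of
the pattern relied on above. It is illustrative only, not the SLUB code, and
all names in it are invented: when the caller's error path already runs the
full release, the callee can simply report failure without local cleanup.

  #include <stdlib.h>
  #include <string.h>

  #define NR_NODES 4

  struct toy_cache { void *nodes[NR_NODES]; };

  /* analogous to free_kmem_cache_nodes(): safe on a partially set up cache */
  static void toy_release(struct toy_cache *c)
  {
          for (int i = 0; i < NR_NODES; i++) {
                  free(c->nodes[i]);
                  c->nodes[i] = NULL;
          }
  }

  /* analogous to init_kmem_cache_nodes(): on failure, just report it */
  static int toy_init_nodes(struct toy_cache *c)
  {
          for (int i = 0; i < NR_NODES; i++) {
                  c->nodes[i] = malloc(64);
                  if (!c->nodes[i])
                          return 0;       /* no local cleanup needed */
          }
          return 1;
  }

  /* analogous to do_kmem_cache_create(): a single error path releases everything */
  static int toy_create(struct toy_cache *c)
  {
          memset(c, 0, sizeof(*c));
          if (!toy_init_nodes(c)) {
                  toy_release(c);         /* like __kmem_cache_release() */
                  return -1;
          }
          return 0;
  }

  int main(void)
  {
          struct toy_cache c;
          return toy_create(&c);
  }
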
Reviewed-by: Harry Yoo
Signed-off-by: Vlastimil Babka
Reviewed-by: Suren Baghdasaryan
---
 mm/slub.c | 4 +---
 1 file changed, 1 insertion(+), 3 deletions(-)

diff --git a/mm/slub.c b/mm/slub.c
index 30003763d224c2704a4b93082b8b47af12dcffc5..9f671ec76131c4b0b28d5d568aa45842b5efb6d4 100644
--- a/mm/slub.c
+++ b/mm/slub.c
@@ -5669,10 +5669,8 @@ static int init_kmem_cache_nodes(struct kmem_cache *s)
 
 		n = kmem_cache_alloc_node(kmem_cache_node, GFP_KERNEL, node);
 
-		if (!n) {
-			free_kmem_cache_nodes(s);
+		if (!n)
 			return 0;
-		}
 
 		init_kmem_cache_node(n);
 		s->node[node] = n;
-- 
2.51.0

From: Vlastimil Babka
Date: Wed, 10 Sep 2025 10:01:05 +0200
Subject: [PATCH v8 03/23] slab: add opt-in caching layer of percpu sheaves
Message-Id: <20250910-slub-percpu-caches-v8-3-ca3099d8352c@suse.cz>
References: <20250910-slub-percpu-caches-v8-0-ca3099d8352c@suse.cz>
In-Reply-To: <20250910-slub-percpu-caches-v8-0-ca3099d8352c@suse.cz>
To: Suren Baghdasaryan, "Liam R. Howlett", Christoph Lameter, David Rientjes
Cc: Roman Gushchin, Harry Yoo, Uladzislau Rezki, Sidhartha Kumar,
    linux-mm@kvack.org, linux-kernel@vger.kernel.org, rcu@vger.kernel.org,
    maple-tree@lists.infradead.org, vbabka@suse.cz, Venkat Rao Bagalkote

Specifying a non-zero value for a new struct kmem_cache_args field
sheaf_capacity will set up a caching layer of percpu arrays called sheaves
of the given capacity for the created cache.

Allocations from the cache will allocate via the percpu sheaves (main or
spare) as long as they have no NUMA node preference. Frees will also put
the object back into one of the sheaves.

When both percpu sheaves are found empty during an allocation, an empty
sheaf may be replaced with a full one from the per-node barn. If none are
available and the allocation is allowed to block, an empty sheaf is
refilled from slab(s) by an internal bulk alloc operation. When both percpu
sheaves are full during freeing, the barn can replace a full one with an
empty one, unless it is over the limit on full sheaves. In that case a
sheaf is flushed to slab(s) by an internal bulk free operation. Flushing
sheaves and barns is also wired to the existing cpu flushing and cache
shrinking operations.

The sheaves do not distinguish NUMA locality of the cached objects. If an
allocation is requested with kmem_cache_alloc_node() (or a mempolicy with
strict_numa mode enabled) with a specific node (not NUMA_NO_NODE), the
sheaves are bypassed.

The bulk operations exposed to slab users also try to utilize the sheaves
as long as the necessary (full or empty) sheaves are available on the cpu
or in the barn. Once depleted, they fall back to bulk alloc/free to slabs
directly to avoid double copying.
The sheaf_capacity value is exported in sysfs for observability.

Sysfs CONFIG_SLUB_STATS counters alloc_cpu_sheaf and free_cpu_sheaf count
objects allocated or freed using the sheaves (and thus not counting towards
the other alloc/free path counters). Counters sheaf_refill and sheaf_flush
count objects filled or flushed from or to slab pages, and can be used to
assess how effective the caching is. The refill and flush operations will
also count towards the usual alloc_fastpath/slowpath, free_fastpath/slowpath
and other counters for the backing slabs.

For barn operations, barn_get and barn_put count how many full sheaves were
taken from or put to the barn; the _fail variants count how many such
requests could not be satisfied, mainly because the barn was either empty
or full. While the barn also holds empty sheaves to make some operations
easier, these are not critical enough to mandate their own counters.
Finally, there are sheaf_alloc/sheaf_free counters.

Access to the percpu sheaves is protected by local_trylock() when potential
callers include irq context, and by local_lock() otherwise (such as when we
already know the gfp flags allow blocking). The trylock failures should be
rare and we can easily fall back. Each per-NUMA-node barn has a spin_lock.

When slub_debug is enabled for a cache with sheaf_capacity also specified,
the latter is ignored so that allocations and frees reach the slow path
where debugging hooks are processed. Similarly, we ignore it with
CONFIG_SLUB_TINY, which prefers low memory usage to performance.

[boot failure: https://lore.kernel.org/all/583eacf5-c971-451a-9f76-fed0e341b815@linux.ibm.com/ ]

Reported-and-tested-by: Venkat Rao Bagalkote
Reviewed-by: Harry Yoo
Reviewed-by: Suren Baghdasaryan
Signed-off-by: Vlastimil Babka
---
 include/linux/slab.h |   31 ++
 mm/slab.h            |    2 +
 mm/slab_common.c     |    5 +-
 mm/slub.c            | 1164 +++++++++++++++++++++++++++++++++++++++++++++++---
 4 files changed, 1142 insertions(+), 60 deletions(-)

diff --git a/include/linux/slab.h b/include/linux/slab.h
index d5a8ab98035cf3e3d9043e3b038e1bebeff05b52..49acbcdc6696fd120c402adf757b3f41660ad50a 100644
--- a/include/linux/slab.h
+++ b/include/linux/slab.h
@@ -335,6 +335,37 @@ struct kmem_cache_args {
	 * %NULL means no constructor.
	 */
	void (*ctor)(void *);
+	/**
+	 * @sheaf_capacity: Enable sheaves of given capacity for the cache.
+	 *
+	 * With a non-zero value, allocations from the cache go through caching
+	 * arrays called sheaves. Each cpu has a main sheaf that's always
+	 * present, and a spare sheaf that may be not present. When both become
+	 * empty, there's an attempt to replace an empty sheaf with a full sheaf
+	 * from the per-node barn.
+	 *
+	 * When no full sheaf is available, and gfp flags allow blocking, a
+	 * sheaf is allocated and filled from slab(s) using bulk allocation.
+	 * Otherwise the allocation falls back to the normal operation
+	 * allocating a single object from a slab.
+	 *
+	 * Analogically when freeing and both percpu sheaves are full, the barn
+	 * may replace it with an empty sheaf, unless it's over capacity. In
+	 * that case a sheaf is bulk freed to slab pages.
+	 *
+	 * The sheaves do not enforce NUMA placement of objects, so allocations
+	 * via kmem_cache_alloc_node() with a node specified other than
+	 * NUMA_NO_NODE will bypass them.
+	 *
+	 * Bulk allocation and free operations also try to use the cpu sheaves
+	 * and barn, but fallback to using slab pages directly.
+	 *
+	 * When slub_debug is enabled for the cache, the sheaf_capacity argument
+	 * is ignored.
+	 *
+	 * %0 means no sheaves will be created.
+	 */
+	unsigned int sheaf_capacity;
 };
 
 struct kmem_cache *__kmem_cache_create_args(const char *name,
diff --git a/mm/slab.h b/mm/slab.h
index 248b34c839b7ca39cf14e139c62d116efb97d30f..206987ce44a4d053ebe3b5e50784d2dd23822cd1 100644
--- a/mm/slab.h
+++ b/mm/slab.h
@@ -235,6 +235,7 @@ struct kmem_cache {
 #ifndef CONFIG_SLUB_TINY
	struct kmem_cache_cpu __percpu *cpu_slab;
 #endif
+	struct slub_percpu_sheaves __percpu *cpu_sheaves;
	/* Used for retrieving partial slabs, etc. */
	slab_flags_t flags;
	unsigned long min_partial;
@@ -248,6 +249,7 @@ struct kmem_cache {
	/* Number of per cpu partial slabs to keep around */
	unsigned int cpu_partial_slabs;
 #endif
+	unsigned int sheaf_capacity;
	struct kmem_cache_order_objects oo;
 
	/* Allocation and freeing of slabs */
diff --git a/mm/slab_common.c b/mm/slab_common.c
index bfe7c40eeee1a01c175766935c1e3c0304434a53..e2b197e47866c30acdbd1fee4159f262a751c5a7 100644
--- a/mm/slab_common.c
+++ b/mm/slab_common.c
@@ -163,6 +163,9 @@ int slab_unmergeable(struct kmem_cache *s)
		return 1;
 #endif
 
+	if (s->cpu_sheaves)
+		return 1;
+
	/*
	 * We may have set a slab to be unmergeable during bootstrap.
	 */
@@ -321,7 +324,7 @@ struct kmem_cache *__kmem_cache_create_args(const char *name,
			  object_size - args->usersize < args->useroffset))
		args->usersize = args->useroffset = 0;
 
-	if (!args->usersize)
+	if (!args->usersize && !args->sheaf_capacity)
		s = __kmem_cache_alias(name, object_size, args->align,
				       flags, args->ctor);
	if (s)
diff --git a/mm/slub.c b/mm/slub.c
index 9f671ec76131c4b0b28d5d568aa45842b5efb6d4..cba188b7e04ddf86debf9bc27a2f725db1b2056e 100644
--- a/mm/slub.c
+++ b/mm/slub.c
@@ -363,8 +363,10 @@ static inline void debugfs_slab_add(struct kmem_cache *s) { }
 #endif
 
 enum stat_item {
+	ALLOC_PCS,		/* Allocation from percpu sheaf */
	ALLOC_FASTPATH,		/* Allocation from cpu slab */
	ALLOC_SLOWPATH,		/* Allocation by getting a new cpu slab */
+	FREE_PCS,		/* Free to percpu sheaf */
	FREE_FASTPATH,		/* Free to cpu slab */
	FREE_SLOWPATH,		/* Freeing not to cpu slab */
	FREE_FROZEN,		/* Freeing to frozen slab */
@@ -389,6 +391,14 @@ enum stat_item {
	CPU_PARTIAL_FREE,	/* Refill cpu partial on free */
	CPU_PARTIAL_NODE,	/* Refill cpu partial from node partial */
	CPU_PARTIAL_DRAIN,	/* Drain cpu partial to node partial */
+	SHEAF_FLUSH,		/* Objects flushed from a sheaf */
+	SHEAF_REFILL,		/* Objects refilled to a sheaf */
+	SHEAF_ALLOC,		/* Allocation of an empty sheaf */
+	SHEAF_FREE,		/* Freeing of an empty sheaf */
+	BARN_GET,		/* Got full sheaf from barn */
+	BARN_GET_FAIL,		/* Failed to get full sheaf from barn */
+	BARN_PUT,		/* Put full sheaf to barn */
+	BARN_PUT_FAIL,		/* Failed to put full sheaf to barn */
	NR_SLUB_STAT_ITEMS
 };
 
@@ -435,6 +445,32 @@ void stat_add(const struct kmem_cache *s, enum stat_item si, int v)
 #endif
 }
 
+#define MAX_FULL_SHEAVES	10
+#define MAX_EMPTY_SHEAVES	10
+
+struct node_barn {
+	spinlock_t lock;
+	struct list_head sheaves_full;
+	struct list_head sheaves_empty;
+	unsigned int nr_full;
+	unsigned int nr_empty;
+};
+
+struct slab_sheaf {
+	union {
+		struct rcu_head rcu_head;
+		struct list_head barn_list;
+	};
+	unsigned int size;
+	void *objects[];
+};
+
+struct slub_percpu_sheaves {
+	local_trylock_t lock;
+	struct slab_sheaf *main; /* never NULL when unlocked */
+	struct slab_sheaf *spare; /* empty or full, may be NULL */
+};
+
 /*
  * The slab lists for all objects.
*/ @@ -447,6 +483,7 @@ struct kmem_cache_node { atomic_long_t total_objects; struct list_head full; #endif + struct node_barn *barn; }; =20 static inline struct kmem_cache_node *get_node(struct kmem_cache *s, int n= ode) @@ -454,6 +491,12 @@ static inline struct kmem_cache_node *get_node(struct = kmem_cache *s, int node) return s->node[node]; } =20 +/* Get the barn of the current cpu's memory node */ +static inline struct node_barn *get_barn(struct kmem_cache *s) +{ + return get_node(s, numa_mem_id())->barn; +} + /* * Iterator over all nodes. The body will be executed for each node that h= as * a kmem_cache_node structure allocated (which is true for all online nod= es) @@ -470,12 +513,19 @@ static inline struct kmem_cache_node *get_node(struct= kmem_cache *s, int node) */ static nodemask_t slab_nodes; =20 -#ifndef CONFIG_SLUB_TINY /* * Workqueue used for flush_cpu_slab(). */ static struct workqueue_struct *flushwq; -#endif + +struct slub_flush_work { + struct work_struct work; + struct kmem_cache *s; + bool skip; +}; + +static DEFINE_MUTEX(flush_lock); +static DEFINE_PER_CPU(struct slub_flush_work, slub_flush); =20 /******************************************************************** * Core slab cache functions @@ -2473,6 +2523,360 @@ static void *setup_object(struct kmem_cache *s, voi= d *object) return object; } =20 +static struct slab_sheaf *alloc_empty_sheaf(struct kmem_cache *s, gfp_t gf= p) +{ + struct slab_sheaf *sheaf =3D kzalloc(struct_size(sheaf, objects, + s->sheaf_capacity), gfp); + + if (unlikely(!sheaf)) + return NULL; + + stat(s, SHEAF_ALLOC); + + return sheaf; +} + +static void free_empty_sheaf(struct kmem_cache *s, struct slab_sheaf *shea= f) +{ + kfree(sheaf); + + stat(s, SHEAF_FREE); +} + +static int __kmem_cache_alloc_bulk(struct kmem_cache *s, gfp_t flags, + size_t size, void **p); + + +static int refill_sheaf(struct kmem_cache *s, struct slab_sheaf *sheaf, + gfp_t gfp) +{ + int to_fill =3D s->sheaf_capacity - sheaf->size; + int filled; + + if (!to_fill) + return 0; + + filled =3D __kmem_cache_alloc_bulk(s, gfp, to_fill, + &sheaf->objects[sheaf->size]); + + sheaf->size +=3D filled; + + stat_add(s, SHEAF_REFILL, filled); + + if (filled < to_fill) + return -ENOMEM; + + return 0; +} + + +static struct slab_sheaf *alloc_full_sheaf(struct kmem_cache *s, gfp_t gfp) +{ + struct slab_sheaf *sheaf =3D alloc_empty_sheaf(s, gfp); + + if (!sheaf) + return NULL; + + if (refill_sheaf(s, sheaf, gfp)) { + free_empty_sheaf(s, sheaf); + return NULL; + } + + return sheaf; +} + +/* + * Maximum number of objects freed during a single flush of main pcs sheaf. + * Translates directly to an on-stack array size. + */ +#define PCS_BATCH_MAX 32U + +static void __kmem_cache_free_bulk(struct kmem_cache *s, size_t size, void= **p); + +/* + * Free all objects from the main sheaf. In order to perform + * __kmem_cache_free_bulk() outside of cpu_sheaves->lock, work in batches = where + * object pointers are moved to a on-stack array under the lock. To bound = the + * stack usage, limit each batch to PCS_BATCH_MAX. 
+ * + * returns true if at least partially flushed + */ +static bool sheaf_flush_main(struct kmem_cache *s) +{ + struct slub_percpu_sheaves *pcs; + unsigned int batch, remaining; + void *objects[PCS_BATCH_MAX]; + struct slab_sheaf *sheaf; + bool ret =3D false; + +next_batch: + if (!local_trylock(&s->cpu_sheaves->lock)) + return ret; + + pcs =3D this_cpu_ptr(s->cpu_sheaves); + sheaf =3D pcs->main; + + batch =3D min(PCS_BATCH_MAX, sheaf->size); + + sheaf->size -=3D batch; + memcpy(objects, sheaf->objects + sheaf->size, batch * sizeof(void *)); + + remaining =3D sheaf->size; + + local_unlock(&s->cpu_sheaves->lock); + + __kmem_cache_free_bulk(s, batch, &objects[0]); + + stat_add(s, SHEAF_FLUSH, batch); + + ret =3D true; + + if (remaining) + goto next_batch; + + return ret; +} + +/* + * Free all objects from a sheaf that's unused, i.e. not linked to any + * cpu_sheaves, so we need no locking and batching. The locking is also not + * necessary when flushing cpu's sheaves (both spare and main) during cpu + * hotremove as the cpu is not executing anymore. + */ +static void sheaf_flush_unused(struct kmem_cache *s, struct slab_sheaf *sh= eaf) +{ + if (!sheaf->size) + return; + + stat_add(s, SHEAF_FLUSH, sheaf->size); + + __kmem_cache_free_bulk(s, sheaf->size, &sheaf->objects[0]); + + sheaf->size =3D 0; +} + +/* + * Caller needs to make sure migration is disabled in order to fully flush + * single cpu's sheaves + * + * must not be called from an irq + * + * flushing operations are rare so let's keep it simple and flush to slabs + * directly, skipping the barn + */ +static void pcs_flush_all(struct kmem_cache *s) +{ + struct slub_percpu_sheaves *pcs; + struct slab_sheaf *spare; + + local_lock(&s->cpu_sheaves->lock); + pcs =3D this_cpu_ptr(s->cpu_sheaves); + + spare =3D pcs->spare; + pcs->spare =3D NULL; + + local_unlock(&s->cpu_sheaves->lock); + + if (spare) { + sheaf_flush_unused(s, spare); + free_empty_sheaf(s, spare); + } + + sheaf_flush_main(s); +} + +static void __pcs_flush_all_cpu(struct kmem_cache *s, unsigned int cpu) +{ + struct slub_percpu_sheaves *pcs; + + pcs =3D per_cpu_ptr(s->cpu_sheaves, cpu); + + /* The cpu is not executing anymore so we don't need pcs->lock */ + sheaf_flush_unused(s, pcs->main); + if (pcs->spare) { + sheaf_flush_unused(s, pcs->spare); + free_empty_sheaf(s, pcs->spare); + pcs->spare =3D NULL; + } +} + +static void pcs_destroy(struct kmem_cache *s) +{ + int cpu; + + for_each_possible_cpu(cpu) { + struct slub_percpu_sheaves *pcs; + + pcs =3D per_cpu_ptr(s->cpu_sheaves, cpu); + + /* can happen when unwinding failed create */ + if (!pcs->main) + continue; + + /* + * We have already passed __kmem_cache_shutdown() so everything + * was flushed and there should be no objects allocated from + * slabs, otherwise kmem_cache_destroy() would have aborted. + * Therefore something would have to be really wrong if the + * warnings here trigger, and we should rather leave objects and + * sheaves to leak in that case. 
+ */ + + WARN_ON(pcs->spare); + + if (!WARN_ON(pcs->main->size)) { + free_empty_sheaf(s, pcs->main); + pcs->main =3D NULL; + } + } + + free_percpu(s->cpu_sheaves); + s->cpu_sheaves =3D NULL; +} + +static struct slab_sheaf *barn_get_empty_sheaf(struct node_barn *barn) +{ + struct slab_sheaf *empty =3D NULL; + unsigned long flags; + + spin_lock_irqsave(&barn->lock, flags); + + if (barn->nr_empty) { + empty =3D list_first_entry(&barn->sheaves_empty, + struct slab_sheaf, barn_list); + list_del(&empty->barn_list); + barn->nr_empty--; + } + + spin_unlock_irqrestore(&barn->lock, flags); + + return empty; +} + +/* + * The following two functions are used mainly in cases where we have to u= ndo an + * intended action due to a race or cpu migration. Thus they do not check = the + * empty or full sheaf limits for simplicity. + */ + +static void barn_put_empty_sheaf(struct node_barn *barn, struct slab_sheaf= *sheaf) +{ + unsigned long flags; + + spin_lock_irqsave(&barn->lock, flags); + + list_add(&sheaf->barn_list, &barn->sheaves_empty); + barn->nr_empty++; + + spin_unlock_irqrestore(&barn->lock, flags); +} + +static void barn_put_full_sheaf(struct node_barn *barn, struct slab_sheaf = *sheaf) +{ + unsigned long flags; + + spin_lock_irqsave(&barn->lock, flags); + + list_add(&sheaf->barn_list, &barn->sheaves_full); + barn->nr_full++; + + spin_unlock_irqrestore(&barn->lock, flags); +} + +/* + * If a full sheaf is available, return it and put the supplied empty one = to + * barn. We ignore the limit on empty sheaves as the number of sheaves doe= sn't + * change. + */ +static struct slab_sheaf * +barn_replace_empty_sheaf(struct node_barn *barn, struct slab_sheaf *empty) +{ + struct slab_sheaf *full =3D NULL; + unsigned long flags; + + spin_lock_irqsave(&barn->lock, flags); + + if (barn->nr_full) { + full =3D list_first_entry(&barn->sheaves_full, struct slab_sheaf, + barn_list); + list_del(&full->barn_list); + list_add(&empty->barn_list, &barn->sheaves_empty); + barn->nr_full--; + barn->nr_empty++; + } + + spin_unlock_irqrestore(&barn->lock, flags); + + return full; +} + +/* + * If an empty sheaf is available, return it and put the supplied full one= to + * barn. But if there are too many full sheaves, reject this with -E2BIG. 
+ */ +static struct slab_sheaf * +barn_replace_full_sheaf(struct node_barn *barn, struct slab_sheaf *full) +{ + struct slab_sheaf *empty; + unsigned long flags; + + spin_lock_irqsave(&barn->lock, flags); + + if (barn->nr_full >=3D MAX_FULL_SHEAVES) { + empty =3D ERR_PTR(-E2BIG); + } else if (!barn->nr_empty) { + empty =3D ERR_PTR(-ENOMEM); + } else { + empty =3D list_first_entry(&barn->sheaves_empty, struct slab_sheaf, + barn_list); + list_del(&empty->barn_list); + list_add(&full->barn_list, &barn->sheaves_full); + barn->nr_empty--; + barn->nr_full++; + } + + spin_unlock_irqrestore(&barn->lock, flags); + + return empty; +} + +static void barn_init(struct node_barn *barn) +{ + spin_lock_init(&barn->lock); + INIT_LIST_HEAD(&barn->sheaves_full); + INIT_LIST_HEAD(&barn->sheaves_empty); + barn->nr_full =3D 0; + barn->nr_empty =3D 0; +} + +static void barn_shrink(struct kmem_cache *s, struct node_barn *barn) +{ + struct list_head empty_list; + struct list_head full_list; + struct slab_sheaf *sheaf, *sheaf2; + unsigned long flags; + + INIT_LIST_HEAD(&empty_list); + INIT_LIST_HEAD(&full_list); + + spin_lock_irqsave(&barn->lock, flags); + + list_splice_init(&barn->sheaves_full, &full_list); + barn->nr_full =3D 0; + list_splice_init(&barn->sheaves_empty, &empty_list); + barn->nr_empty =3D 0; + + spin_unlock_irqrestore(&barn->lock, flags); + + list_for_each_entry_safe(sheaf, sheaf2, &full_list, barn_list) { + sheaf_flush_unused(s, sheaf); + free_empty_sheaf(s, sheaf); + } + + list_for_each_entry_safe(sheaf, sheaf2, &empty_list, barn_list) + free_empty_sheaf(s, sheaf); +} + /* * Slab allocation and freeing */ @@ -3344,11 +3748,40 @@ static inline void __flush_cpu_slab(struct kmem_cac= he *s, int cpu) put_partials_cpu(s, c); } =20 -struct slub_flush_work { - struct work_struct work; - struct kmem_cache *s; - bool skip; -}; +static inline void flush_this_cpu_slab(struct kmem_cache *s) +{ + struct kmem_cache_cpu *c =3D this_cpu_ptr(s->cpu_slab); + + if (c->slab) + flush_slab(s, c); + + put_partials(s); +} + +static bool has_cpu_slab(int cpu, struct kmem_cache *s) +{ + struct kmem_cache_cpu *c =3D per_cpu_ptr(s->cpu_slab, cpu); + + return c->slab || slub_percpu_partial(c); +} + +#else /* CONFIG_SLUB_TINY */ +static inline void __flush_cpu_slab(struct kmem_cache *s, int cpu) { } +static inline bool has_cpu_slab(int cpu, struct kmem_cache *s) { return fa= lse; } +static inline void flush_this_cpu_slab(struct kmem_cache *s) { } +#endif /* CONFIG_SLUB_TINY */ + +static bool has_pcs_used(int cpu, struct kmem_cache *s) +{ + struct slub_percpu_sheaves *pcs; + + if (!s->cpu_sheaves) + return false; + + pcs =3D per_cpu_ptr(s->cpu_sheaves, cpu); + + return (pcs->spare || pcs->main->size); +} =20 /* * Flush cpu slab. 
@@ -3358,30 +3791,18 @@ struct slub_flush_work { static void flush_cpu_slab(struct work_struct *w) { struct kmem_cache *s; - struct kmem_cache_cpu *c; struct slub_flush_work *sfw; =20 sfw =3D container_of(w, struct slub_flush_work, work); =20 s =3D sfw->s; - c =3D this_cpu_ptr(s->cpu_slab); - - if (c->slab) - flush_slab(s, c); - - put_partials(s); -} =20 -static bool has_cpu_slab(int cpu, struct kmem_cache *s) -{ - struct kmem_cache_cpu *c =3D per_cpu_ptr(s->cpu_slab, cpu); + if (s->cpu_sheaves) + pcs_flush_all(s); =20 - return c->slab || slub_percpu_partial(c); + flush_this_cpu_slab(s); } =20 -static DEFINE_MUTEX(flush_lock); -static DEFINE_PER_CPU(struct slub_flush_work, slub_flush); - static void flush_all_cpus_locked(struct kmem_cache *s) { struct slub_flush_work *sfw; @@ -3392,7 +3813,7 @@ static void flush_all_cpus_locked(struct kmem_cache *= s) =20 for_each_online_cpu(cpu) { sfw =3D &per_cpu(slub_flush, cpu); - if (!has_cpu_slab(cpu, s)) { + if (!has_cpu_slab(cpu, s) && !has_pcs_used(cpu, s)) { sfw->skip =3D true; continue; } @@ -3428,19 +3849,15 @@ static int slub_cpu_dead(unsigned int cpu) struct kmem_cache *s; =20 mutex_lock(&slab_mutex); - list_for_each_entry(s, &slab_caches, list) + list_for_each_entry(s, &slab_caches, list) { __flush_cpu_slab(s, cpu); + if (s->cpu_sheaves) + __pcs_flush_all_cpu(s, cpu); + } mutex_unlock(&slab_mutex); return 0; } =20 -#else /* CONFIG_SLUB_TINY */ -static inline void flush_all_cpus_locked(struct kmem_cache *s) { } -static inline void flush_all(struct kmem_cache *s) { } -static inline void __flush_cpu_slab(struct kmem_cache *s, int cpu) { } -static inline int slub_cpu_dead(unsigned int cpu) { return 0; } -#endif /* CONFIG_SLUB_TINY */ - /* * Check if the objects in a per cpu structure fit numa * locality expectations. @@ -4191,30 +4608,240 @@ bool slab_post_alloc_hook(struct kmem_cache *s, st= ruct list_lru *lru, } =20 /* - * Inlined fastpath so that allocation functions (kmalloc, kmem_cache_allo= c) - * have the fastpath folded into their functions. So no function call - * overhead for requests that can be satisfied on the fastpath. - * - * The fastpath works by first checking if the lockless freelist can be us= ed. - * If not then __slab_alloc is called for slow processing. + * Replace the empty main sheaf with a (at least partially) full sheaf. * - * Otherwise we can simply pick the next object from the lockless free lis= t. + * Must be called with the cpu_sheaves local lock locked. If successful, r= eturns + * the pcs pointer and the local lock locked (possibly on a different cpu = than + * initially called). If not successful, returns NULL and the local lock + * unlocked. 
*/ -static __fastpath_inline void *slab_alloc_node(struct kmem_cache *s, struc= t list_lru *lru, - gfp_t gfpflags, int node, unsigned long addr, size_t orig_size) +static struct slub_percpu_sheaves * +__pcs_replace_empty_main(struct kmem_cache *s, struct slub_percpu_sheaves = *pcs, gfp_t gfp) { - void *object; - bool init =3D false; + struct slab_sheaf *empty =3D NULL; + struct slab_sheaf *full; + struct node_barn *barn; + bool can_alloc; =20 - s =3D slab_pre_alloc_hook(s, gfpflags); - if (unlikely(!s)) - return NULL; + lockdep_assert_held(this_cpu_ptr(&s->cpu_sheaves->lock)); + + if (pcs->spare && pcs->spare->size > 0) { + swap(pcs->main, pcs->spare); + return pcs; + } + + barn =3D get_barn(s); + + full =3D barn_replace_empty_sheaf(barn, pcs->main); + + if (full) { + stat(s, BARN_GET); + pcs->main =3D full; + return pcs; + } + + stat(s, BARN_GET_FAIL); + + can_alloc =3D gfpflags_allow_blocking(gfp); + + if (can_alloc) { + if (pcs->spare) { + empty =3D pcs->spare; + pcs->spare =3D NULL; + } else { + empty =3D barn_get_empty_sheaf(barn); + } + } + + local_unlock(&s->cpu_sheaves->lock); + + if (!can_alloc) + return NULL; + + if (empty) { + if (!refill_sheaf(s, empty, gfp)) { + full =3D empty; + } else { + /* + * we must be very low on memory so don't bother + * with the barn + */ + free_empty_sheaf(s, empty); + } + } else { + full =3D alloc_full_sheaf(s, gfp); + } + + if (!full) + return NULL; + + /* + * we can reach here only when gfpflags_allow_blocking + * so this must not be an irq + */ + local_lock(&s->cpu_sheaves->lock); + pcs =3D this_cpu_ptr(s->cpu_sheaves); + + /* + * If we are returning empty sheaf, we either got it from the + * barn or had to allocate one. If we are returning a full + * sheaf, it's due to racing or being migrated to a different + * cpu. Breaching the barn's sheaf limits should be thus rare + * enough so just ignore them to simplify the recovery. 
+ */ + + if (pcs->main->size =3D=3D 0) { + barn_put_empty_sheaf(barn, pcs->main); + pcs->main =3D full; + return pcs; + } + + if (!pcs->spare) { + pcs->spare =3D full; + return pcs; + } + + if (pcs->spare->size =3D=3D 0) { + barn_put_empty_sheaf(barn, pcs->spare); + pcs->spare =3D full; + return pcs; + } + + barn_put_full_sheaf(barn, full); + stat(s, BARN_PUT); + + return pcs; +} + +static __fastpath_inline +void *alloc_from_pcs(struct kmem_cache *s, gfp_t gfp) +{ + struct slub_percpu_sheaves *pcs; + void *object; + +#ifdef CONFIG_NUMA + if (static_branch_unlikely(&strict_numa)) { + if (current->mempolicy) + return NULL; + } +#endif + + if (!local_trylock(&s->cpu_sheaves->lock)) + return NULL; + + pcs =3D this_cpu_ptr(s->cpu_sheaves); + + if (unlikely(pcs->main->size =3D=3D 0)) { + pcs =3D __pcs_replace_empty_main(s, pcs, gfp); + if (unlikely(!pcs)) + return NULL; + } + + object =3D pcs->main->objects[--pcs->main->size]; + + local_unlock(&s->cpu_sheaves->lock); + + stat(s, ALLOC_PCS); + + return object; +} + +static __fastpath_inline +unsigned int alloc_from_pcs_bulk(struct kmem_cache *s, size_t size, void *= *p) +{ + struct slub_percpu_sheaves *pcs; + struct slab_sheaf *main; + unsigned int allocated =3D 0; + unsigned int batch; + +next_batch: + if (!local_trylock(&s->cpu_sheaves->lock)) + return allocated; + + pcs =3D this_cpu_ptr(s->cpu_sheaves); + + if (unlikely(pcs->main->size =3D=3D 0)) { + + struct slab_sheaf *full; + + if (pcs->spare && pcs->spare->size > 0) { + swap(pcs->main, pcs->spare); + goto do_alloc; + } + + full =3D barn_replace_empty_sheaf(get_barn(s), pcs->main); + + if (full) { + stat(s, BARN_GET); + pcs->main =3D full; + goto do_alloc; + } + + stat(s, BARN_GET_FAIL); + + local_unlock(&s->cpu_sheaves->lock); + + /* + * Once full sheaves in barn are depleted, let the bulk + * allocation continue from slab pages, otherwise we would just + * be copying arrays of pointers twice. + */ + return allocated; + } + +do_alloc: + + main =3D pcs->main; + batch =3D min(size, main->size); + + main->size -=3D batch; + memcpy(p, main->objects + main->size, batch * sizeof(void *)); + + local_unlock(&s->cpu_sheaves->lock); + + stat_add(s, ALLOC_PCS, batch); + + allocated +=3D batch; + + if (batch < size) { + p +=3D batch; + size -=3D batch; + goto next_batch; + } + + return allocated; +} + + +/* + * Inlined fastpath so that allocation functions (kmalloc, kmem_cache_allo= c) + * have the fastpath folded into their functions. So no function call + * overhead for requests that can be satisfied on the fastpath. + * + * The fastpath works by first checking if the lockless freelist can be us= ed. + * If not then __slab_alloc is called for slow processing. + * + * Otherwise we can simply pick the next object from the lockless free lis= t. 
+ */ +static __fastpath_inline void *slab_alloc_node(struct kmem_cache *s, struc= t list_lru *lru, + gfp_t gfpflags, int node, unsigned long addr, size_t orig_size) +{ + void *object; + bool init =3D false; + + s =3D slab_pre_alloc_hook(s, gfpflags); + if (unlikely(!s)) + return NULL; =20 object =3D kfence_alloc(s, orig_size, gfpflags); if (unlikely(object)) goto out; =20 - object =3D __slab_alloc_node(s, gfpflags, node, addr, orig_size); + if (s->cpu_sheaves && node =3D=3D NUMA_NO_NODE) + object =3D alloc_from_pcs(s, gfpflags); + + if (!object) + object =3D __slab_alloc_node(s, gfpflags, node, addr, orig_size); =20 maybe_wipe_obj_freeptr(s, object); init =3D slab_want_init_on_alloc(gfpflags, s); @@ -4591,6 +5218,295 @@ static void __slab_free(struct kmem_cache *s, struc= t slab *slab, discard_slab(s, slab); } =20 +/* + * pcs is locked. We should have get rid of the spare sheaf and obtained an + * empty sheaf, while the main sheaf is full. We want to install the empty= sheaf + * as a main sheaf, and make the current main sheaf a spare sheaf. + * + * However due to having relinquished the cpu_sheaves lock when obtaining + * the empty sheaf, we need to handle some unlikely but possible cases. + * + * If we put any sheaf to barn here, it's because we were interrupted or h= ave + * been migrated to a different cpu, which should be rare enough so just i= gnore + * the barn's limits to simplify the handling. + * + * An alternative scenario that gets us here is when we fail + * barn_replace_full_sheaf(), because there's no empty sheaf available in = the + * barn, so we had to allocate it by alloc_empty_sheaf(). But because we s= aw the + * limit on full sheaves was not exceeded, we assume it didn't change and = just + * put the full sheaf there. + */ +static void __pcs_install_empty_sheaf(struct kmem_cache *s, + struct slub_percpu_sheaves *pcs, struct slab_sheaf *empty) +{ + struct node_barn *barn; + + lockdep_assert_held(this_cpu_ptr(&s->cpu_sheaves->lock)); + + /* This is what we expect to find if nobody interrupted us. */ + if (likely(!pcs->spare)) { + pcs->spare =3D pcs->main; + pcs->main =3D empty; + return; + } + + barn =3D get_barn(s); + + /* + * Unlikely because if the main sheaf had space, we would have just + * freed to it. Get rid of our empty sheaf. + */ + if (pcs->main->size < s->sheaf_capacity) { + barn_put_empty_sheaf(barn, empty); + return; + } + + /* Also unlikely for the same reason */ + if (pcs->spare->size < s->sheaf_capacity) { + swap(pcs->main, pcs->spare); + barn_put_empty_sheaf(barn, empty); + return; + } + + /* + * We probably failed barn_replace_full_sheaf() due to no empty sheaf + * available there, but we allocated one, so finish the job. + */ + barn_put_full_sheaf(barn, pcs->main); + stat(s, BARN_PUT); + pcs->main =3D empty; +} + +/* + * Replace the full main sheaf with a (at least partially) empty sheaf. + * + * Must be called with the cpu_sheaves local lock locked. If successful, r= eturns + * the pcs pointer and the local lock locked (possibly on a different cpu = than + * initially called). If not successful, returns NULL and the local lock + * unlocked. 
+ */ +static struct slub_percpu_sheaves * +__pcs_replace_full_main(struct kmem_cache *s, struct slub_percpu_sheaves *= pcs) +{ + struct slab_sheaf *empty; + struct node_barn *barn; + bool put_fail; + +restart: + lockdep_assert_held(this_cpu_ptr(&s->cpu_sheaves->lock)); + + barn =3D get_barn(s); + put_fail =3D false; + + if (!pcs->spare) { + empty =3D barn_get_empty_sheaf(barn); + if (empty) { + pcs->spare =3D pcs->main; + pcs->main =3D empty; + return pcs; + } + goto alloc_empty; + } + + if (pcs->spare->size < s->sheaf_capacity) { + swap(pcs->main, pcs->spare); + return pcs; + } + + empty =3D barn_replace_full_sheaf(barn, pcs->main); + + if (!IS_ERR(empty)) { + stat(s, BARN_PUT); + pcs->main =3D empty; + return pcs; + } + + if (PTR_ERR(empty) =3D=3D -E2BIG) { + /* Since we got here, spare exists and is full */ + struct slab_sheaf *to_flush =3D pcs->spare; + + stat(s, BARN_PUT_FAIL); + + pcs->spare =3D NULL; + local_unlock(&s->cpu_sheaves->lock); + + sheaf_flush_unused(s, to_flush); + empty =3D to_flush; + goto got_empty; + } + + /* + * We could not replace full sheaf because barn had no empty + * sheaves. We can still allocate it and put the full sheaf in + * __pcs_install_empty_sheaf(), but if we fail to allocate it, + * make sure to count the fail. + */ + put_fail =3D true; + +alloc_empty: + local_unlock(&s->cpu_sheaves->lock); + + empty =3D alloc_empty_sheaf(s, GFP_NOWAIT); + if (empty) + goto got_empty; + + if (put_fail) + stat(s, BARN_PUT_FAIL); + + if (!sheaf_flush_main(s)) + return NULL; + + if (!local_trylock(&s->cpu_sheaves->lock)) + return NULL; + + pcs =3D this_cpu_ptr(s->cpu_sheaves); + + /* + * we flushed the main sheaf so it should be empty now, + * but in case we got preempted or migrated, we need to + * check again + */ + if (pcs->main->size =3D=3D s->sheaf_capacity) + goto restart; + + return pcs; + +got_empty: + if (!local_trylock(&s->cpu_sheaves->lock)) { + barn_put_empty_sheaf(barn, empty); + return NULL; + } + + pcs =3D this_cpu_ptr(s->cpu_sheaves); + __pcs_install_empty_sheaf(s, pcs, empty); + + return pcs; +} + +/* + * Free an object to the percpu sheaves. + * The object is expected to have passed slab_free_hook() already. + */ +static __fastpath_inline +bool free_to_pcs(struct kmem_cache *s, void *object) +{ + struct slub_percpu_sheaves *pcs; + + if (!local_trylock(&s->cpu_sheaves->lock)) + return false; + + pcs =3D this_cpu_ptr(s->cpu_sheaves); + + if (unlikely(pcs->main->size =3D=3D s->sheaf_capacity)) { + + pcs =3D __pcs_replace_full_main(s, pcs); + if (unlikely(!pcs)) + return false; + } + + pcs->main->objects[pcs->main->size++] =3D object; + + local_unlock(&s->cpu_sheaves->lock); + + stat(s, FREE_PCS); + + return true; +} + +/* + * Bulk free objects to the percpu sheaves. + * Unlike free_to_pcs() this includes the calls to all necessary hooks + * and the fallback to freeing to slab pages. 
+ */ +static void free_to_pcs_bulk(struct kmem_cache *s, size_t size, void **p) +{ + struct slub_percpu_sheaves *pcs; + struct slab_sheaf *main, *empty; + bool init =3D slab_want_init_on_free(s); + unsigned int batch, i =3D 0; + struct node_barn *barn; + + while (i < size) { + struct slab *slab =3D virt_to_slab(p[i]); + + memcg_slab_free_hook(s, slab, p + i, 1); + alloc_tagging_slab_free_hook(s, slab, p + i, 1); + + if (unlikely(!slab_free_hook(s, p[i], init, false))) { + p[i] =3D p[--size]; + if (!size) + return; + continue; + } + + i++; + } + +next_batch: + if (!local_trylock(&s->cpu_sheaves->lock)) + goto fallback; + + pcs =3D this_cpu_ptr(s->cpu_sheaves); + + if (likely(pcs->main->size < s->sheaf_capacity)) + goto do_free; + + barn =3D get_barn(s); + + if (!pcs->spare) { + empty =3D barn_get_empty_sheaf(barn); + if (!empty) + goto no_empty; + + pcs->spare =3D pcs->main; + pcs->main =3D empty; + goto do_free; + } + + if (pcs->spare->size < s->sheaf_capacity) { + swap(pcs->main, pcs->spare); + goto do_free; + } + + empty =3D barn_replace_full_sheaf(barn, pcs->main); + if (IS_ERR(empty)) { + stat(s, BARN_PUT_FAIL); + goto no_empty; + } + + stat(s, BARN_PUT); + pcs->main =3D empty; + +do_free: + main =3D pcs->main; + batch =3D min(size, s->sheaf_capacity - main->size); + + memcpy(main->objects + main->size, p, batch * sizeof(void *)); + main->size +=3D batch; + + local_unlock(&s->cpu_sheaves->lock); + + stat_add(s, FREE_PCS, batch); + + if (batch < size) { + p +=3D batch; + size -=3D batch; + goto next_batch; + } + + return; + +no_empty: + local_unlock(&s->cpu_sheaves->lock); + + /* + * if we depleted all empty sheaves in the barn or there are too + * many full sheaves, free the rest to slab pages + */ +fallback: + __kmem_cache_free_bulk(s, size, p); +} + #ifndef CONFIG_SLUB_TINY /* * Fastpath with forced inlining to produce a kfree and kmem_cache_free th= at @@ -4677,7 +5593,10 @@ void slab_free(struct kmem_cache *s, struct slab *sl= ab, void *object, memcg_slab_free_hook(s, slab, &object, 1); alloc_tagging_slab_free_hook(s, slab, &object, 1); =20 - if (likely(slab_free_hook(s, object, slab_want_init_on_free(s), false))) + if (unlikely(!slab_free_hook(s, object, slab_want_init_on_free(s), false)= )) + return; + + if (!s->cpu_sheaves || !free_to_pcs(s, object)) do_slab_free(s, slab, object, object, 1, addr); } =20 @@ -5273,6 +6192,15 @@ void kmem_cache_free_bulk(struct kmem_cache *s, size= _t size, void **p) if (!size) return; =20 + /* + * freeing to sheaves is so incompatible with the detached freelist so + * once we go that way, we have to do everything differently + */ + if (s && s->cpu_sheaves) { + free_to_pcs_bulk(s, size, p); + return; + } + do { struct detached_freelist df; =20 @@ -5391,7 +6319,7 @@ static int __kmem_cache_alloc_bulk(struct kmem_cache = *s, gfp_t flags, int kmem_cache_alloc_bulk_noprof(struct kmem_cache *s, gfp_t flags, size_t= size, void **p) { - int i; + unsigned int i =3D 0; =20 if (!size) return 0; @@ -5400,9 +6328,20 @@ int kmem_cache_alloc_bulk_noprof(struct kmem_cache *= s, gfp_t flags, size_t size, if (unlikely(!s)) return 0; =20 - i =3D __kmem_cache_alloc_bulk(s, flags, size, p); - if (unlikely(i =3D=3D 0)) - return 0; + if (s->cpu_sheaves) + i =3D alloc_from_pcs_bulk(s, size, p); + + if (i < size) { + /* + * If we ran out of memory, don't bother with freeing back to + * the percpu sheaves, we have bigger problems. 
+ */ + if (unlikely(__kmem_cache_alloc_bulk(s, flags, size - i, p + i) =3D=3D 0= )) { + if (i > 0) + __kmem_cache_free_bulk(s, i, p); + return 0; + } + } =20 /* * memcg and kmem_cache debug support and memory initialization. @@ -5412,11 +6351,11 @@ int kmem_cache_alloc_bulk_noprof(struct kmem_cache = *s, gfp_t flags, size_t size, slab_want_init_on_alloc(flags, s), s->object_size))) { return 0; } - return i; + + return size; } EXPORT_SYMBOL(kmem_cache_alloc_bulk_noprof); =20 - /* * Object placement in a slab is made very easy because we always start at * offset 0. If we tune the size of the object to the alignment then we can @@ -5550,7 +6489,7 @@ static inline int calculate_order(unsigned int size) } =20 static void -init_kmem_cache_node(struct kmem_cache_node *n) +init_kmem_cache_node(struct kmem_cache_node *n, struct node_barn *barn) { n->nr_partial =3D 0; spin_lock_init(&n->list_lock); @@ -5560,6 +6499,9 @@ init_kmem_cache_node(struct kmem_cache_node *n) atomic_long_set(&n->total_objects, 0); INIT_LIST_HEAD(&n->full); #endif + n->barn =3D barn; + if (barn) + barn_init(barn); } =20 #ifndef CONFIG_SLUB_TINY @@ -5590,6 +6532,26 @@ static inline int alloc_kmem_cache_cpus(struct kmem_= cache *s) } #endif /* CONFIG_SLUB_TINY */ =20 +static int init_percpu_sheaves(struct kmem_cache *s) +{ + int cpu; + + for_each_possible_cpu(cpu) { + struct slub_percpu_sheaves *pcs; + + pcs =3D per_cpu_ptr(s->cpu_sheaves, cpu); + + local_trylock_init(&pcs->lock); + + pcs->main =3D alloc_empty_sheaf(s, GFP_KERNEL); + + if (!pcs->main) + return -ENOMEM; + } + + return 0; +} + static struct kmem_cache *kmem_cache_node; =20 /* @@ -5625,7 +6587,7 @@ static void early_kmem_cache_node_alloc(int node) slab->freelist =3D get_freepointer(kmem_cache_node, n); slab->inuse =3D 1; kmem_cache_node->node[node] =3D n; - init_kmem_cache_node(n); + init_kmem_cache_node(n, NULL); inc_slabs_node(kmem_cache_node, node, slab->objects); =20 /* @@ -5641,6 +6603,13 @@ static void free_kmem_cache_nodes(struct kmem_cache = *s) struct kmem_cache_node *n; =20 for_each_kmem_cache_node(s, node, n) { + if (n->barn) { + WARN_ON(n->barn->nr_full); + WARN_ON(n->barn->nr_empty); + kfree(n->barn); + n->barn =3D NULL; + } + s->node[node] =3D NULL; kmem_cache_free(kmem_cache_node, n); } @@ -5649,6 +6618,8 @@ static void free_kmem_cache_nodes(struct kmem_cache *= s) void __kmem_cache_release(struct kmem_cache *s) { cache_random_seq_destroy(s); + if (s->cpu_sheaves) + pcs_destroy(s); #ifndef CONFIG_SLUB_TINY free_percpu(s->cpu_slab); #endif @@ -5661,18 +6632,29 @@ static int init_kmem_cache_nodes(struct kmem_cache = *s) =20 for_each_node_mask(node, slab_nodes) { struct kmem_cache_node *n; + struct node_barn *barn =3D NULL; =20 if (slab_state =3D=3D DOWN) { early_kmem_cache_node_alloc(node); continue; } + + if (s->cpu_sheaves) { + barn =3D kmalloc_node(sizeof(*barn), GFP_KERNEL, node); + + if (!barn) + return 0; + } + n =3D kmem_cache_alloc_node(kmem_cache_node, GFP_KERNEL, node); - - if (!n) + if (!n) { + kfree(barn); return 0; + } + + init_kmem_cache_node(n, barn); =20 - init_kmem_cache_node(n); s->node[node] =3D n; } return 1; @@ -5929,6 +6911,8 @@ int __kmem_cache_shutdown(struct kmem_cache *s) flush_all_cpus_locked(s); /* Attempt to free all objects */ for_each_kmem_cache_node(s, node, n) { + if (n->barn) + barn_shrink(s, n->barn); free_partial(s, n); if (n->nr_partial || node_nr_slabs(n)) return 1; @@ -6132,6 +7116,9 @@ static int __kmem_cache_do_shrink(struct kmem_cache *= s) for (i =3D 0; i < SHRINK_PROMOTE_MAX; i++) INIT_LIST_HEAD(promote + 
i); =20 + if (n->barn) + barn_shrink(s, n->barn); + spin_lock_irqsave(&n->list_lock, flags); =20 /* @@ -6211,12 +7198,24 @@ static int slab_mem_going_online_callback(int nid) */ mutex_lock(&slab_mutex); list_for_each_entry(s, &slab_caches, list) { + struct node_barn *barn =3D NULL; + /* * The structure may already exist if the node was previously * onlined and offlined. */ if (get_node(s, nid)) continue; + + if (s->cpu_sheaves) { + barn =3D kmalloc_node(sizeof(*barn), GFP_KERNEL, nid); + + if (!barn) { + ret =3D -ENOMEM; + goto out; + } + } + /* * XXX: kmem_cache_alloc_node will fallback to other nodes * since memory is not yet available from the node that @@ -6224,10 +7223,13 @@ static int slab_mem_going_online_callback(int nid) */ n =3D kmem_cache_alloc(kmem_cache_node, GFP_KERNEL); if (!n) { + kfree(barn); ret =3D -ENOMEM; goto out; } - init_kmem_cache_node(n); + + init_kmem_cache_node(n, barn); + s->node[nid] =3D n; } /* @@ -6440,6 +7442,17 @@ int do_kmem_cache_create(struct kmem_cache *s, const= char *name, =20 set_cpu_partial(s); =20 + if (args->sheaf_capacity && !IS_ENABLED(CONFIG_SLUB_TINY) + && !(s->flags & SLAB_DEBUG_FLAGS)) { + s->cpu_sheaves =3D alloc_percpu(struct slub_percpu_sheaves); + if (!s->cpu_sheaves) { + err =3D -ENOMEM; + goto out; + } + // TODO: increase capacity to grow slab_sheaf up to next kmalloc size? + s->sheaf_capacity =3D args->sheaf_capacity; + } + #ifdef CONFIG_NUMA s->remote_node_defrag_ratio =3D 1000; #endif @@ -6456,6 +7469,12 @@ int do_kmem_cache_create(struct kmem_cache *s, const= char *name, if (!alloc_kmem_cache_cpus(s)) goto out; =20 + if (s->cpu_sheaves) { + err =3D init_percpu_sheaves(s); + if (err) + goto out; + } + err =3D 0; =20 /* Mutex is not taken during early boot */ @@ -6908,6 +7927,12 @@ static ssize_t order_show(struct kmem_cache *s, char= *buf) } SLAB_ATTR_RO(order); =20 +static ssize_t sheaf_capacity_show(struct kmem_cache *s, char *buf) +{ + return sysfs_emit(buf, "%u\n", s->sheaf_capacity); +} +SLAB_ATTR_RO(sheaf_capacity); + static ssize_t min_partial_show(struct kmem_cache *s, char *buf) { return sysfs_emit(buf, "%lu\n", s->min_partial); @@ -7255,8 +8280,10 @@ static ssize_t text##_store(struct kmem_cache *s, \ } \ SLAB_ATTR(text); \ =20 +STAT_ATTR(ALLOC_PCS, alloc_cpu_sheaf); STAT_ATTR(ALLOC_FASTPATH, alloc_fastpath); STAT_ATTR(ALLOC_SLOWPATH, alloc_slowpath); +STAT_ATTR(FREE_PCS, free_cpu_sheaf); STAT_ATTR(FREE_FASTPATH, free_fastpath); STAT_ATTR(FREE_SLOWPATH, free_slowpath); STAT_ATTR(FREE_FROZEN, free_frozen); @@ -7281,6 +8308,14 @@ STAT_ATTR(CPU_PARTIAL_ALLOC, cpu_partial_alloc); STAT_ATTR(CPU_PARTIAL_FREE, cpu_partial_free); STAT_ATTR(CPU_PARTIAL_NODE, cpu_partial_node); STAT_ATTR(CPU_PARTIAL_DRAIN, cpu_partial_drain); +STAT_ATTR(SHEAF_FLUSH, sheaf_flush); +STAT_ATTR(SHEAF_REFILL, sheaf_refill); +STAT_ATTR(SHEAF_ALLOC, sheaf_alloc); +STAT_ATTR(SHEAF_FREE, sheaf_free); +STAT_ATTR(BARN_GET, barn_get); +STAT_ATTR(BARN_GET_FAIL, barn_get_fail); +STAT_ATTR(BARN_PUT, barn_put); +STAT_ATTR(BARN_PUT_FAIL, barn_put_fail); #endif /* CONFIG_SLUB_STATS */ =20 #ifdef CONFIG_KFENCE @@ -7311,6 +8346,7 @@ static struct attribute *slab_attrs[] =3D { &object_size_attr.attr, &objs_per_slab_attr.attr, &order_attr.attr, + &sheaf_capacity_attr.attr, &min_partial_attr.attr, &cpu_partial_attr.attr, &objects_partial_attr.attr, @@ -7342,8 +8378,10 @@ static struct attribute *slab_attrs[] =3D { &remote_node_defrag_ratio_attr.attr, #endif #ifdef CONFIG_SLUB_STATS + &alloc_cpu_sheaf_attr.attr, &alloc_fastpath_attr.attr, &alloc_slowpath_attr.attr, + 
&free_cpu_sheaf_attr.attr, &free_fastpath_attr.attr, &free_slowpath_attr.attr, &free_frozen_attr.attr, @@ -7368,6 +8406,14 @@ static struct attribute *slab_attrs[] =3D { &cpu_partial_free_attr.attr, &cpu_partial_node_attr.attr, &cpu_partial_drain_attr.attr, + &sheaf_flush_attr.attr, + &sheaf_refill_attr.attr, + &sheaf_alloc_attr.attr, + &sheaf_free_attr.attr, + &barn_get_attr.attr, + &barn_get_fail_attr.attr, + &barn_put_attr.attr, + &barn_put_fail_attr.attr, #endif #ifdef CONFIG_FAILSLAB &failslab_attr.attr, --=20 2.51.0 From nobody Thu Oct 2 22:43:17 2025 Received: from smtp-out2.suse.de (smtp-out2.suse.de [195.135.223.131]) (using TLSv1.2 with cipher ECDHE-RSA-AES128-GCM-SHA256 (128/128 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id BB92531355E for ; Wed, 10 Sep 2025 08:02:01 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; arc=none smtp.client-ip=195.135.223.131 ARC-Seal: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1757491324; cv=none; b=kXri8mmjffdwhanBmkT0JTYqUjdn+ouYxwsfTKJWY/w01CbqbjmJXByq13pTrmImIHC7+6PWJqA9eybcwI++lt58pa47ncmTqY3H49Gw3WvxmKJzgbhkvJ/2S3PlPRAhUGune1XNCqYCxxQWCudXaijfZ8HabKfQHAjhW5bd9t0= ARC-Message-Signature: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1757491324; c=relaxed/simple; bh=wYgKXWnCdjOnyFHq8xUqSmBtTlyNnkccd9peMZmVNxA=; h=From:Date:Subject:MIME-Version:Content-Type:Message-Id:References: In-Reply-To:To:Cc; b=gUmS2sun9H6VksZBEdlS/FpHqcgsBtpEjZeXTzsugBVTFLgxZR2BTtI+zV5hNc9XfRhUQhRsAHxsIW1sT6b4ZvAJoY4ITv+efzDVHCNkwiRSHZt7fZ+E1vE9KGUpp6GzK1aGG+7WJagrzaM11EQhPS5bQMeJBwbZm+g3jGKTuUs= ARC-Authentication-Results: i=1; smtp.subspace.kernel.org; dmarc=none (p=none dis=none) header.from=suse.cz; spf=pass smtp.mailfrom=suse.cz; dkim=pass (1024-bit key) header.d=suse.cz header.i=@suse.cz header.b=u1/IunZs; dkim=permerror (0-bit key) header.d=suse.cz header.i=@suse.cz header.b=QRPwuCwk; dkim=pass (1024-bit key) header.d=suse.cz header.i=@suse.cz header.b=Wi/pl7pC; dkim=permerror (0-bit key) header.d=suse.cz header.i=@suse.cz header.b=rlA0zlrL; arc=none smtp.client-ip=195.135.223.131 Authentication-Results: smtp.subspace.kernel.org; dmarc=none (p=none dis=none) header.from=suse.cz Authentication-Results: smtp.subspace.kernel.org; spf=pass smtp.mailfrom=suse.cz Authentication-Results: smtp.subspace.kernel.org; dkim=pass (1024-bit key) header.d=suse.cz header.i=@suse.cz header.b="u1/IunZs"; dkim=permerror (0-bit key) header.d=suse.cz header.i=@suse.cz header.b="QRPwuCwk"; dkim=pass (1024-bit key) header.d=suse.cz header.i=@suse.cz header.b="Wi/pl7pC"; dkim=permerror (0-bit key) header.d=suse.cz header.i=@suse.cz header.b="rlA0zlrL" Received: from imap1.dmz-prg2.suse.org (unknown [10.150.64.97]) (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits) key-exchange X25519 server-signature RSA-PSS (4096 bits) server-digest SHA256) (No client certificate requested) by smtp-out2.suse.de (Postfix) with ESMTPS id 1A1FE5CABD; Wed, 10 Sep 2025 08:01:06 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.cz; s=susede2_rsa; t=1757491267; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc: mime-version:mime-version:content-type:content-type: content-transfer-encoding:content-transfer-encoding: in-reply-to:in-reply-to:references:references; bh=RBRID3XCadovWwOC1yDjkbntDt14oP+Yib+L6r+EoMM=; b=u1/IunZspPhefI8jb4XSztzfR926ogR9rI/LK+s4/Ri0i2MdZNJuo0nyfd9O4RCreGPFfe BN86/BsK0FJuEjTGKT7laUugiXnAiqq85RGiBjzgQuVu955o3sZbYx6huLIOo11Hc2pmc1 
VP9T+oUcxOfiB9FsnA8vGQS9N/ld0MY= DKIM-Signature: v=1; a=ed25519-sha256; c=relaxed/relaxed; d=suse.cz; s=susede2_ed25519; t=1757491267; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc: mime-version:mime-version:content-type:content-type: content-transfer-encoding:content-transfer-encoding: in-reply-to:in-reply-to:references:references; bh=RBRID3XCadovWwOC1yDjkbntDt14oP+Yib+L6r+EoMM=; b=QRPwuCwkR3icrm3hwpdcGZhLNdMlaMEo29lWQoHf2kn9tRi6CfQ9suh/afThPKQ0w5YXkU ye0eGyKJI4oNmoDA== Authentication-Results: smtp-out2.suse.de; none DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.cz; s=susede2_rsa; t=1757491266; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc: mime-version:mime-version:content-type:content-type: content-transfer-encoding:content-transfer-encoding: in-reply-to:in-reply-to:references:references; bh=RBRID3XCadovWwOC1yDjkbntDt14oP+Yib+L6r+EoMM=; b=Wi/pl7pCncZEun0bI5Wt5MwlJdkLCUQeJJh7jYsox441YSz6wSk72YMENMPn/oTMFn7eYj dRPFRwfiK9YmehaBmW4d21WQtF8mjhSpPfrHffk5ZVqmd/0sHpecg+I0Xn2uvP5+D6jLhU HDWxsx051CL9cnGyJf7RAfpbNtYQl7o= DKIM-Signature: v=1; a=ed25519-sha256; c=relaxed/relaxed; d=suse.cz; s=susede2_ed25519; t=1757491266; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc: mime-version:mime-version:content-type:content-type: content-transfer-encoding:content-transfer-encoding: in-reply-to:in-reply-to:references:references; bh=RBRID3XCadovWwOC1yDjkbntDt14oP+Yib+L6r+EoMM=; b=rlA0zlrLRZ/0SW6eDnFXZr1ybZjcHDTxTIGOQhJ3f3dgsAUZXUXgxSE2vWjef96+M54Nmo JoChmFTODBCeg5AQ== Received: from imap1.dmz-prg2.suse.org (localhost [127.0.0.1]) (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits) key-exchange X25519 server-signature RSA-PSS (4096 bits) server-digest SHA256) (No client certificate requested) by imap1.dmz-prg2.suse.org (Postfix) with ESMTPS id 00EE413AD1; Wed, 10 Sep 2025 08:01:05 +0000 (UTC) Received: from dovecot-director2.suse.de ([2a07:de40:b281:106:10:150:64:167]) by imap1.dmz-prg2.suse.org with ESMTPSA id iK6NO0EwwWgGJAAAD6G6ig (envelope-from ); Wed, 10 Sep 2025 08:01:05 +0000 From: Vlastimil Babka Date: Wed, 10 Sep 2025 10:01:06 +0200 Subject: [PATCH v8 04/23] slab: add sheaf support for batching kfree_rcu() operations Precedence: bulk X-Mailing-List: linux-kernel@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 Content-Type: text/plain; charset="utf-8" Content-Transfer-Encoding: quoted-printable Message-Id: <20250910-slub-percpu-caches-v8-4-ca3099d8352c@suse.cz> References: <20250910-slub-percpu-caches-v8-0-ca3099d8352c@suse.cz> In-Reply-To: <20250910-slub-percpu-caches-v8-0-ca3099d8352c@suse.cz> To: Suren Baghdasaryan , "Liam R. 
Howlett" , Christoph Lameter , David Rientjes Cc: Roman Gushchin , Harry Yoo , Uladzislau Rezki , Sidhartha Kumar , linux-mm@kvack.org, linux-kernel@vger.kernel.org, rcu@vger.kernel.org, maple-tree@lists.infradead.org, vbabka@suse.cz X-Mailer: b4 0.14.2 X-Spam-Level: X-Spamd-Result: default: False [-4.30 / 50.00]; BAYES_HAM(-3.00)[100.00%]; NEURAL_HAM_LONG(-1.00)[-1.000]; NEURAL_HAM_SHORT(-0.20)[-0.999]; MIME_GOOD(-0.10)[text/plain]; ARC_NA(0.00)[]; FUZZY_RATELIMITED(0.00)[rspamd.com]; MIME_TRACE(0.00)[0:+]; RCPT_COUNT_TWELVE(0.00)[13]; RCVD_TLS_ALL(0.00)[]; RCVD_VIA_SMTP_AUTH(0.00)[]; MID_RHS_MATCH_FROM(0.00)[]; FREEMAIL_ENVRCPT(0.00)[gmail.com]; DKIM_SIGNED(0.00)[suse.cz:s=susede2_rsa,suse.cz:s=susede2_ed25519]; FROM_HAS_DN(0.00)[]; FREEMAIL_CC(0.00)[linux.dev,oracle.com,gmail.com,kvack.org,vger.kernel.org,lists.infradead.org,suse.cz]; TO_DN_SOME(0.00)[]; FROM_EQ_ENVFROM(0.00)[]; TO_MATCH_ENVRCPT_ALL(0.00)[]; RCVD_COUNT_TWO(0.00)[2]; DBL_BLOCKED_OPENRESOLVER(0.00)[suse.cz:email,suse.cz:mid] X-Spam-Flag: NO X-Spam-Score: -4.30 Extend the sheaf infrastructure for more efficient kfree_rcu() handling. For caches with sheaves, on each cpu maintain a rcu_free sheaf in addition to main and spare sheaves. kfree_rcu() operations will try to put objects on this sheaf. Once full, the sheaf is detached and submitted to call_rcu() with a handler that will try to put it in the barn, or flush to slab pages using bulk free, when the barn is full. Then a new empty sheaf must be obtained to put more objects there. It's possible that no free sheaves are available to use for a new rcu_free sheaf, and the allocation in kfree_rcu() context can only use GFP_NOWAIT and thus may fail. In that case, fall back to the existing kfree_rcu() implementation. Expected advantages: - batching the kfree_rcu() operations, that could eventually replace the existing batching - sheaves can be reused for allocations via barn instead of being flushed to slabs, which is more efficient - this includes cases where only some cpus are allowed to process rcu callbacks (Android) Possible disadvantage: - objects might be waiting for more than their grace period (it is determined by the last object freed into the sheaf), increasing memory usage - but the existing batching does that too. Only implement this for CONFIG_KVFREE_RCU_BATCHED as the tiny implementation favors smaller memory footprint over performance. Also for now skip the usage of rcu sheaf for CONFIG_PREEMPT_RT as the contexts where kfree_rcu() is called might not be compatible with taking a barn spinlock or a GFP_NOWAIT allocation of a new sheaf taking a spinlock - the current kfree_rcu() implementation avoids doing that. Teach kvfree_rcu_barrier() to flush all rcu_free sheaves from all caches that have them. This is not a cheap operation, but the barrier usage is rare - currently kmem_cache_destroy() or on module unload. Add CONFIG_SLUB_STATS counters free_rcu_sheaf and free_rcu_sheaf_fail to count how many kfree_rcu() used the rcu_free sheaf successfully and how many had to fall back to the existing implementation. 
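For illustration, a minimal caller-side sketch follows; the struct, cache and function names below are placeholders and not part of this patch. Callers need no changes to benefit from the rcu_free sheaf:

  #include <linux/slab.h>
  #include <linux/rcupdate.h>

  struct foo {
          int payload;
          struct rcu_head rcu;
  };

  /* assume foo_cache was created with a non-zero sheaf_capacity */
  static void drop_foo(struct foo *f)
  {
          /*
           * On !PREEMPT_RT with CONFIG_KVFREE_RCU_BATCHED this free may be
           * batched into the cpu's rcu_free sheaf; if no empty sheaf can be
           * obtained (the GFP_NOWAIT allocation failed), it transparently
           * falls back to the existing kfree_rcu() batching.
           */
          kfree_rcu(f, rcu);
  }
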
Signed-off-by: Vlastimil Babka Reviewed-by: Harry Yoo --- mm/slab.h | 3 + mm/slab_common.c | 26 ++++++ mm/slub.c | 266 +++++++++++++++++++++++++++++++++++++++++++++++++++= +++- 3 files changed, 293 insertions(+), 2 deletions(-) diff --git a/mm/slab.h b/mm/slab.h index 206987ce44a4d053ebe3b5e50784d2dd23822cd1..e82e51c44bd00042d433ac8b46c= 2b4bbbdded9b1 100644 --- a/mm/slab.h +++ b/mm/slab.h @@ -435,6 +435,9 @@ static inline bool is_kmalloc_normal(struct kmem_cache = *s) return !(s->flags & (SLAB_CACHE_DMA|SLAB_ACCOUNT|SLAB_RECLAIM_ACCOUNT)); } =20 +bool __kfree_rcu_sheaf(struct kmem_cache *s, void *obj); +void flush_all_rcu_sheaves(void); + #define SLAB_CORE_FLAGS (SLAB_HWCACHE_ALIGN | SLAB_CACHE_DMA | \ SLAB_CACHE_DMA32 | SLAB_PANIC | \ SLAB_TYPESAFE_BY_RCU | SLAB_DEBUG_OBJECTS | \ diff --git a/mm/slab_common.c b/mm/slab_common.c index e2b197e47866c30acdbd1fee4159f262a751c5a7..005a4319c06a01d2b616a75396f= cc43766a62ddb 100644 --- a/mm/slab_common.c +++ b/mm/slab_common.c @@ -1608,6 +1608,27 @@ static void kfree_rcu_work(struct work_struct *work) kvfree_rcu_list(head); } =20 +static bool kfree_rcu_sheaf(void *obj) +{ + struct kmem_cache *s; + struct folio *folio; + struct slab *slab; + + if (is_vmalloc_addr(obj)) + return false; + + folio =3D virt_to_folio(obj); + if (unlikely(!folio_test_slab(folio))) + return false; + + slab =3D folio_slab(folio); + s =3D slab->slab_cache; + if (s->cpu_sheaves) + return __kfree_rcu_sheaf(s, obj); + + return false; +} + static bool need_offload_krc(struct kfree_rcu_cpu *krcp) { @@ -1952,6 +1973,9 @@ void kvfree_call_rcu(struct rcu_head *head, void *ptr) if (!head) might_sleep(); =20 + if (!IS_ENABLED(CONFIG_PREEMPT_RT) && kfree_rcu_sheaf(ptr)) + return; + // Queue the object but don't yet schedule the batch. if (debug_rcu_head_queue(ptr)) { // Probable double kfree_rcu(), just leak. @@ -2026,6 +2050,8 @@ void kvfree_rcu_barrier(void) bool queued; int i, cpu; =20 + flush_all_rcu_sheaves(); + /* * Firstly we detach objects and queue them over an RCU-batch * for all CPUs. Finally queued works are flushed for each CPU. 
diff --git a/mm/slub.c b/mm/slub.c index cba188b7e04ddf86debf9bc27a2f725db1b2056e..19cd8444ae5d210c77ae767912c= a1ff3fc69c2a8 100644 --- a/mm/slub.c +++ b/mm/slub.c @@ -367,6 +367,8 @@ enum stat_item { ALLOC_FASTPATH, /* Allocation from cpu slab */ ALLOC_SLOWPATH, /* Allocation by getting a new cpu slab */ FREE_PCS, /* Free to percpu sheaf */ + FREE_RCU_SHEAF, /* Free to rcu_free sheaf */ + FREE_RCU_SHEAF_FAIL, /* Failed to free to a rcu_free sheaf */ FREE_FASTPATH, /* Free to cpu slab */ FREE_SLOWPATH, /* Freeing not to cpu slab */ FREE_FROZEN, /* Freeing to frozen slab */ @@ -461,6 +463,7 @@ struct slab_sheaf { struct rcu_head rcu_head; struct list_head barn_list; }; + struct kmem_cache *cache; unsigned int size; void *objects[]; }; @@ -469,6 +472,7 @@ struct slub_percpu_sheaves { local_trylock_t lock; struct slab_sheaf *main; /* never NULL when unlocked */ struct slab_sheaf *spare; /* empty or full, may be NULL */ + struct slab_sheaf *rcu_free; /* for batching kfree_rcu() */ }; =20 /* @@ -2531,6 +2535,8 @@ static struct slab_sheaf *alloc_empty_sheaf(struct km= em_cache *s, gfp_t gfp) if (unlikely(!sheaf)) return NULL; =20 + sheaf->cache =3D s; + stat(s, SHEAF_ALLOC); =20 return sheaf; @@ -2655,6 +2661,43 @@ static void sheaf_flush_unused(struct kmem_cache *s,= struct slab_sheaf *sheaf) sheaf->size =3D 0; } =20 +static void __rcu_free_sheaf_prepare(struct kmem_cache *s, + struct slab_sheaf *sheaf) +{ + bool init =3D slab_want_init_on_free(s); + void **p =3D &sheaf->objects[0]; + unsigned int i =3D 0; + + while (i < sheaf->size) { + struct slab *slab =3D virt_to_slab(p[i]); + + memcg_slab_free_hook(s, slab, p + i, 1); + alloc_tagging_slab_free_hook(s, slab, p + i, 1); + + if (unlikely(!slab_free_hook(s, p[i], init, true))) { + p[i] =3D p[--sheaf->size]; + continue; + } + + i++; + } +} + +static void rcu_free_sheaf_nobarn(struct rcu_head *head) +{ + struct slab_sheaf *sheaf; + struct kmem_cache *s; + + sheaf =3D container_of(head, struct slab_sheaf, rcu_head); + s =3D sheaf->cache; + + __rcu_free_sheaf_prepare(s, sheaf); + + sheaf_flush_unused(s, sheaf); + + free_empty_sheaf(s, sheaf); +} + /* * Caller needs to make sure migration is disabled in order to fully flush * single cpu's sheaves @@ -2667,7 +2710,7 @@ static void sheaf_flush_unused(struct kmem_cache *s, = struct slab_sheaf *sheaf) static void pcs_flush_all(struct kmem_cache *s) { struct slub_percpu_sheaves *pcs; - struct slab_sheaf *spare; + struct slab_sheaf *spare, *rcu_free; =20 local_lock(&s->cpu_sheaves->lock); pcs =3D this_cpu_ptr(s->cpu_sheaves); @@ -2675,6 +2718,9 @@ static void pcs_flush_all(struct kmem_cache *s) spare =3D pcs->spare; pcs->spare =3D NULL; =20 + rcu_free =3D pcs->rcu_free; + pcs->rcu_free =3D NULL; + local_unlock(&s->cpu_sheaves->lock); =20 if (spare) { @@ -2682,6 +2728,9 @@ static void pcs_flush_all(struct kmem_cache *s) free_empty_sheaf(s, spare); } =20 + if (rcu_free) + call_rcu(&rcu_free->rcu_head, rcu_free_sheaf_nobarn); + sheaf_flush_main(s); } =20 @@ -2698,6 +2747,11 @@ static void __pcs_flush_all_cpu(struct kmem_cache *s= , unsigned int cpu) free_empty_sheaf(s, pcs->spare); pcs->spare =3D NULL; } + + if (pcs->rcu_free) { + call_rcu(&pcs->rcu_free->rcu_head, rcu_free_sheaf_nobarn); + pcs->rcu_free =3D NULL; + } } =20 static void pcs_destroy(struct kmem_cache *s) @@ -2723,6 +2777,7 @@ static void pcs_destroy(struct kmem_cache *s) */ =20 WARN_ON(pcs->spare); + WARN_ON(pcs->rcu_free); =20 if (!WARN_ON(pcs->main->size)) { free_empty_sheaf(s, pcs->main); @@ -3780,7 +3835,7 @@ static bool has_pcs_used(int 
cpu, struct kmem_cache *= s) =20 pcs =3D per_cpu_ptr(s->cpu_sheaves, cpu); =20 - return (pcs->spare || pcs->main->size); + return (pcs->spare || pcs->rcu_free || pcs->main->size); } =20 /* @@ -3840,6 +3895,80 @@ static void flush_all(struct kmem_cache *s) cpus_read_unlock(); } =20 +static void flush_rcu_sheaf(struct work_struct *w) +{ + struct slub_percpu_sheaves *pcs; + struct slab_sheaf *rcu_free; + struct slub_flush_work *sfw; + struct kmem_cache *s; + + sfw =3D container_of(w, struct slub_flush_work, work); + s =3D sfw->s; + + local_lock(&s->cpu_sheaves->lock); + pcs =3D this_cpu_ptr(s->cpu_sheaves); + + rcu_free =3D pcs->rcu_free; + pcs->rcu_free =3D NULL; + + local_unlock(&s->cpu_sheaves->lock); + + if (rcu_free) + call_rcu(&rcu_free->rcu_head, rcu_free_sheaf_nobarn); +} + + +/* needed for kvfree_rcu_barrier() */ +void flush_all_rcu_sheaves() +{ + struct slub_percpu_sheaves *pcs; + struct slub_flush_work *sfw; + struct kmem_cache *s; + bool flushed =3D false; + unsigned int cpu; + + cpus_read_lock(); + mutex_lock(&slab_mutex); + + list_for_each_entry(s, &slab_caches, list) { + if (!s->cpu_sheaves) + continue; + + mutex_lock(&flush_lock); + + for_each_online_cpu(cpu) { + sfw =3D &per_cpu(slub_flush, cpu); + pcs =3D per_cpu_ptr(s->cpu_sheaves, cpu); + + if (!pcs->rcu_free || !pcs->rcu_free->size) { + sfw->skip =3D true; + continue; + } + + INIT_WORK(&sfw->work, flush_rcu_sheaf); + sfw->skip =3D false; + sfw->s =3D s; + queue_work_on(cpu, flushwq, &sfw->work); + flushed =3D true; + } + + for_each_online_cpu(cpu) { + sfw =3D &per_cpu(slub_flush, cpu); + if (sfw->skip) + continue; + flush_work(&sfw->work); + } + + mutex_unlock(&flush_lock); + } + + mutex_unlock(&slab_mutex); + cpus_read_unlock(); + + if (flushed) + rcu_barrier(); +} + /* * Use the cpu notifier to insure that the cpu slabs are flushed when * necessary. @@ -5413,6 +5542,130 @@ bool free_to_pcs(struct kmem_cache *s, void *object) return true; } =20 +static void rcu_free_sheaf(struct rcu_head *head) +{ + struct slab_sheaf *sheaf; + struct node_barn *barn; + struct kmem_cache *s; + + sheaf =3D container_of(head, struct slab_sheaf, rcu_head); + + s =3D sheaf->cache; + + /* + * This may remove some objects due to slab_free_hook() returning false, + * so that the sheaf might no longer be completely full. But it's easier + * to handle it as full (unless it became completely empty), as the code + * handles it fine. The only downside is that sheaf will serve fewer + * allocations when reused. It only happens due to debugging, which is a + * performance hit anyway. + */ + __rcu_free_sheaf_prepare(s, sheaf); + + barn =3D get_node(s, numa_mem_id())->barn; + + /* due to slab_free_hook() */ + if (unlikely(sheaf->size =3D=3D 0)) + goto empty; + + /* + * Checking nr_full/nr_empty outside lock avoids contention in case the + * barn is at the respective limit. Due to the race we might go over the + * limit but that should be rare and harmless. 
+ */ + + if (data_race(barn->nr_full) < MAX_FULL_SHEAVES) { + stat(s, BARN_PUT); + barn_put_full_sheaf(barn, sheaf); + return; + } + + stat(s, BARN_PUT_FAIL); + sheaf_flush_unused(s, sheaf); + +empty: + if (data_race(barn->nr_empty) < MAX_EMPTY_SHEAVES) { + barn_put_empty_sheaf(barn, sheaf); + return; + } + + free_empty_sheaf(s, sheaf); +} + +bool __kfree_rcu_sheaf(struct kmem_cache *s, void *obj) +{ + struct slub_percpu_sheaves *pcs; + struct slab_sheaf *rcu_sheaf; + + if (!local_trylock(&s->cpu_sheaves->lock)) + goto fail; + + pcs =3D this_cpu_ptr(s->cpu_sheaves); + + if (unlikely(!pcs->rcu_free)) { + + struct slab_sheaf *empty; + struct node_barn *barn; + + if (pcs->spare && pcs->spare->size =3D=3D 0) { + pcs->rcu_free =3D pcs->spare; + pcs->spare =3D NULL; + goto do_free; + } + + barn =3D get_barn(s); + + empty =3D barn_get_empty_sheaf(barn); + + if (empty) { + pcs->rcu_free =3D empty; + goto do_free; + } + + local_unlock(&s->cpu_sheaves->lock); + + empty =3D alloc_empty_sheaf(s, GFP_NOWAIT); + + if (!empty) + goto fail; + + if (!local_trylock(&s->cpu_sheaves->lock)) { + barn_put_empty_sheaf(barn, empty); + goto fail; + } + + pcs =3D this_cpu_ptr(s->cpu_sheaves); + + if (unlikely(pcs->rcu_free)) + barn_put_empty_sheaf(barn, empty); + else + pcs->rcu_free =3D empty; + } + +do_free: + + rcu_sheaf =3D pcs->rcu_free; + + rcu_sheaf->objects[rcu_sheaf->size++] =3D obj; + + if (likely(rcu_sheaf->size < s->sheaf_capacity)) + rcu_sheaf =3D NULL; + else + pcs->rcu_free =3D NULL; + + local_unlock(&s->cpu_sheaves->lock); + + if (rcu_sheaf) + call_rcu(&rcu_sheaf->rcu_head, rcu_free_sheaf); + + stat(s, FREE_RCU_SHEAF); + return true; + +fail: + stat(s, FREE_RCU_SHEAF_FAIL); + return false; +} + /* * Bulk free objects to the percpu sheaves. * Unlike free_to_pcs() this includes the calls to all necessary hooks @@ -6909,6 +7162,11 @@ int __kmem_cache_shutdown(struct kmem_cache *s) struct kmem_cache_node *n; =20 flush_all_cpus_locked(s); + + /* we might have rcu sheaves in flight */ + if (s->cpu_sheaves) + rcu_barrier(); + /* Attempt to free all objects */ for_each_kmem_cache_node(s, node, n) { if (n->barn) @@ -8284,6 +8542,8 @@ STAT_ATTR(ALLOC_PCS, alloc_cpu_sheaf); STAT_ATTR(ALLOC_FASTPATH, alloc_fastpath); STAT_ATTR(ALLOC_SLOWPATH, alloc_slowpath); STAT_ATTR(FREE_PCS, free_cpu_sheaf); +STAT_ATTR(FREE_RCU_SHEAF, free_rcu_sheaf); +STAT_ATTR(FREE_RCU_SHEAF_FAIL, free_rcu_sheaf_fail); STAT_ATTR(FREE_FASTPATH, free_fastpath); STAT_ATTR(FREE_SLOWPATH, free_slowpath); STAT_ATTR(FREE_FROZEN, free_frozen); @@ -8382,6 +8642,8 @@ static struct attribute *slab_attrs[] =3D { &alloc_fastpath_attr.attr, &alloc_slowpath_attr.attr, &free_cpu_sheaf_attr.attr, + &free_rcu_sheaf_attr.attr, + &free_rcu_sheaf_fail_attr.attr, &free_fastpath_attr.attr, &free_slowpath_attr.attr, &free_frozen_attr.attr, --=20 2.51.0 From nobody Thu Oct 2 22:43:17 2025 Received: from smtp-out1.suse.de (smtp-out1.suse.de [195.135.223.130]) (using TLSv1.2 with cipher ECDHE-RSA-AES128-GCM-SHA256 (128/128 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id 74D4B30FC24 for ; Wed, 10 Sep 2025 08:01:20 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; arc=none smtp.client-ip=195.135.223.130 ARC-Seal: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1757491282; cv=none; b=lXDpJjHciJbPQlXKDw8aZ/cC5Zy1ITMfVLX8M8ry7uxsAdz+fWjnyBiDRvd1MSfuxeY5zpMIrJ1uDTD8zeaZF+XOxgZELIKbZdbc1oG0ZdFQp8nOBzUlQPejmho8p7HWGTG/OIqrr6Be91FDOF93wM5J+TcpjUW1v5AOQ/uCuPQ= ARC-Message-Signature: i=1; a=rsa-sha256; 
mime-version:mime-version:content-type:content-type: content-transfer-encoding:content-transfer-encoding: in-reply-to:in-reply-to:references:references; bh=WIbxX8nViD7ZuHIyrExTg6YKvmTu+ieBiohu6wtWCjo=; b=onJHf6KjB53WpN8wiz8MNCJT4fG3q2Jf8g+rZaJA9D5Imxu9ftHZu4iUFv/eUNbI/E9KSf 7BmPgK7XLE6BuqDg== Received: from imap1.dmz-prg2.suse.org (localhost [127.0.0.1]) (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits) key-exchange X25519 server-signature RSA-PSS (4096 bits) server-digest SHA256) (No client certificate requested) by imap1.dmz-prg2.suse.org (Postfix) with ESMTPS id 14E9A13AD5; Wed, 10 Sep 2025 08:01:06 +0000 (UTC) Received: from dovecot-director2.suse.de ([2a07:de40:b281:106:10:150:64:167]) by imap1.dmz-prg2.suse.org with ESMTPSA id MNXSBEIwwWgGJAAAD6G6ig (envelope-from ); Wed, 10 Sep 2025 08:01:06 +0000 From: Vlastimil Babka Date: Wed, 10 Sep 2025 10:01:07 +0200 Subject: [PATCH v8 05/23] slab: sheaf prefilling for guaranteed allocations Precedence: bulk X-Mailing-List: linux-kernel@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 Content-Type: text/plain; charset="utf-8" Content-Transfer-Encoding: quoted-printable Message-Id: <20250910-slub-percpu-caches-v8-5-ca3099d8352c@suse.cz> References: <20250910-slub-percpu-caches-v8-0-ca3099d8352c@suse.cz> In-Reply-To: <20250910-slub-percpu-caches-v8-0-ca3099d8352c@suse.cz> To: Suren Baghdasaryan , "Liam R. Howlett" , Christoph Lameter , David Rientjes Cc: Roman Gushchin , Harry Yoo , Uladzislau Rezki , Sidhartha Kumar , linux-mm@kvack.org, linux-kernel@vger.kernel.org, rcu@vger.kernel.org, maple-tree@lists.infradead.org, vbabka@suse.cz X-Mailer: b4 0.14.2 X-Spam-Level: X-Spamd-Result: default: False [-4.30 / 50.00]; BAYES_HAM(-3.00)[100.00%]; NEURAL_HAM_LONG(-1.00)[-1.000]; NEURAL_HAM_SHORT(-0.20)[-0.999]; MIME_GOOD(-0.10)[text/plain]; RCVD_VIA_SMTP_AUTH(0.00)[]; ARC_NA(0.00)[]; FREEMAIL_ENVRCPT(0.00)[gmail.com]; MIME_TRACE(0.00)[0:+]; FUZZY_RATELIMITED(0.00)[rspamd.com]; TO_DN_SOME(0.00)[]; RCPT_COUNT_TWELVE(0.00)[13]; RCVD_TLS_ALL(0.00)[]; MID_RHS_MATCH_FROM(0.00)[]; DKIM_SIGNED(0.00)[suse.cz:s=susede2_rsa,suse.cz:s=susede2_ed25519]; FROM_HAS_DN(0.00)[]; FREEMAIL_CC(0.00)[linux.dev,oracle.com,gmail.com,kvack.org,vger.kernel.org,lists.infradead.org,suse.cz]; R_RATELIMIT(0.00)[to_ip_from(RLwn5r54y1cp81no5tmbbew5oc)]; FROM_EQ_ENVFROM(0.00)[]; TO_MATCH_ENVRCPT_ALL(0.00)[]; RCVD_COUNT_TWO(0.00)[2]; DBL_BLOCKED_OPENRESOLVER(0.00)[suse.cz:email,suse.cz:mid] X-Spam-Flag: NO X-Spam-Score: -4.30 Add functions for efficient guaranteed allocations e.g. in a critical section that cannot sleep, when the exact number of allocations is not known beforehand, but an upper limit can be calculated. kmem_cache_prefill_sheaf() returns a sheaf containing at least given number of objects. kmem_cache_alloc_from_sheaf() will allocate an object from the sheaf and is guaranteed not to fail until depleted. kmem_cache_return_sheaf() is for giving the sheaf back to the slab allocator after the critical section. This will also attempt to refill it to cache's sheaf capacity for better efficiency of sheaves handling, but it's not stricly necessary to succeed. kmem_cache_refill_sheaf() can be used to refill a previously obtained sheaf to requested size. If the current size is sufficient, it does nothing. If the requested size exceeds cache's sheaf_capacity and the sheaf's current capacity, the sheaf will be replaced with a new one, hence the indirect pointer parameter. kmem_cache_sheaf_size() can be used to query the current size. 
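For illustration, the intended calling pattern looks roughly like the sketch below; my_cache, my_lock and the requested size are placeholders, and error handling is trimmed:

  #include <linux/slab.h>
  #include <linux/spinlock.h>

  static int use_prefilled_sheaf(struct kmem_cache *my_cache, spinlock_t *my_lock)
  {
          struct slab_sheaf *sheaf;
          void *obj;

          /* outside the critical section: reserve at least 10 objects */
          sheaf = kmem_cache_prefill_sheaf(my_cache, GFP_KERNEL, 10);
          if (!sheaf)
                  return -ENOMEM;

          spin_lock(my_lock);
          /*
           * Cannot fail until the reserved objects are depleted; the gfp
           * argument here only conveys __GFP_ZERO or __GFP_ACCOUNT.
           */
          obj = kmem_cache_alloc_from_sheaf(my_cache, __GFP_ZERO, sheaf);
          /* ... use obj, allocate more as needed ... */
          spin_unlock(my_lock);

          /* hand the sheaf back for reuse (or flushing if the barn is full) */
          kmem_cache_return_sheaf(my_cache, GFP_KERNEL, sheaf);

          return 0;
  }
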
The implementation supports requesting sizes that exceed cache's sheaf_capacity, but it is not efficient - such "oversize" sheaves are allocated fresh in kmem_cache_prefill_sheaf() and flushed and freed immediately by kmem_cache_return_sheaf(). kmem_cache_refill_sheaf() might be especially ineffective when replacing a sheaf with a new one of a larger capacity. It is therefore better to size cache's sheaf_capacity accordingly to make oversize sheaves exceptional. CONFIG_SLUB_STATS counters are added for sheaf prefill and return operations. A prefill or return is considered _fast when it is able to grab or return a percpu spare sheaf (even if the sheaf needs a refill to satisfy the request, as those should amortize over time), and _slow otherwise (when the barn or even sheaf allocation/freeing has to be involved). sheaf_prefill_oversize is provided to determine how many prefills were oversize (counter for oversize returns is not necessary as all oversize refills result in oversize returns). When slub_debug is enabled for a cache with sheaves, no percpu sheaves exist for it, but the prefill functionality is still provided simply by all prefilled sheaves becoming oversize. If percpu sheaves are not created for a cache due to not passing the sheaf_capacity argument on cache creation, the prefills also work through oversize sheaves, but there's a WARN_ON_ONCE() to indicate the omission. Reviewed-by: Suren Baghdasaryan Reviewed-by: Harry Yoo Signed-off-by: Vlastimil Babka --- include/linux/slab.h | 16 ++++ mm/slub.c | 263 +++++++++++++++++++++++++++++++++++++++++++++++= ++++ 2 files changed, 279 insertions(+) diff --git a/include/linux/slab.h b/include/linux/slab.h index 49acbcdc6696fd120c402adf757b3f41660ad50a..680193356ac7a22f9df5cd9b71f= f8b81e26404ad 100644 --- a/include/linux/slab.h +++ b/include/linux/slab.h @@ -829,6 +829,22 @@ void *kmem_cache_alloc_node_noprof(struct kmem_cache *= s, gfp_t flags, int node) __assume_slab_alignment __malloc; #define kmem_cache_alloc_node(...) alloc_hooks(kmem_cache_alloc_node_nopro= f(__VA_ARGS__)) =20 +struct slab_sheaf * +kmem_cache_prefill_sheaf(struct kmem_cache *s, gfp_t gfp, unsigned int siz= e); + +int kmem_cache_refill_sheaf(struct kmem_cache *s, gfp_t gfp, + struct slab_sheaf **sheafp, unsigned int size); + +void kmem_cache_return_sheaf(struct kmem_cache *s, gfp_t gfp, + struct slab_sheaf *sheaf); + +void *kmem_cache_alloc_from_sheaf_noprof(struct kmem_cache *cachep, gfp_t = gfp, + struct slab_sheaf *sheaf) __assume_slab_alignment __malloc; +#define kmem_cache_alloc_from_sheaf(...) 
\ + alloc_hooks(kmem_cache_alloc_from_sheaf_noprof(__VA_ARGS__)) + +unsigned int kmem_cache_sheaf_size(struct slab_sheaf *sheaf); + /* * These macros allow declaring a kmem_buckets * parameter alongside size,= which * can be compiled out with CONFIG_SLAB_BUCKETS=3Dn so that a large number= of call diff --git a/mm/slub.c b/mm/slub.c index 19cd8444ae5d210c77ae767912ca1ff3fc69c2a8..38f5b865d3093556171e0f6530d= 395718b438099 100644 --- a/mm/slub.c +++ b/mm/slub.c @@ -401,6 +401,11 @@ enum stat_item { BARN_GET_FAIL, /* Failed to get full sheaf from barn */ BARN_PUT, /* Put full sheaf to barn */ BARN_PUT_FAIL, /* Failed to put full sheaf to barn */ + SHEAF_PREFILL_FAST, /* Sheaf prefill grabbed the spare sheaf */ + SHEAF_PREFILL_SLOW, /* Sheaf prefill found no spare sheaf */ + SHEAF_PREFILL_OVERSIZE, /* Allocation of oversize sheaf for prefill */ + SHEAF_RETURN_FAST, /* Sheaf return reattached spare sheaf */ + SHEAF_RETURN_SLOW, /* Sheaf return could not reattach spare */ NR_SLUB_STAT_ITEMS }; =20 @@ -462,6 +467,8 @@ struct slab_sheaf { union { struct rcu_head rcu_head; struct list_head barn_list; + /* only used for prefilled sheafs */ + unsigned int capacity; }; struct kmem_cache *cache; unsigned int size; @@ -2838,6 +2845,30 @@ static void barn_put_full_sheaf(struct node_barn *ba= rn, struct slab_sheaf *sheaf spin_unlock_irqrestore(&barn->lock, flags); } =20 +static struct slab_sheaf *barn_get_full_or_empty_sheaf(struct node_barn *b= arn) +{ + struct slab_sheaf *sheaf =3D NULL; + unsigned long flags; + + spin_lock_irqsave(&barn->lock, flags); + + if (barn->nr_full) { + sheaf =3D list_first_entry(&barn->sheaves_full, struct slab_sheaf, + barn_list); + list_del(&sheaf->barn_list); + barn->nr_full--; + } else if (barn->nr_empty) { + sheaf =3D list_first_entry(&barn->sheaves_empty, + struct slab_sheaf, barn_list); + list_del(&sheaf->barn_list); + barn->nr_empty--; + } + + spin_unlock_irqrestore(&barn->lock, flags); + + return sheaf; +} + /* * If a full sheaf is available, return it and put the supplied empty one = to * barn. We ignore the limit on empty sheaves as the number of sheaves doe= sn't @@ -5042,6 +5073,228 @@ void *kmem_cache_alloc_node_noprof(struct kmem_cach= e *s, gfp_t gfpflags, int nod } EXPORT_SYMBOL(kmem_cache_alloc_node_noprof); =20 +/* + * returns a sheaf that has at least the requested size + * when prefilling is needed, do so with given gfp flags + * + * return NULL if sheaf allocation or prefilling failed + */ +struct slab_sheaf * +kmem_cache_prefill_sheaf(struct kmem_cache *s, gfp_t gfp, unsigned int siz= e) +{ + struct slub_percpu_sheaves *pcs; + struct slab_sheaf *sheaf =3D NULL; + + if (unlikely(size > s->sheaf_capacity)) { + + /* + * slab_debug disables cpu sheaves intentionally so all + * prefilled sheaves become "oversize" and we give up on + * performance for the debugging. Same with SLUB_TINY. + * Creating a cache without sheaves and then requesting a + * prefilled sheaf is however not expected, so warn. 
+ */ + WARN_ON_ONCE(s->sheaf_capacity =3D=3D 0 && + !IS_ENABLED(CONFIG_SLUB_TINY) && + !(s->flags & SLAB_DEBUG_FLAGS)); + + sheaf =3D kzalloc(struct_size(sheaf, objects, size), gfp); + if (!sheaf) + return NULL; + + stat(s, SHEAF_PREFILL_OVERSIZE); + sheaf->cache =3D s; + sheaf->capacity =3D size; + + if (!__kmem_cache_alloc_bulk(s, gfp, size, + &sheaf->objects[0])) { + kfree(sheaf); + return NULL; + } + + sheaf->size =3D size; + + return sheaf; + } + + local_lock(&s->cpu_sheaves->lock); + pcs =3D this_cpu_ptr(s->cpu_sheaves); + + if (pcs->spare) { + sheaf =3D pcs->spare; + pcs->spare =3D NULL; + stat(s, SHEAF_PREFILL_FAST); + } else { + stat(s, SHEAF_PREFILL_SLOW); + sheaf =3D barn_get_full_or_empty_sheaf(get_barn(s)); + if (sheaf && sheaf->size) + stat(s, BARN_GET); + else + stat(s, BARN_GET_FAIL); + } + + local_unlock(&s->cpu_sheaves->lock); + + + if (!sheaf) + sheaf =3D alloc_empty_sheaf(s, gfp); + + if (sheaf && sheaf->size < size) { + if (refill_sheaf(s, sheaf, gfp)) { + sheaf_flush_unused(s, sheaf); + free_empty_sheaf(s, sheaf); + sheaf =3D NULL; + } + } + + if (sheaf) + sheaf->capacity =3D s->sheaf_capacity; + + return sheaf; +} + +/* + * Use this to return a sheaf obtained by kmem_cache_prefill_sheaf() + * + * If the sheaf cannot simply become the percpu spare sheaf, but there's s= pace + * for a full sheaf in the barn, we try to refill the sheaf back to the ca= che's + * sheaf_capacity to avoid handling partially full sheaves. + * + * If the refill fails because gfp is e.g. GFP_NOWAIT, or the barn is full= , the + * sheaf is instead flushed and freed. + */ +void kmem_cache_return_sheaf(struct kmem_cache *s, gfp_t gfp, + struct slab_sheaf *sheaf) +{ + struct slub_percpu_sheaves *pcs; + struct node_barn *barn; + + if (unlikely(sheaf->capacity !=3D s->sheaf_capacity)) { + sheaf_flush_unused(s, sheaf); + kfree(sheaf); + return; + } + + local_lock(&s->cpu_sheaves->lock); + pcs =3D this_cpu_ptr(s->cpu_sheaves); + barn =3D get_barn(s); + + if (!pcs->spare) { + pcs->spare =3D sheaf; + sheaf =3D NULL; + stat(s, SHEAF_RETURN_FAST); + } + + local_unlock(&s->cpu_sheaves->lock); + + if (!sheaf) + return; + + stat(s, SHEAF_RETURN_SLOW); + + /* + * If the barn has too many full sheaves or we fail to refill the sheaf, + * simply flush and free it. + */ + if (data_race(barn->nr_full) >=3D MAX_FULL_SHEAVES || + refill_sheaf(s, sheaf, gfp)) { + sheaf_flush_unused(s, sheaf); + free_empty_sheaf(s, sheaf); + return; + } + + barn_put_full_sheaf(barn, sheaf); + stat(s, BARN_PUT); +} + +/* + * refill a sheaf previously returned by kmem_cache_prefill_sheaf to at le= ast + * the given size + * + * the sheaf might be replaced by a new one when requesting more than + * s->sheaf_capacity objects if such replacement is necessary, but the ref= ill + * fails (returning -ENOMEM), the existing sheaf is left intact + * + * In practice we always refill to full sheaf's capacity. + */ +int kmem_cache_refill_sheaf(struct kmem_cache *s, gfp_t gfp, + struct slab_sheaf **sheafp, unsigned int size) +{ + struct slab_sheaf *sheaf; + + /* + * TODO: do we want to support *sheaf =3D=3D NULL to be equivalent of + * kmem_cache_prefill_sheaf() ? 
+ */ + if (!sheafp || !(*sheafp)) + return -EINVAL; + + sheaf =3D *sheafp; + if (sheaf->size >=3D size) + return 0; + + if (likely(sheaf->capacity >=3D size)) { + if (likely(sheaf->capacity =3D=3D s->sheaf_capacity)) + return refill_sheaf(s, sheaf, gfp); + + if (!__kmem_cache_alloc_bulk(s, gfp, sheaf->capacity - sheaf->size, + &sheaf->objects[sheaf->size])) { + return -ENOMEM; + } + sheaf->size =3D sheaf->capacity; + + return 0; + } + + /* + * We had a regular sized sheaf and need an oversize one, or we had an + * oversize one already but need a larger one now. + * This should be a very rare path so let's not complicate it. + */ + sheaf =3D kmem_cache_prefill_sheaf(s, gfp, size); + if (!sheaf) + return -ENOMEM; + + kmem_cache_return_sheaf(s, gfp, *sheafp); + *sheafp =3D sheaf; + return 0; +} + +/* + * Allocate from a sheaf obtained by kmem_cache_prefill_sheaf() + * + * Guaranteed not to fail as many allocations as was the requested size. + * After the sheaf is emptied, it fails - no fallback to the slab cache it= self. + * + * The gfp parameter is meant only to specify __GFP_ZERO or __GFP_ACCOUNT + * memcg charging is forced over limit if necessary, to avoid failure. + */ +void * +kmem_cache_alloc_from_sheaf_noprof(struct kmem_cache *s, gfp_t gfp, + struct slab_sheaf *sheaf) +{ + void *ret =3D NULL; + bool init; + + if (sheaf->size =3D=3D 0) + goto out; + + ret =3D sheaf->objects[--sheaf->size]; + + init =3D slab_want_init_on_alloc(gfp, s); + + /* add __GFP_NOFAIL to force successful memcg charging */ + slab_post_alloc_hook(s, NULL, gfp | __GFP_NOFAIL, 1, &ret, init, s->objec= t_size); +out: + trace_kmem_cache_alloc(_RET_IP_, ret, s, gfp, NUMA_NO_NODE); + + return ret; +} + +unsigned int kmem_cache_sheaf_size(struct slab_sheaf *sheaf) +{ + return sheaf->size; +} /* * To avoid unnecessary overhead, we pass through large allocation requests * directly to the page allocator. 
We use __GFP_COMP, because we will need= to @@ -8576,6 +8829,11 @@ STAT_ATTR(BARN_GET, barn_get); STAT_ATTR(BARN_GET_FAIL, barn_get_fail); STAT_ATTR(BARN_PUT, barn_put); STAT_ATTR(BARN_PUT_FAIL, barn_put_fail); +STAT_ATTR(SHEAF_PREFILL_FAST, sheaf_prefill_fast); +STAT_ATTR(SHEAF_PREFILL_SLOW, sheaf_prefill_slow); +STAT_ATTR(SHEAF_PREFILL_OVERSIZE, sheaf_prefill_oversize); +STAT_ATTR(SHEAF_RETURN_FAST, sheaf_return_fast); +STAT_ATTR(SHEAF_RETURN_SLOW, sheaf_return_slow); #endif /* CONFIG_SLUB_STATS */ =20 #ifdef CONFIG_KFENCE @@ -8676,6 +8934,11 @@ static struct attribute *slab_attrs[] =3D { &barn_get_fail_attr.attr, &barn_put_attr.attr, &barn_put_fail_attr.attr, + &sheaf_prefill_fast_attr.attr, + &sheaf_prefill_slow_attr.attr, + &sheaf_prefill_oversize_attr.attr, + &sheaf_return_fast_attr.attr, + &sheaf_return_slow_attr.attr, #endif #ifdef CONFIG_FAILSLAB &failslab_attr.attr, --=20 2.51.0 From nobody Thu Oct 2 22:43:17 2025 Received: from smtp-out2.suse.de (smtp-out2.suse.de [195.135.223.131]) (using TLSv1.2 with cipher ECDHE-RSA-AES128-GCM-SHA256 (128/128 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id 69D523101C6 for ; Wed, 10 Sep 2025 08:01:29 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; arc=none smtp.client-ip=195.135.223.131 ARC-Seal: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1757491291; cv=none; b=sgyYAyyBdrFGB4u1pl2CoPMHo04PgXUJwtOYhYYlaNqzgZseKcCZoCrEMrkNGSQ+cJl57k0XunvTQ4LcwdAwlyB02p1RzFIzab2GlGW4kvG6XrEVLiQpviRnaBzMT2IoTxcmiqc1WmEdg5daGzR+0SkUvt6Gb69Nw/QF5GVjbQ0= ARC-Message-Signature: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1757491291; c=relaxed/simple; bh=ufv0weIUQSDw/94baAcsmerWNPVFG0iQYpO7dDKSXvM=; h=From:Date:Subject:MIME-Version:Content-Type:Message-Id:References: In-Reply-To:To:Cc; b=ixj/KA4H6P6O5Meydm7av4E5g9Wn14Nc3CvUMV2B8qMt4EsGjD0VEscIx2T658u6Q+lnZJzDQy1MqcaKSXYLKldGK3ZLTr3HCvfSrNXgHfzFd0mODdyzHeFPtY8XJ9y8JxsbqFOf2tBFlcCCwzf4iWNUcuvWYrcSP+yppUyiIUs= ARC-Authentication-Results: i=1; smtp.subspace.kernel.org; dmarc=none (p=none dis=none) header.from=suse.cz; spf=pass smtp.mailfrom=suse.cz; dkim=pass (1024-bit key) header.d=suse.cz header.i=@suse.cz header.b=V5c547z5; dkim=permerror (0-bit key) header.d=suse.cz header.i=@suse.cz header.b=l4TTmia/; dkim=pass (1024-bit key) header.d=suse.cz header.i=@suse.cz header.b=V5c547z5; dkim=permerror (0-bit key) header.d=suse.cz header.i=@suse.cz header.b=l4TTmia/; arc=none smtp.client-ip=195.135.223.131 Authentication-Results: smtp.subspace.kernel.org; dmarc=none (p=none dis=none) header.from=suse.cz Authentication-Results: smtp.subspace.kernel.org; spf=pass smtp.mailfrom=suse.cz Authentication-Results: smtp.subspace.kernel.org; dkim=pass (1024-bit key) header.d=suse.cz header.i=@suse.cz header.b="V5c547z5"; dkim=permerror (0-bit key) header.d=suse.cz header.i=@suse.cz header.b="l4TTmia/"; dkim=pass (1024-bit key) header.d=suse.cz header.i=@suse.cz header.b="V5c547z5"; dkim=permerror (0-bit key) header.d=suse.cz header.i=@suse.cz header.b="l4TTmia/" Received: from imap1.dmz-prg2.suse.org (unknown [10.150.64.97]) (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits) key-exchange X25519 server-signature RSA-PSS (4096 bits) server-digest SHA256) (No client certificate requested) by smtp-out2.suse.de (Postfix) with ESMTPS id 1E6EA5CAC2; Wed, 10 Sep 2025 08:01:07 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.cz; s=susede2_rsa; t=1757491267; 
h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc: mime-version:mime-version:content-type:content-type: content-transfer-encoding:content-transfer-encoding: in-reply-to:in-reply-to:references:references; bh=CVEIM/MDCKtvnIXrVRcMDO5sWDmIfgRmhE+MlQ41QWI=; b=V5c547z5wNthVq/56PoWXgs+qXWml7gJY+grLW4djI68pMUJKJcIdPe0g2NgQEHk0zelrg YBRVqZchcddgXJ+aDPDcehOzBvcE+cLUxxd8HCAce+g1w/sbrAxac0giN03FonM2pCAN7r 7vlsrx2mdJbFUYT9OV+63xWfoSHNVHY= DKIM-Signature: v=1; a=ed25519-sha256; c=relaxed/relaxed; d=suse.cz; s=susede2_ed25519; t=1757491267; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc: mime-version:mime-version:content-type:content-type: content-transfer-encoding:content-transfer-encoding: in-reply-to:in-reply-to:references:references; bh=CVEIM/MDCKtvnIXrVRcMDO5sWDmIfgRmhE+MlQ41QWI=; b=l4TTmia/x1XZGQPqPIl0vAooT4jjY5NWGJY/yYgDFbpY060DOgsAzYqq6wYK2KxaAJK48K kkQ9RUVbKxQXpgBQ== Authentication-Results: smtp-out2.suse.de; none DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.cz; s=susede2_rsa; t=1757491267; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc: mime-version:mime-version:content-type:content-type: content-transfer-encoding:content-transfer-encoding: in-reply-to:in-reply-to:references:references; bh=CVEIM/MDCKtvnIXrVRcMDO5sWDmIfgRmhE+MlQ41QWI=; b=V5c547z5wNthVq/56PoWXgs+qXWml7gJY+grLW4djI68pMUJKJcIdPe0g2NgQEHk0zelrg YBRVqZchcddgXJ+aDPDcehOzBvcE+cLUxxd8HCAce+g1w/sbrAxac0giN03FonM2pCAN7r 7vlsrx2mdJbFUYT9OV+63xWfoSHNVHY= DKIM-Signature: v=1; a=ed25519-sha256; c=relaxed/relaxed; d=suse.cz; s=susede2_ed25519; t=1757491267; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc: mime-version:mime-version:content-type:content-type: content-transfer-encoding:content-transfer-encoding: in-reply-to:in-reply-to:references:references; bh=CVEIM/MDCKtvnIXrVRcMDO5sWDmIfgRmhE+MlQ41QWI=; b=l4TTmia/x1XZGQPqPIl0vAooT4jjY5NWGJY/yYgDFbpY060DOgsAzYqq6wYK2KxaAJK48K kkQ9RUVbKxQXpgBQ== Received: from imap1.dmz-prg2.suse.org (localhost [127.0.0.1]) (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits) key-exchange X25519 server-signature RSA-PSS (4096 bits) server-digest SHA256) (No client certificate requested) by imap1.dmz-prg2.suse.org (Postfix) with ESMTPS id 28A5913AD6; Wed, 10 Sep 2025 08:01:06 +0000 (UTC) Received: from dovecot-director2.suse.de ([2a07:de40:b281:106:10:150:64:167]) by imap1.dmz-prg2.suse.org with ESMTPSA id mPqkCUIwwWgGJAAAD6G6ig (envelope-from ); Wed, 10 Sep 2025 08:01:06 +0000 From: Vlastimil Babka Date: Wed, 10 Sep 2025 10:01:08 +0200 Subject: [PATCH v8 06/23] slab: determine barn status racily outside of lock Precedence: bulk X-Mailing-List: linux-kernel@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 Content-Type: text/plain; charset="utf-8" Content-Transfer-Encoding: quoted-printable Message-Id: <20250910-slub-percpu-caches-v8-6-ca3099d8352c@suse.cz> References: <20250910-slub-percpu-caches-v8-0-ca3099d8352c@suse.cz> In-Reply-To: <20250910-slub-percpu-caches-v8-0-ca3099d8352c@suse.cz> To: Suren Baghdasaryan , "Liam R. 
Howlett" , Christoph Lameter , David Rientjes Cc: Roman Gushchin , Harry Yoo , Uladzislau Rezki , Sidhartha Kumar , linux-mm@kvack.org, linux-kernel@vger.kernel.org, rcu@vger.kernel.org, maple-tree@lists.infradead.org, vbabka@suse.cz X-Mailer: b4 0.14.2 X-Spamd-Result: default: False [-4.30 / 50.00]; BAYES_HAM(-3.00)[100.00%]; NEURAL_HAM_LONG(-1.00)[-1.000]; NEURAL_HAM_SHORT(-0.20)[-0.999]; MIME_GOOD(-0.10)[text/plain]; RCVD_VIA_SMTP_AUTH(0.00)[]; ARC_NA(0.00)[]; FREEMAIL_ENVRCPT(0.00)[gmail.com]; MIME_TRACE(0.00)[0:+]; FUZZY_RATELIMITED(0.00)[rspamd.com]; TO_DN_SOME(0.00)[]; RCPT_COUNT_TWELVE(0.00)[13]; RCVD_TLS_ALL(0.00)[]; MID_RHS_MATCH_FROM(0.00)[]; DKIM_SIGNED(0.00)[suse.cz:s=susede2_rsa,suse.cz:s=susede2_ed25519]; FROM_HAS_DN(0.00)[]; FREEMAIL_CC(0.00)[linux.dev,oracle.com,gmail.com,kvack.org,vger.kernel.org,lists.infradead.org,suse.cz]; R_RATELIMIT(0.00)[to_ip_from(RLwn5r54y1cp81no5tmbbew5oc)]; FROM_EQ_ENVFROM(0.00)[]; TO_MATCH_ENVRCPT_ALL(0.00)[]; RCVD_COUNT_TWO(0.00)[2]; DBL_BLOCKED_OPENRESOLVER(0.00)[suse.cz:mid,suse.cz:email] X-Spam-Flag: NO X-Spam-Level: X-Spam-Score: -4.30 The possibility of many barn operations is determined by the current number of full or empty sheaves. Taking the barn->lock just to find out that e.g. there are no empty sheaves results in unnecessary overhead and lock contention. Thus perform these checks outside of the lock with a data_race() annotated variable read and fail quickly without taking the lock. Checks for sheaf availability that racily succeed have to be obviously repeated under the lock for correctness, but we can skip repeating checks if there are too many sheaves on the given list as the limits don't need to be strict. Reviewed-by: Suren Baghdasaryan Reviewed-by: Harry Yoo Signed-off-by: Vlastimil Babka --- mm/slub.c | 27 ++++++++++++++++++++------- 1 file changed, 20 insertions(+), 7 deletions(-) diff --git a/mm/slub.c b/mm/slub.c index 38f5b865d3093556171e0f6530d395718b438099..35274ce4e709c9da7ac8f9006c8= 24f28709e923d 100644 --- a/mm/slub.c +++ b/mm/slub.c @@ -2801,9 +2801,12 @@ static struct slab_sheaf *barn_get_empty_sheaf(struc= t node_barn *barn) struct slab_sheaf *empty =3D NULL; unsigned long flags; =20 + if (!data_race(barn->nr_empty)) + return NULL; + spin_lock_irqsave(&barn->lock, flags); =20 - if (barn->nr_empty) { + if (likely(barn->nr_empty)) { empty =3D list_first_entry(&barn->sheaves_empty, struct slab_sheaf, barn_list); list_del(&empty->barn_list); @@ -2850,6 +2853,9 @@ static struct slab_sheaf *barn_get_full_or_empty_shea= f(struct node_barn *barn) struct slab_sheaf *sheaf =3D NULL; unsigned long flags; =20 + if (!data_race(barn->nr_full) && !data_race(barn->nr_empty)) + return NULL; + spin_lock_irqsave(&barn->lock, flags); =20 if (barn->nr_full) { @@ -2880,9 +2886,12 @@ barn_replace_empty_sheaf(struct node_barn *barn, str= uct slab_sheaf *empty) struct slab_sheaf *full =3D NULL; unsigned long flags; =20 + if (!data_race(barn->nr_full)) + return NULL; + spin_lock_irqsave(&barn->lock, flags); =20 - if (barn->nr_full) { + if (likely(barn->nr_full)) { full =3D list_first_entry(&barn->sheaves_full, struct slab_sheaf, barn_list); list_del(&full->barn_list); @@ -2906,19 +2915,23 @@ barn_replace_full_sheaf(struct node_barn *barn, str= uct slab_sheaf *full) struct slab_sheaf *empty; unsigned long flags; =20 + /* we don't repeat this check under barn->lock as it's not critical */ + if (data_race(barn->nr_full) >=3D MAX_FULL_SHEAVES) + return ERR_PTR(-E2BIG); + if (!data_race(barn->nr_empty)) + return ERR_PTR(-ENOMEM); + 
 	spin_lock_irqsave(&barn->lock, flags);
 
-	if (barn->nr_full >= MAX_FULL_SHEAVES) {
-		empty = ERR_PTR(-E2BIG);
-	} else if (!barn->nr_empty) {
-		empty = ERR_PTR(-ENOMEM);
-	} else {
+	if (likely(barn->nr_empty)) {
 		empty = list_first_entry(&barn->sheaves_empty,
					 struct slab_sheaf, barn_list);
 		list_del(&empty->barn_list);
 		list_add(&full->barn_list, &barn->sheaves_full);
 		barn->nr_empty--;
 		barn->nr_full++;
+	} else {
+		empty = ERR_PTR(-ENOMEM);
 	}
 
 	spin_unlock_irqrestore(&barn->lock, flags);
-- 
2.51.0
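The fast-fail idiom in this patch (an unlocked data_race() peek that bails
out early, then a re-check under barn->lock before actually dequeuing) is
not specific to the barn. A minimal userspace sketch of the same pattern,
with hypothetical toy_pool / toy_pool_take() names, where C11 relaxed
atomics and a pthread mutex stand in for data_race() and the spinlock:

#include <pthread.h>
#include <stdatomic.h>
#include <stddef.h>

/* Toy pool guarded by a mutex; nr_free plays the role of barn->nr_empty. */
struct toy_pool {
	pthread_mutex_t lock;
	atomic_int nr_free;
	void *slots[64];
};

/* Take one cached item, or return NULL if the pool looks (or is) empty. */
void *toy_pool_take(struct toy_pool *pool)
{
	void *item = NULL;

	/*
	 * Unlocked peek, analogous to data_race(barn->nr_empty): the value
	 * may be stale, but it lets the common "nothing available" case
	 * return without ever touching the lock.
	 */
	if (atomic_load_explicit(&pool->nr_free, memory_order_relaxed) == 0)
		return NULL;

	pthread_mutex_lock(&pool->lock);
	/* The racy peek was only a hint; re-check under the lock. */
	if (atomic_load(&pool->nr_free) > 0) {
		int idx = atomic_fetch_sub(&pool->nr_free, 1) - 1;
		item = pool->slots[idx];
		pool->slots[idx] = NULL;
	}
	pthread_mutex_unlock(&pool->lock);

	return item;
}

The trade-off is the same as in the patch: a stale peek can only cause a
harmless extra miss or an extra lock acquisition; correctness always comes
from the locked re-check.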
From nobody Thu Oct 2 22:43:17 2025
From: Vlastimil Babka
Date: Wed, 10 Sep 2025 10:01:09 +0200
Subject: [PATCH v8 07/23] slab: skip percpu sheaves for remote object freeing
Message-Id: <20250910-slub-percpu-caches-v8-7-ca3099d8352c@suse.cz>
References: <20250910-slub-percpu-caches-v8-0-ca3099d8352c@suse.cz>
In-Reply-To: <20250910-slub-percpu-caches-v8-0-ca3099d8352c@suse.cz>
To: Suren Baghdasaryan, "Liam R. Howlett", Christoph Lameter, David Rientjes
Cc: Roman Gushchin, Harry Yoo, Uladzislau Rezki, Sidhartha Kumar,
    linux-mm@kvack.org, linux-kernel@vger.kernel.org, rcu@vger.kernel.org,
    maple-tree@lists.infradead.org, vbabka@suse.cz

Since we don't control the NUMA locality of objects in percpu sheaves,
allocations with node restrictions bypass them. Allocations without
restrictions may however still expect to get local objects with high
probability, and the introduction of sheaves can decrease it due to freed
objects from a remote node ending up in percpu sheaves.

The fraction of such remote frees seems low (5% on an 8-node machine) but
it can be expected that some cache or workload specific corner cases
exist. We can either conclude that this is not a problem due to the low
fraction, or we can make remote frees bypass percpu sheaves and go
directly to their slabs. This will make the remote frees more expensive,
but if it's only a small fraction, most frees will still benefit from the
lower overhead of percpu sheaves.

This patch thus makes remote object freeing bypass percpu sheaves,
including bulk freeing, and kfree_rcu() via the rcu_free sheaf. However
it's not intended to be a 100% guarantee that percpu sheaves will only
contain local objects. The refill from slabs does not provide that
guarantee in the first place, and there might be cpu migrations happening
when we need to unlock the local_lock. Avoiding all that could be
possible but complicated, so we can leave it for later investigation
whether it would be worth it. It can be expected that the more selective
freeing will itself prevent accumulation of remote objects in percpu
sheaves, so any such violations would have only short-term effects.
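A toy model of the freeing policy described above, in plain userspace C:
the hypothetical toy_* names are illustrative only, an integer node id
stored in each object stands in for slab_nid(virt_to_slab(obj)) vs.
numa_mem_id(), and REMOTE_BATCH_MAX mirrors the role of PCS_BATCH_MAX.
Local objects take the per-CPU fast path while remote ones are collected
and freed directly in batches:

#include <stddef.h>

#define REMOTE_BATCH_MAX 16	/* assumption; the kernel patch uses PCS_BATCH_MAX */

/* Every toy object records the node it was allocated on. */
struct toy_obj   { int nid; /* payload would follow */ };
struct toy_cache { int local_nid; size_t fast_frees; size_t direct_frees; };

static void free_local_fast(struct toy_cache *c, struct toy_obj *obj)
{
	(void)obj;
	c->fast_frees++;	/* stands in for the percpu-sheaf fast path */
}

static void free_direct(struct toy_cache *c, size_t nr, struct toy_obj **p)
{
	(void)p;
	c->direct_frees += nr;	/* stands in for __kmem_cache_free_bulk() */
}

/*
 * Bulk free: local objects go through the fast path, remote objects are
 * gathered into a small side buffer and flushed directly in batches,
 * mirroring the remote_objects[] handling added to free_to_pcs_bulk().
 */
void toy_bulk_free(struct toy_cache *c, size_t nr, struct toy_obj **p)
{
	struct toy_obj *remote[REMOTE_BATCH_MAX];
	size_t remote_nr = 0;

	for (size_t i = 0; i < nr; i++) {
		if (p[i]->nid == c->local_nid) {
			free_local_fast(c, p[i]);
			continue;
		}
		remote[remote_nr++] = p[i];
		if (remote_nr == REMOTE_BATCH_MAX) {
			free_direct(c, remote_nr, remote);
			remote_nr = 0;
		}
	}
	if (remote_nr)
		free_direct(c, remote_nr, remote);
}

The single-object path in the patch is the degenerate case of the same
test: if the object's node is not the local one, skip the sheaf and free
directly to the slab.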
Reviewed-by: Harry Yoo Signed-off-by: Vlastimil Babka Reviewed-by: Suren Baghdasaryan --- mm/slab_common.c | 7 +++++-- mm/slub.c | 42 ++++++++++++++++++++++++++++++++++++------ 2 files changed, 41 insertions(+), 8 deletions(-) diff --git a/mm/slab_common.c b/mm/slab_common.c index 005a4319c06a01d2b616a75396fcc43766a62ddb..b6601e0fe598e24bd8d456dce4f= c82c65b342bfd 100644 --- a/mm/slab_common.c +++ b/mm/slab_common.c @@ -1623,8 +1623,11 @@ static bool kfree_rcu_sheaf(void *obj) =20 slab =3D folio_slab(folio); s =3D slab->slab_cache; - if (s->cpu_sheaves) - return __kfree_rcu_sheaf(s, obj); + if (s->cpu_sheaves) { + if (likely(!IS_ENABLED(CONFIG_NUMA) || + slab_nid(slab) =3D=3D numa_mem_id())) + return __kfree_rcu_sheaf(s, obj); + } =20 return false; } diff --git a/mm/slub.c b/mm/slub.c index 35274ce4e709c9da7ac8f9006c824f28709e923d..9699d048b2cd08ee75c4cc3d1e4= 60868704520b1 100644 --- a/mm/slub.c +++ b/mm/slub.c @@ -472,6 +472,7 @@ struct slab_sheaf { }; struct kmem_cache *cache; unsigned int size; + int node; /* only used for rcu_sheaf */ void *objects[]; }; =20 @@ -5828,7 +5829,7 @@ static void rcu_free_sheaf(struct rcu_head *head) */ __rcu_free_sheaf_prepare(s, sheaf); =20 - barn =3D get_node(s, numa_mem_id())->barn; + barn =3D get_node(s, sheaf->node)->barn; =20 /* due to slab_free_hook() */ if (unlikely(sheaf->size =3D=3D 0)) @@ -5914,10 +5915,12 @@ bool __kfree_rcu_sheaf(struct kmem_cache *s, void *= obj) =20 rcu_sheaf->objects[rcu_sheaf->size++] =3D obj; =20 - if (likely(rcu_sheaf->size < s->sheaf_capacity)) + if (likely(rcu_sheaf->size < s->sheaf_capacity)) { rcu_sheaf =3D NULL; - else + } else { pcs->rcu_free =3D NULL; + rcu_sheaf->node =3D numa_mem_id(); + } =20 local_unlock(&s->cpu_sheaves->lock); =20 @@ -5944,7 +5947,11 @@ static void free_to_pcs_bulk(struct kmem_cache *s, s= ize_t size, void **p) bool init =3D slab_want_init_on_free(s); unsigned int batch, i =3D 0; struct node_barn *barn; + void *remote_objects[PCS_BATCH_MAX]; + unsigned int remote_nr =3D 0; + int node =3D numa_mem_id(); =20 +next_remote_batch: while (i < size) { struct slab *slab =3D virt_to_slab(p[i]); =20 @@ -5954,7 +5961,15 @@ static void free_to_pcs_bulk(struct kmem_cache *s, s= ize_t size, void **p) if (unlikely(!slab_free_hook(s, p[i], init, false))) { p[i] =3D p[--size]; if (!size) - return; + goto flush_remote; + continue; + } + + if (unlikely(IS_ENABLED(CONFIG_NUMA) && slab_nid(slab) !=3D node)) { + remote_objects[remote_nr] =3D p[i]; + p[i] =3D p[--size]; + if (++remote_nr >=3D PCS_BATCH_MAX) + goto flush_remote; continue; } =20 @@ -6024,6 +6039,15 @@ static void free_to_pcs_bulk(struct kmem_cache *s, s= ize_t size, void **p) */ fallback: __kmem_cache_free_bulk(s, size, p); + +flush_remote: + if (remote_nr) { + __kmem_cache_free_bulk(s, remote_nr, &remote_objects[0]); + if (i < size) { + remote_nr =3D 0; + goto next_remote_batch; + } + } } =20 #ifndef CONFIG_SLUB_TINY @@ -6115,8 +6139,14 @@ void slab_free(struct kmem_cache *s, struct slab *sl= ab, void *object, if (unlikely(!slab_free_hook(s, object, slab_want_init_on_free(s), false)= )) return; =20 - if (!s->cpu_sheaves || !free_to_pcs(s, object)) - do_slab_free(s, slab, object, object, 1, addr); + if (s->cpu_sheaves && likely(!IS_ENABLED(CONFIG_NUMA) || + slab_nid(slab) =3D=3D numa_mem_id())) { + if (likely(free_to_pcs(s, object))) { + return; + } + } + + do_slab_free(s, slab, object, object, 1, addr); } =20 #ifdef CONFIG_MEMCG --=20 2.51.0 From nobody Thu Oct 2 22:43:17 2025 Received: from smtp-out2.suse.de (smtp-out2.suse.de [195.135.223.131]) 
From: Vlastimil Babka
Date: Wed, 10 Sep 2025 10:01:10 +0200
Subject: [PATCH v8 08/23] slab: allow NUMA restricted allocations to use percpu sheaves
Message-Id: <20250910-slub-percpu-caches-v8-8-ca3099d8352c@suse.cz>
References: <20250910-slub-percpu-caches-v8-0-ca3099d8352c@suse.cz>
In-Reply-To: <20250910-slub-percpu-caches-v8-0-ca3099d8352c@suse.cz>
To: Suren Baghdasaryan , "Liam R.
Howlett" , Christoph Lameter , David Rientjes Cc: Roman Gushchin , Harry Yoo , Uladzislau Rezki , Sidhartha Kumar , linux-mm@kvack.org, linux-kernel@vger.kernel.org, rcu@vger.kernel.org, maple-tree@lists.infradead.org, vbabka@suse.cz X-Mailer: b4 0.14.2 X-Spamd-Result: default: False [-4.51 / 50.00]; BAYES_HAM(-3.00)[100.00%]; NEURAL_HAM_LONG(-1.00)[-1.000]; R_DKIM_ALLOW(-0.20)[suse.cz:s=susede2_rsa,suse.cz:s=susede2_ed25519]; NEURAL_HAM_SHORT(-0.20)[-1.000]; MIME_GOOD(-0.10)[text/plain]; MX_GOOD(-0.01)[]; DKIM_SIGNED(0.00)[suse.cz:s=susede2_rsa,suse.cz:s=susede2_ed25519]; FUZZY_RATELIMITED(0.00)[rspamd.com]; RBL_SPAMHAUS_BLOCKED_OPENRESOLVER(0.00)[2a07:de40:b281:104:10:150:64:97:from]; ARC_NA(0.00)[]; RCPT_COUNT_TWELVE(0.00)[13]; TO_MATCH_ENVRCPT_ALL(0.00)[]; MIME_TRACE(0.00)[0:+]; SPAMHAUS_XBL(0.00)[2a07:de40:b281:104:10:150:64:97:from]; FREEMAIL_ENVRCPT(0.00)[gmail.com]; RCVD_TLS_ALL(0.00)[]; TO_DN_SOME(0.00)[]; RCVD_COUNT_TWO(0.00)[2]; DNSWL_BLOCKED(0.00)[2a07:de40:b281:106:10:150:64:167:received,2a07:de40:b281:104:10:150:64:97:from]; FROM_EQ_ENVFROM(0.00)[]; FROM_HAS_DN(0.00)[]; FREEMAIL_CC(0.00)[linux.dev,oracle.com,gmail.com,kvack.org,vger.kernel.org,lists.infradead.org,suse.cz]; MID_RHS_MATCH_FROM(0.00)[]; RCVD_VIA_SMTP_AUTH(0.00)[]; RECEIVED_SPAMHAUS_BLOCKED_OPENRESOLVER(0.00)[2a07:de40:b281:106:10:150:64:167:received]; DKIM_TRACE(0.00)[suse.cz:+]; R_RATELIMIT(0.00)[to_ip_from(RLfsjnp7neds983g95ihcnuzgq)]; DBL_BLOCKED_OPENRESOLVER(0.00)[suse.cz:mid,suse.cz:dkim,suse.cz:email] X-Spam-Flag: NO X-Spam-Level: X-Rspamd-Queue-Id: 27A965CAD6 X-Rspamd-Server: rspamd2.dmz-prg2.suse.org X-Rspamd-Action: no action X-Spam-Score: -4.51 Currently allocations asking for a specific node explicitly or via mempolicy in strict_numa node bypass percpu sheaves. Since sheaves contain mostly local objects, we can try allocating from them if the local node happens to be the requested node or allowed by the mempolicy. If we find the object from percpu sheaves is not from the expected node, we skip the sheaves - this should be rare. Reviewed-by: Harry Yoo Signed-off-by: Vlastimil Babka Reviewed-by: Suren Baghdasaryan --- mm/slub.c | 53 ++++++++++++++++++++++++++++++++++++++++++++++------- 1 file changed, 46 insertions(+), 7 deletions(-) diff --git a/mm/slub.c b/mm/slub.c index 9699d048b2cd08ee75c4cc3d1e460868704520b1..3746c0229cc2f9658a589416c63= c21fbf2850c44 100644 --- a/mm/slub.c +++ b/mm/slub.c @@ -4888,18 +4888,43 @@ __pcs_replace_empty_main(struct kmem_cache *s, stru= ct slub_percpu_sheaves *pcs, } =20 static __fastpath_inline -void *alloc_from_pcs(struct kmem_cache *s, gfp_t gfp) +void *alloc_from_pcs(struct kmem_cache *s, gfp_t gfp, int node) { struct slub_percpu_sheaves *pcs; + bool node_requested; void *object; =20 #ifdef CONFIG_NUMA - if (static_branch_unlikely(&strict_numa)) { - if (current->mempolicy) - return NULL; + if (static_branch_unlikely(&strict_numa) && + node =3D=3D NUMA_NO_NODE) { + + struct mempolicy *mpol =3D current->mempolicy; + + if (mpol) { + /* + * Special BIND rule support. If the local node + * is in permitted set then do not redirect + * to a particular node. + * Otherwise we apply the memory policy to get + * the node we need to allocate on. 
+ */ + if (mpol->mode !=3D MPOL_BIND || + !node_isset(numa_mem_id(), mpol->nodes)) + + node =3D mempolicy_slab_node(); + } } #endif =20 + node_requested =3D IS_ENABLED(CONFIG_NUMA) && node !=3D NUMA_NO_NODE; + + /* + * We assume the percpu sheaves contain only local objects although it's + * not completely guaranteed, so we verify later. + */ + if (unlikely(node_requested && node !=3D numa_mem_id())) + return NULL; + if (!local_trylock(&s->cpu_sheaves->lock)) return NULL; =20 @@ -4911,7 +4936,21 @@ void *alloc_from_pcs(struct kmem_cache *s, gfp_t gfp) return NULL; } =20 - object =3D pcs->main->objects[--pcs->main->size]; + object =3D pcs->main->objects[pcs->main->size - 1]; + + if (unlikely(node_requested)) { + /* + * Verify that the object was from the node we want. This could + * be false because of cpu migration during an unlocked part of + * the current allocation or previous freeing process. + */ + if (folio_nid(virt_to_folio(object)) !=3D node) { + local_unlock(&s->cpu_sheaves->lock); + return NULL; + } + } + + pcs->main->size--; =20 local_unlock(&s->cpu_sheaves->lock); =20 @@ -5011,8 +5050,8 @@ static __fastpath_inline void *slab_alloc_node(struct= kmem_cache *s, struct list if (unlikely(object)) goto out; =20 - if (s->cpu_sheaves && node =3D=3D NUMA_NO_NODE) - object =3D alloc_from_pcs(s, gfpflags); + if (s->cpu_sheaves) + object =3D alloc_from_pcs(s, gfpflags, node); =20 if (!object) object =3D __slab_alloc_node(s, gfpflags, node, addr, orig_size); --=20 2.51.0 From nobody Thu Oct 2 22:43:17 2025 Received: from smtp-out2.suse.de (smtp-out2.suse.de [195.135.223.131]) (using TLSv1.2 with cipher ECDHE-RSA-AES128-GCM-SHA256 (128/128 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id 839A63148A9 for ; Wed, 10 Sep 2025 08:02:09 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; arc=none smtp.client-ip=195.135.223.131 ARC-Seal: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1757491331; cv=none; b=jwDMfdVlyxrXla4UOqRxfmDrh3B2JiNrZeLf71yB1SevoLcI3NsHLwRyxT8rYEUy07p/2RjYvAs4Pagf2oy0YzNd9uy6eOxtiuE6iUeiSLo3achVI9pg42jiy314BBSo696XBte+Ev1Kxj/ZG3Sdt004nomY2AsvCocTpDy5qWk= ARC-Message-Signature: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1757491331; c=relaxed/simple; bh=FINhSf1K8oG3NyvpCAz8PnZ+wOCcUWskrNLcKXQDC68=; h=From:Date:Subject:MIME-Version:Content-Type:Message-Id:References: In-Reply-To:To:Cc; b=kw9mvosDmGxSxOZoFbTfHdmUY8mAPw5V2nanOJ0u1fC/SzjW+z81nhOT9evjj851oi8wDtys5RuBkyQnRh/7fSelG9bIcdvx+crb0SVXLV2gzZBVER2PZN2VADQq5tXhv1uTALhxdxJfECukyy/H1D2eOYVWyEsQU4sD3nOa+dw= ARC-Authentication-Results: i=1; smtp.subspace.kernel.org; dmarc=none (p=none dis=none) header.from=suse.cz; spf=pass smtp.mailfrom=suse.cz; dkim=pass (1024-bit key) header.d=suse.cz header.i=@suse.cz header.b=LodN7bzu; dkim=permerror (0-bit key) header.d=suse.cz header.i=@suse.cz header.b=+Y1JntXq; arc=none smtp.client-ip=195.135.223.131 Authentication-Results: smtp.subspace.kernel.org; dmarc=none (p=none dis=none) header.from=suse.cz Authentication-Results: smtp.subspace.kernel.org; spf=pass smtp.mailfrom=suse.cz Authentication-Results: smtp.subspace.kernel.org; dkim=pass (1024-bit key) header.d=suse.cz header.i=@suse.cz header.b="LodN7bzu"; dkim=permerror (0-bit key) header.d=suse.cz header.i=@suse.cz header.b="+Y1JntXq" Received: from imap1.dmz-prg2.suse.org (imap1.dmz-prg2.suse.org [IPv6:2a07:de40:b281:104:10:150:64:97]) (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits) key-exchange X25519 server-signature RSA-PSS (4096 
From: Vlastimil Babka
Date: Wed, 10 Sep 2025 10:01:11 +0200
Subject: [PATCH v8 09/23] maple_tree: remove redundant __GFP_NOWARN
Message-Id: <20250910-slub-percpu-caches-v8-9-ca3099d8352c@suse.cz>
References: <20250910-slub-percpu-caches-v8-0-ca3099d8352c@suse.cz>
In-Reply-To: <20250910-slub-percpu-caches-v8-0-ca3099d8352c@suse.cz>
To: Suren Baghdasaryan, "Liam R. Howlett", Christoph Lameter, David Rientjes
Cc: Roman Gushchin, Harry Yoo, Uladzislau Rezki, Sidhartha Kumar,
    linux-mm@kvack.org, linux-kernel@vger.kernel.org, rcu@vger.kernel.org,
    maple-tree@lists.infradead.org, vbabka@suse.cz, Qianfeng Rong, Wei Yang,
    "Matthew Wilcox (Oracle)", Andrew Morton

From: Qianfeng Rong

Commit 16f5dfbc851b ("gfp: include __GFP_NOWARN in GFP_NOWAIT") made
GFP_NOWAIT implicitly include __GFP_NOWARN. Therefore, explicit
__GFP_NOWARN combined with GFP_NOWAIT (e.g., `GFP_NOWAIT | __GFP_NOWARN`)
is now redundant. Let's clean up these redundant flags across subsystems.

No functional changes.

Link: https://lkml.kernel.org/r/20250804125657.482109-1-rongqianfeng@vivo.com
Signed-off-by: Qianfeng Rong
Reviewed-by: Wei Yang
Reviewed-by: Liam R. Howlett
Cc: Matthew Wilcox (Oracle)
Signed-off-by: Andrew Morton
Signed-off-by: Vlastimil Babka
---
 lib/maple_tree.c | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/lib/maple_tree.c b/lib/maple_tree.c
index b4ee2d29d7a962ca374467d0533185f2db3d35ff..38fb68c082915211c80f473d313159599fe97e2c 100644
--- a/lib/maple_tree.c
+++ b/lib/maple_tree.c
@@ -1344,11 +1344,11 @@ static void mas_node_count_gfp(struct ma_state *mas, int count, gfp_t gfp)
  * @mas: The maple state
  * @count: The number of nodes needed
  *
- * Note: Uses GFP_NOWAIT | __GFP_NOWARN for gfp flags.
+ * Note: Uses GFP_NOWAIT for gfp flags.
  */
 static void mas_node_count(struct ma_state *mas, int count)
 {
-	return mas_node_count_gfp(mas, count, GFP_NOWAIT | __GFP_NOWARN);
+	return mas_node_count_gfp(mas, count, GFP_NOWAIT);
 }
 
 /*
-- 
2.51.0

From nobody Thu Oct 2 22:43:17 2025
From: Vlastimil Babka
Date: Wed, 10 Sep 2025 10:01:12 +0200
Subject: [PATCH v8 10/23] tools/testing/vma: clean up stubs in vma_internal.h
Message-Id: <20250910-slub-percpu-caches-v8-10-ca3099d8352c@suse.cz>
References: <20250910-slub-percpu-caches-v8-0-ca3099d8352c@suse.cz>
In-Reply-To: <20250910-slub-percpu-caches-v8-0-ca3099d8352c@suse.cz>
To: Suren Baghdasaryan, "Liam R. Howlett", Christoph Lameter, David Rientjes
Cc: Roman Gushchin, Harry Yoo, Uladzislau Rezki, Sidhartha Kumar,
    linux-mm@kvack.org, linux-kernel@vger.kernel.org, rcu@vger.kernel.org,
    maple-tree@lists.infradead.org, vbabka@suse.cz, Lorenzo Stoakes,
    WangYuli, Jann Horn, Andrew Morton

From: Lorenzo Stoakes

We do not need to reference arguments just to avoid compiler warnings;
the warning in question does not arise here, so remove all of the
instances of '(void)xxx' introduced purely to avoid it.

As reported by WangYuli in the referenced mail, GCC 8.3 and before will
have issues compiling this file if parameter names are not provided, so
ensure these are always provided.

Finally, perform a trivial fix up of kmem_cache_alloc(), which technically
has its parameters in the incorrect order (as reported by Vlastimil Babka
off-list).

Link: https://lkml.kernel.org/r/20250826102824.22730-1-lorenzo.stoakes@oracle.com
Signed-off-by: Lorenzo Stoakes
Reported-by: WangYuli
Closes: https://lore.kernel.org/linux-mm/EFCEBE7E301589DE+20250729084700.208767-1-wangyuli@uniontech.com/
Reported-by: Vlastimil Babka
Acked-by: Vlastimil Babka
Reviewed-by: Liam R.
Howlett Cc: Jann Horn Cc: WangYuli Signed-off-by: Andrew Morton Signed-off-by: Vlastimil Babka --- tools/testing/vma/vma_internal.h | 167 +++++++++++++----------------------= ---- 1 file changed, 57 insertions(+), 110 deletions(-) diff --git a/tools/testing/vma/vma_internal.h b/tools/testing/vma/vma_inter= nal.h index 3639aa8dd2b06ebe5b9cfcfe6669994fd38c482d..f8cf5b184d5b51dd627ff440943= a7af3c549f482 100644 --- a/tools/testing/vma/vma_internal.h +++ b/tools/testing/vma/vma_internal.h @@ -676,9 +676,7 @@ static inline struct kmem_cache *__kmem_cache_create(co= nst char *name, =20 static inline void *kmem_cache_alloc(struct kmem_cache *s, gfp_t gfpflags) { - (void)gfpflags; - - return calloc(s->object_size, 1); + return calloc(1, s->object_size); } =20 static inline void kmem_cache_free(struct kmem_cache *s, void *x) @@ -842,11 +840,11 @@ static inline unsigned long vma_pages(struct vm_area_= struct *vma) return (vma->vm_end - vma->vm_start) >> PAGE_SHIFT; } =20 -static inline void fput(struct file *) +static inline void fput(struct file *file) { } =20 -static inline void mpol_put(struct mempolicy *) +static inline void mpol_put(struct mempolicy *pol) { } =20 @@ -854,15 +852,15 @@ static inline void lru_add_drain(void) { } =20 -static inline void tlb_gather_mmu(struct mmu_gather *, struct mm_struct *) +static inline void tlb_gather_mmu(struct mmu_gather *tlb, struct mm_struct= *mm) { } =20 -static inline void update_hiwater_rss(struct mm_struct *) +static inline void update_hiwater_rss(struct mm_struct *mm) { } =20 -static inline void update_hiwater_vm(struct mm_struct *) +static inline void update_hiwater_vm(struct mm_struct *mm) { } =20 @@ -871,36 +869,23 @@ static inline void unmap_vmas(struct mmu_gather *tlb,= struct ma_state *mas, unsigned long end_addr, unsigned long tree_end, bool mm_wr_locked) { - (void)tlb; - (void)mas; - (void)vma; - (void)start_addr; - (void)end_addr; - (void)tree_end; - (void)mm_wr_locked; } =20 static inline void free_pgtables(struct mmu_gather *tlb, struct ma_state *= mas, struct vm_area_struct *vma, unsigned long floor, unsigned long ceiling, bool mm_wr_locked) { - (void)tlb; - (void)mas; - (void)vma; - (void)floor; - (void)ceiling; - (void)mm_wr_locked; } =20 -static inline void mapping_unmap_writable(struct address_space *) +static inline void mapping_unmap_writable(struct address_space *mapping) { } =20 -static inline void flush_dcache_mmap_lock(struct address_space *) +static inline void flush_dcache_mmap_lock(struct address_space *mapping) { } =20 -static inline void tlb_finish_mmu(struct mmu_gather *) +static inline void tlb_finish_mmu(struct mmu_gather *tlb) { } =20 @@ -909,7 +894,7 @@ static inline struct file *get_file(struct file *f) return f; } =20 -static inline int vma_dup_policy(struct vm_area_struct *, struct vm_area_s= truct *) +static inline int vma_dup_policy(struct vm_area_struct *src, struct vm_are= a_struct *dst) { return 0; } @@ -936,10 +921,6 @@ static inline void vma_adjust_trans_huge(struct vm_are= a_struct *vma, unsigned long end, struct vm_area_struct *next) { - (void)vma; - (void)start; - (void)end; - (void)next; } =20 static inline void hugetlb_split(struct vm_area_struct *, unsigned long) {} @@ -959,51 +940,48 @@ static inline void vm_acct_memory(long pages) { } =20 -static inline void vma_interval_tree_insert(struct vm_area_struct *, - struct rb_root_cached *) +static inline void vma_interval_tree_insert(struct vm_area_struct *vma, + struct rb_root_cached *rb) { } =20 -static inline void vma_interval_tree_remove(struct 
vm_area_struct *, - struct rb_root_cached *) +static inline void vma_interval_tree_remove(struct vm_area_struct *vma, + struct rb_root_cached *rb) { } =20 -static inline void flush_dcache_mmap_unlock(struct address_space *) +static inline void flush_dcache_mmap_unlock(struct address_space *mapping) { } =20 -static inline void anon_vma_interval_tree_insert(struct anon_vma_chain*, - struct rb_root_cached *) +static inline void anon_vma_interval_tree_insert(struct anon_vma_chain *av= c, + struct rb_root_cached *rb) { } =20 -static inline void anon_vma_interval_tree_remove(struct anon_vma_chain*, - struct rb_root_cached *) +static inline void anon_vma_interval_tree_remove(struct anon_vma_chain *av= c, + struct rb_root_cached *rb) { } =20 -static inline void uprobe_mmap(struct vm_area_struct *) +static inline void uprobe_mmap(struct vm_area_struct *vma) { } =20 static inline void uprobe_munmap(struct vm_area_struct *vma, unsigned long start, unsigned long end) { - (void)vma; - (void)start; - (void)end; } =20 -static inline void i_mmap_lock_write(struct address_space *) +static inline void i_mmap_lock_write(struct address_space *mapping) { } =20 -static inline void anon_vma_lock_write(struct anon_vma *) +static inline void anon_vma_lock_write(struct anon_vma *anon_vma) { } =20 -static inline void vma_assert_write_locked(struct vm_area_struct *) +static inline void vma_assert_write_locked(struct vm_area_struct *vma) { } =20 @@ -1013,16 +991,16 @@ static inline void unlink_anon_vmas(struct vm_area_s= truct *vma) vma->anon_vma->was_unlinked =3D true; } =20 -static inline void anon_vma_unlock_write(struct anon_vma *) +static inline void anon_vma_unlock_write(struct anon_vma *anon_vma) { } =20 -static inline void i_mmap_unlock_write(struct address_space *) +static inline void i_mmap_unlock_write(struct address_space *mapping) { } =20 -static inline void anon_vma_merge(struct vm_area_struct *, - struct vm_area_struct *) +static inline void anon_vma_merge(struct vm_area_struct *vma, + struct vm_area_struct *next) { } =20 @@ -1031,27 +1009,22 @@ static inline int userfaultfd_unmap_prep(struct vm_= area_struct *vma, unsigned long end, struct list_head *unmaps) { - (void)vma; - (void)start; - (void)end; - (void)unmaps; - return 0; } =20 -static inline void mmap_write_downgrade(struct mm_struct *) +static inline void mmap_write_downgrade(struct mm_struct *mm) { } =20 -static inline void mmap_read_unlock(struct mm_struct *) +static inline void mmap_read_unlock(struct mm_struct *mm) { } =20 -static inline void mmap_write_unlock(struct mm_struct *) +static inline void mmap_write_unlock(struct mm_struct *mm) { } =20 -static inline int mmap_write_lock_killable(struct mm_struct *) +static inline int mmap_write_lock_killable(struct mm_struct *mm) { return 0; } @@ -1060,10 +1033,6 @@ static inline bool can_modify_mm(struct mm_struct *m= m, unsigned long start, unsigned long end) { - (void)mm; - (void)start; - (void)end; - return true; } =20 @@ -1071,16 +1040,13 @@ static inline void arch_unmap(struct mm_struct *mm, unsigned long start, unsigned long end) { - (void)mm; - (void)start; - (void)end; } =20 -static inline void mmap_assert_locked(struct mm_struct *) +static inline void mmap_assert_locked(struct mm_struct *mm) { } =20 -static inline bool mpol_equal(struct mempolicy *, struct mempolicy *) +static inline bool mpol_equal(struct mempolicy *a, struct mempolicy *b) { return true; } @@ -1088,63 +1054,62 @@ static inline bool mpol_equal(struct mempolicy *, s= truct mempolicy *) static inline void 
khugepaged_enter_vma(struct vm_area_struct *vma, vm_flags_t vm_flags) { - (void)vma; - (void)vm_flags; } =20 -static inline bool mapping_can_writeback(struct address_space *) +static inline bool mapping_can_writeback(struct address_space *mapping) { return true; } =20 -static inline bool is_vm_hugetlb_page(struct vm_area_struct *) +static inline bool is_vm_hugetlb_page(struct vm_area_struct *vma) { return false; } =20 -static inline bool vma_soft_dirty_enabled(struct vm_area_struct *) +static inline bool vma_soft_dirty_enabled(struct vm_area_struct *vma) { return false; } =20 -static inline bool userfaultfd_wp(struct vm_area_struct *) +static inline bool userfaultfd_wp(struct vm_area_struct *vma) { return false; } =20 -static inline void mmap_assert_write_locked(struct mm_struct *) +static inline void mmap_assert_write_locked(struct mm_struct *mm) { } =20 -static inline void mutex_lock(struct mutex *) +static inline void mutex_lock(struct mutex *lock) { } =20 -static inline void mutex_unlock(struct mutex *) +static inline void mutex_unlock(struct mutex *lock) { } =20 -static inline bool mutex_is_locked(struct mutex *) +static inline bool mutex_is_locked(struct mutex *lock) { return true; } =20 -static inline bool signal_pending(void *) +static inline bool signal_pending(void *p) { return false; } =20 -static inline bool is_file_hugepages(struct file *) +static inline bool is_file_hugepages(struct file *file) { return false; } =20 -static inline int security_vm_enough_memory_mm(struct mm_struct *, long) +static inline int security_vm_enough_memory_mm(struct mm_struct *mm, long = pages) { return 0; } =20 -static inline bool may_expand_vm(struct mm_struct *, vm_flags_t, unsigned = long) +static inline bool may_expand_vm(struct mm_struct *mm, vm_flags_t flags, + unsigned long npages) { return true; } @@ -1169,7 +1134,7 @@ static inline void vm_flags_clear(struct vm_area_stru= ct *vma, vma->__vm_flags &=3D ~flags; } =20 -static inline int shmem_zero_setup(struct vm_area_struct *) +static inline int shmem_zero_setup(struct vm_area_struct *vma) { return 0; } @@ -1179,20 +1144,20 @@ static inline void vma_set_anonymous(struct vm_area= _struct *vma) vma->vm_ops =3D NULL; } =20 -static inline void ksm_add_vma(struct vm_area_struct *) +static inline void ksm_add_vma(struct vm_area_struct *vma) { } =20 -static inline void perf_event_mmap(struct vm_area_struct *) +static inline void perf_event_mmap(struct vm_area_struct *vma) { } =20 -static inline bool vma_is_dax(struct vm_area_struct *) +static inline bool vma_is_dax(struct vm_area_struct *vma) { return false; } =20 -static inline struct vm_area_struct *get_gate_vma(struct mm_struct *) +static inline struct vm_area_struct *get_gate_vma(struct mm_struct *mm) { return NULL; } @@ -1217,16 +1182,16 @@ static inline void vma_set_page_prot(struct vm_area= _struct *vma) WRITE_ONCE(vma->vm_page_prot, vm_page_prot); } =20 -static inline bool arch_validate_flags(vm_flags_t) +static inline bool arch_validate_flags(vm_flags_t flags) { return true; } =20 -static inline void vma_close(struct vm_area_struct *) +static inline void vma_close(struct vm_area_struct *vma) { } =20 -static inline int mmap_file(struct file *, struct vm_area_struct *) +static inline int mmap_file(struct file *file, struct vm_area_struct *vma) { return 0; } @@ -1388,8 +1353,6 @@ static inline int mapping_map_writable(struct address= _space *mapping) =20 static inline unsigned long move_page_tables(struct pagetable_move_control= *pmc) { - (void)pmc; - return 0; } =20 @@ -1397,51 +1360,36 @@ 
static inline void free_pgd_range(struct mmu_gather= *tlb, unsigned long addr, unsigned long end, unsigned long floor, unsigned long ceiling) { - (void)tlb; - (void)addr; - (void)end; - (void)floor; - (void)ceiling; } =20 static inline int ksm_execve(struct mm_struct *mm) { - (void)mm; - return 0; } =20 static inline void ksm_exit(struct mm_struct *mm) { - (void)mm; } =20 static inline void vma_lock_init(struct vm_area_struct *vma, bool reset_re= fcnt) { - (void)vma; - (void)reset_refcnt; } =20 static inline void vma_numab_state_init(struct vm_area_struct *vma) { - (void)vma; } =20 static inline void vma_numab_state_free(struct vm_area_struct *vma) { - (void)vma; } =20 static inline void dup_anon_vma_name(struct vm_area_struct *orig_vma, struct vm_area_struct *new_vma) { - (void)orig_vma; - (void)new_vma; } =20 static inline void free_anon_vma_name(struct vm_area_struct *vma) { - (void)vma; } =20 /* Declared in vma.h. */ @@ -1495,7 +1443,6 @@ static inline int vfs_mmap_prepare(struct file *file,= struct vm_area_desc *desc) =20 static inline void fixup_hugetlb_reservations(struct vm_area_struct *vma) { - (void)vma; } =20 static inline void vma_set_file(struct vm_area_struct *vma, struct file *f= ile) @@ -1506,13 +1453,13 @@ static inline void vma_set_file(struct vm_area_stru= ct *vma, struct file *file) fput(file); } =20 -static inline bool shmem_file(struct file *) +static inline bool shmem_file(struct file *file) { return false; } =20 -static inline vm_flags_t ksm_vma_flags(const struct mm_struct *, const str= uct file *, - vm_flags_t vm_flags) +static inline vm_flags_t ksm_vma_flags(const struct mm_struct *mm, + const struct file *file, vm_flags_t vm_flags) { return vm_flags; } --=20 2.51.0 From nobody Thu Oct 2 22:43:17 2025 Received: from smtp-out1.suse.de (smtp-out1.suse.de [195.135.223.130]) (using TLSv1.2 with cipher ECDHE-RSA-AES128-GCM-SHA256 (128/128 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id 8037B310624 for ; Wed, 10 Sep 2025 08:01:34 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; arc=none smtp.client-ip=195.135.223.130 ARC-Seal: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1757491296; cv=none; b=eqkXXJIE3MV0w6f1tt/Q1H3tAtABwo+SS3FlRZIZM0oWKfAlwHDS/5Ei3U82Z6Fdfgpk/j5Z6aFzeu78jPzm6awDqylarYTf8vC5h+uwp73gqUts2T5uW9yGENs2LgJswz44eNmXa2wFz6wscf31svxgGqZN6BRyyVeOxd3SIPs= ARC-Message-Signature: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1757491296; c=relaxed/simple; bh=g9cbuUAWUkFUAyf/4k3/s7/bNyOkiitIzQ05JCCKOuQ=; h=From:Date:Subject:MIME-Version:Content-Type:Message-Id:References: In-Reply-To:To:Cc; b=qopJHL6TMovKrEvTVbtzTv00HvQdaMBkbbyXe8F8JFs5xkQA8/m3bdDipdiKe1JAjM9zNCgH0Aa7W7RQrLGmcZ/jnLMPdqyR8F7ApKgMJ7oV00uUsenvH7yg+gad9TxQSwdwZlz6OTPTtsfUBn8Kf11sVO/mCsfOhrtw3Bjm8S8= ARC-Authentication-Results: i=1; smtp.subspace.kernel.org; dmarc=none (p=none dis=none) header.from=suse.cz; spf=pass smtp.mailfrom=suse.cz; dkim=pass (1024-bit key) header.d=suse.cz header.i=@suse.cz header.b=Eu1749Ij; dkim=permerror (0-bit key) header.d=suse.cz header.i=@suse.cz header.b=J3j1WwYY; dkim=pass (1024-bit key) header.d=suse.cz header.i=@suse.cz header.b=Eu1749Ij; dkim=permerror (0-bit key) header.d=suse.cz header.i=@suse.cz header.b=J3j1WwYY; arc=none smtp.client-ip=195.135.223.130 Authentication-Results: smtp.subspace.kernel.org; dmarc=none (p=none dis=none) header.from=suse.cz Authentication-Results: smtp.subspace.kernel.org; spf=pass smtp.mailfrom=suse.cz Authentication-Results: smtp.subspace.kernel.org; 
From: Vlastimil Babka
Date: Wed, 10 Sep 2025 10:01:13 +0200
Subject: [PATCH v8 11/23] maple_tree: Drop bulk insert support
Message-Id: <20250910-slub-percpu-caches-v8-11-ca3099d8352c@suse.cz>
References: <20250910-slub-percpu-caches-v8-0-ca3099d8352c@suse.cz>
In-Reply-To: <20250910-slub-percpu-caches-v8-0-ca3099d8352c@suse.cz>
To: Suren Baghdasaryan, "Liam R. Howlett", Christoph Lameter, David Rientjes
Cc: Roman Gushchin, Harry Yoo, Uladzislau Rezki, Sidhartha Kumar,
    linux-mm@kvack.org, linux-kernel@vger.kernel.org, rcu@vger.kernel.org,
    maple-tree@lists.infradead.org, vbabka@suse.cz

From: "Liam R. Howlett"

Bulk insert mode was added to make forking faster, but forking now uses
__mt_dup() to duplicate the tree. The addition of sheaves has made the
bulk allocations difficult to maintain, since the expected entries would
be preallocated into the maple state. A big part of the maple state node
allocation was the ability to push nodes back onto the state for later
use, which was essential to the bulk insert algorithm.

Remove the mas_expected_entries() and mas_destroy_rebalance() functions
as well as the MA_STATE_BULK and MA_STATE_REBALANCE maple state flags,
since there are no users anymore. Drop the associated testing as well.

Signed-off-by: Liam R.
Howlett Signed-off-by: Vlastimil Babka Reviewed-by: Suren Baghdasaryan --- lib/maple_tree.c | 270 +----------------------------------= ---- lib/test_maple_tree.c | 137 -------------------- tools/testing/radix-tree/maple.c | 36 ------ 3 files changed, 4 insertions(+), 439 deletions(-) diff --git a/lib/maple_tree.c b/lib/maple_tree.c index 38fb68c082915211c80f473d313159599fe97e2c..4f0e30b57b0cef9e5cf791f3f64= f5898752db402 100644 --- a/lib/maple_tree.c +++ b/lib/maple_tree.c @@ -83,13 +83,9 @@ =20 /* * Maple state flags - * * MA_STATE_BULK - Bulk insert mode - * * MA_STATE_REBALANCE - Indicate a rebalance during bulk insert * * MA_STATE_PREALLOC - Preallocated nodes, WARN_ON allocation */ -#define MA_STATE_BULK 1 -#define MA_STATE_REBALANCE 2 -#define MA_STATE_PREALLOC 4 +#define MA_STATE_PREALLOC 1 =20 #define ma_parent_ptr(x) ((struct maple_pnode *)(x)) #define mas_tree_parent(x) ((unsigned long)(x->tree) | MA_ROOT_PARENT) @@ -1031,24 +1027,6 @@ static inline void mas_descend(struct ma_state *mas) mas->node =3D mas_slot(mas, slots, mas->offset); } =20 -/* - * mte_set_gap() - Set a maple node gap. - * @mn: The encoded maple node - * @gap: The offset of the gap to set - * @val: The gap value - */ -static inline void mte_set_gap(const struct maple_enode *mn, - unsigned char gap, unsigned long val) -{ - switch (mte_node_type(mn)) { - default: - break; - case maple_arange_64: - mte_to_node(mn)->ma64.gap[gap] =3D val; - break; - } -} - /* * mas_ascend() - Walk up a level of the tree. * @mas: The maple state @@ -1878,21 +1856,7 @@ static inline int mab_calc_split(struct ma_state *ma= s, * end on a NULL entry, with the exception of the left-most leaf. The * limitation means that the split of a node must be checked for this con= dition * and be able to put more data in one direction or the other. - */ - if (unlikely((mas->mas_flags & MA_STATE_BULK))) { - *mid_split =3D 0; - split =3D b_end - mt_min_slots[bn->type]; - - if (!ma_is_leaf(bn->type)) - return split; - - mas->mas_flags |=3D MA_STATE_REBALANCE; - if (!bn->slot[split]) - split--; - return split; - } - - /* + * * Although extremely rare, it is possible to enter what is known as the = 3-way * split scenario. The 3-way split comes about by means of a store of a = range * that overwrites the end and beginning of two full nodes. The result i= s a set @@ -2039,27 +2003,6 @@ static inline void mab_mas_cp(struct maple_big_node = *b_node, } } =20 -/* - * mas_bulk_rebalance() - Rebalance the end of a tree after a bulk insert. - * @mas: The maple state - * @end: The maple node end - * @mt: The maple node type - */ -static inline void mas_bulk_rebalance(struct ma_state *mas, unsigned char = end, - enum maple_type mt) -{ - if (!(mas->mas_flags & MA_STATE_BULK)) - return; - - if (mte_is_root(mas->node)) - return; - - if (end > mt_min_slots[mt]) { - mas->mas_flags &=3D ~MA_STATE_REBALANCE; - return; - } -} - /* * mas_store_b_node() - Store an @entry into the b_node while also copying= the * data from a maple encoded node. 
@@ -2109,9 +2052,6 @@ static noinline_for_kasan void mas_store_b_node(struc= t ma_wr_state *wr_mas, /* Handle new range ending before old range ends */ piv =3D mas_safe_pivot(mas, wr_mas->pivots, offset_end, wr_mas->type); if (piv > mas->last) { - if (piv =3D=3D ULONG_MAX) - mas_bulk_rebalance(mas, b_node->b_end, wr_mas->type); - if (offset_end !=3D slot) wr_mas->content =3D mas_slot_locked(mas, wr_mas->slots, offset_end); @@ -3011,126 +2951,6 @@ static inline void mas_rebalance(struct ma_state *m= as, return mas_spanning_rebalance(mas, &mast, empty_count); } =20 -/* - * mas_destroy_rebalance() - Rebalance left-most node while destroying the= maple - * state. - * @mas: The maple state - * @end: The end of the left-most node. - * - * During a mass-insert event (such as forking), it may be necessary to - * rebalance the left-most node when it is not sufficient. - */ -static inline void mas_destroy_rebalance(struct ma_state *mas, unsigned ch= ar end) -{ - enum maple_type mt =3D mte_node_type(mas->node); - struct maple_node reuse, *newnode, *parent, *new_left, *left, *node; - struct maple_enode *eparent, *old_eparent; - unsigned char offset, tmp, split =3D mt_slots[mt] / 2; - void __rcu **l_slots, **slots; - unsigned long *l_pivs, *pivs, gap; - bool in_rcu =3D mt_in_rcu(mas->tree); - unsigned char new_height =3D mas_mt_height(mas); - - MA_STATE(l_mas, mas->tree, mas->index, mas->last); - - l_mas =3D *mas; - mas_prev_sibling(&l_mas); - - /* set up node. */ - if (in_rcu) { - newnode =3D mas_pop_node(mas); - } else { - newnode =3D &reuse; - } - - node =3D mas_mn(mas); - newnode->parent =3D node->parent; - slots =3D ma_slots(newnode, mt); - pivs =3D ma_pivots(newnode, mt); - left =3D mas_mn(&l_mas); - l_slots =3D ma_slots(left, mt); - l_pivs =3D ma_pivots(left, mt); - if (!l_slots[split]) - split++; - tmp =3D mas_data_end(&l_mas) - split; - - memcpy(slots, l_slots + split + 1, sizeof(void *) * tmp); - memcpy(pivs, l_pivs + split + 1, sizeof(unsigned long) * tmp); - pivs[tmp] =3D l_mas.max; - memcpy(slots + tmp, ma_slots(node, mt), sizeof(void *) * end); - memcpy(pivs + tmp, ma_pivots(node, mt), sizeof(unsigned long) * end); - - l_mas.max =3D l_pivs[split]; - mas->min =3D l_mas.max + 1; - old_eparent =3D mt_mk_node(mte_parent(l_mas.node), - mas_parent_type(&l_mas, l_mas.node)); - tmp +=3D end; - if (!in_rcu) { - unsigned char max_p =3D mt_pivots[mt]; - unsigned char max_s =3D mt_slots[mt]; - - if (tmp < max_p) - memset(pivs + tmp, 0, - sizeof(unsigned long) * (max_p - tmp)); - - if (tmp < mt_slots[mt]) - memset(slots + tmp, 0, sizeof(void *) * (max_s - tmp)); - - memcpy(node, newnode, sizeof(struct maple_node)); - ma_set_meta(node, mt, 0, tmp - 1); - mte_set_pivot(old_eparent, mte_parent_slot(l_mas.node), - l_pivs[split]); - - /* Remove data from l_pivs. */ - tmp =3D split + 1; - memset(l_pivs + tmp, 0, sizeof(unsigned long) * (max_p - tmp)); - memset(l_slots + tmp, 0, sizeof(void *) * (max_s - tmp)); - ma_set_meta(left, mt, 0, split); - eparent =3D old_eparent; - - goto done; - } - - /* RCU requires replacing both l_mas, mas, and parent. 
*/ - mas->node =3D mt_mk_node(newnode, mt); - ma_set_meta(newnode, mt, 0, tmp); - - new_left =3D mas_pop_node(mas); - new_left->parent =3D left->parent; - mt =3D mte_node_type(l_mas.node); - slots =3D ma_slots(new_left, mt); - pivs =3D ma_pivots(new_left, mt); - memcpy(slots, l_slots, sizeof(void *) * split); - memcpy(pivs, l_pivs, sizeof(unsigned long) * split); - ma_set_meta(new_left, mt, 0, split); - l_mas.node =3D mt_mk_node(new_left, mt); - - /* replace parent. */ - offset =3D mte_parent_slot(mas->node); - mt =3D mas_parent_type(&l_mas, l_mas.node); - parent =3D mas_pop_node(mas); - slots =3D ma_slots(parent, mt); - pivs =3D ma_pivots(parent, mt); - memcpy(parent, mte_to_node(old_eparent), sizeof(struct maple_node)); - rcu_assign_pointer(slots[offset], mas->node); - rcu_assign_pointer(slots[offset - 1], l_mas.node); - pivs[offset - 1] =3D l_mas.max; - eparent =3D mt_mk_node(parent, mt); -done: - gap =3D mas_leaf_max_gap(mas); - mte_set_gap(eparent, mte_parent_slot(mas->node), gap); - gap =3D mas_leaf_max_gap(&l_mas); - mte_set_gap(eparent, mte_parent_slot(l_mas.node), gap); - mas_ascend(mas); - - if (in_rcu) { - mas_replace_node(mas, old_eparent, new_height); - mas_adopt_children(mas, mas->node); - } - - mas_update_gap(mas); -} - /* * mas_split_final_node() - Split the final node in a subtree operation. * @mast: the maple subtree state @@ -3837,8 +3657,6 @@ static inline void mas_wr_node_store(struct ma_wr_sta= te *wr_mas, =20 if (mas->last =3D=3D wr_mas->end_piv) offset_end++; /* don't copy this offset */ - else if (unlikely(wr_mas->r_max =3D=3D ULONG_MAX)) - mas_bulk_rebalance(mas, mas->end, wr_mas->type); =20 /* set up node. */ if (in_rcu) { @@ -4255,7 +4073,7 @@ static inline enum store_type mas_wr_store_type(struc= t ma_wr_state *wr_mas) new_end =3D mas_wr_new_end(wr_mas); /* Potential spanning rebalance collapsing a node */ if (new_end < mt_min_slots[wr_mas->type]) { - if (!mte_is_root(mas->node) && !(mas->mas_flags & MA_STATE_BULK)) + if (!mte_is_root(mas->node)) return wr_rebalance; return wr_node_store; } @@ -5562,25 +5380,7 @@ void mas_destroy(struct ma_state *mas) struct maple_alloc *node; unsigned long total; =20 - /* - * When using mas_for_each() to insert an expected number of elements, - * it is possible that the number inserted is less than the expected - * number. To fix an invalid final node, a check is performed here to - * rebalance the previous node with the final node. - */ - if (mas->mas_flags & MA_STATE_REBALANCE) { - unsigned char end; - if (mas_is_err(mas)) - mas_reset(mas); - mas_start(mas); - mtree_range_walk(mas); - end =3D mas->end + 1; - if (end < mt_min_slot_count(mas->node) - 1) - mas_destroy_rebalance(mas, end); - - mas->mas_flags &=3D ~MA_STATE_REBALANCE; - } - mas->mas_flags &=3D ~(MA_STATE_BULK|MA_STATE_PREALLOC); + mas->mas_flags &=3D ~MA_STATE_PREALLOC; =20 total =3D mas_allocated(mas); while (total) { @@ -5600,68 +5400,6 @@ void mas_destroy(struct ma_state *mas) } EXPORT_SYMBOL_GPL(mas_destroy); =20 -/* - * mas_expected_entries() - Set the expected number of entries that will b= e inserted. - * @mas: The maple state - * @nr_entries: The number of expected entries. - * - * This will attempt to pre-allocate enough nodes to store the expected nu= mber - * of entries. The allocations will occur using the bulk allocator interf= ace - * for speed. Please call mas_destroy() on the @mas after inserting the e= ntries - * to ensure any unused nodes are freed. - * - * Return: 0 on success, -ENOMEM if memory could not be allocated. 
- */ -int mas_expected_entries(struct ma_state *mas, unsigned long nr_entries) -{ - int nonleaf_cap =3D MAPLE_ARANGE64_SLOTS - 2; - struct maple_enode *enode =3D mas->node; - int nr_nodes; - int ret; - - /* - * Sometimes it is necessary to duplicate a tree to a new tree, such as - * forking a process and duplicating the VMAs from one tree to a new - * tree. When such a situation arises, it is known that the new tree is - * not going to be used until the entire tree is populated. For - * performance reasons, it is best to use a bulk load with RCU disabled. - * This allows for optimistic splitting that favours the left and reuse - * of nodes during the operation. - */ - - /* Optimize splitting for bulk insert in-order */ - mas->mas_flags |=3D MA_STATE_BULK; - - /* - * Avoid overflow, assume a gap between each entry and a trailing null. - * If this is wrong, it just means allocation can happen during - * insertion of entries. - */ - nr_nodes =3D max(nr_entries, nr_entries * 2 + 1); - if (!mt_is_alloc(mas->tree)) - nonleaf_cap =3D MAPLE_RANGE64_SLOTS - 2; - - /* Leaves; reduce slots to keep space for expansion */ - nr_nodes =3D DIV_ROUND_UP(nr_nodes, MAPLE_RANGE64_SLOTS - 2); - /* Internal nodes */ - nr_nodes +=3D DIV_ROUND_UP(nr_nodes, nonleaf_cap); - /* Add working room for split (2 nodes) + new parents */ - mas_node_count_gfp(mas, nr_nodes + 3, GFP_KERNEL); - - /* Detect if allocations run out */ - mas->mas_flags |=3D MA_STATE_PREALLOC; - - if (!mas_is_err(mas)) - return 0; - - ret =3D xa_err(mas->node); - mas->node =3D enode; - mas_destroy(mas); - return ret; - -} -EXPORT_SYMBOL_GPL(mas_expected_entries); - static void mas_may_activate(struct ma_state *mas) { if (!mas->node) { diff --git a/lib/test_maple_tree.c b/lib/test_maple_tree.c index cb3936595b0d56a9682ff100eba54693a1427829..14fbbee32046a13d54d60dcac2b= 45be2bd190ac4 100644 --- a/lib/test_maple_tree.c +++ b/lib/test_maple_tree.c @@ -2746,139 +2746,6 @@ static noinline void __init check_fuzzer(struct map= le_tree *mt) mtree_test_erase(mt, ULONG_MAX - 10); } =20 -/* duplicate the tree with a specific gap */ -static noinline void __init check_dup_gaps(struct maple_tree *mt, - unsigned long nr_entries, bool zero_start, - unsigned long gap) -{ - unsigned long i =3D 0; - struct maple_tree newmt; - int ret; - void *tmp; - MA_STATE(mas, mt, 0, 0); - MA_STATE(newmas, &newmt, 0, 0); - struct rw_semaphore newmt_lock; - - init_rwsem(&newmt_lock); - mt_set_external_lock(&newmt, &newmt_lock); - - if (!zero_start) - i =3D 1; - - mt_zero_nr_tallocated(); - for (; i <=3D nr_entries; i++) - mtree_store_range(mt, i*10, (i+1)*10 - gap, - xa_mk_value(i), GFP_KERNEL); - - mt_init_flags(&newmt, MT_FLAGS_ALLOC_RANGE | MT_FLAGS_LOCK_EXTERN); - mt_set_non_kernel(99999); - down_write(&newmt_lock); - ret =3D mas_expected_entries(&newmas, nr_entries); - mt_set_non_kernel(0); - MT_BUG_ON(mt, ret !=3D 0); - - rcu_read_lock(); - mas_for_each(&mas, tmp, ULONG_MAX) { - newmas.index =3D mas.index; - newmas.last =3D mas.last; - mas_store(&newmas, tmp); - } - rcu_read_unlock(); - mas_destroy(&newmas); - - __mt_destroy(&newmt); - up_write(&newmt_lock); -} - -/* Duplicate many sizes of trees. 
Mainly to test expected entry values */ -static noinline void __init check_dup(struct maple_tree *mt) -{ - int i; - int big_start =3D 100010; - - /* Check with a value at zero */ - for (i =3D 10; i < 1000; i++) { - mt_init_flags(mt, MT_FLAGS_ALLOC_RANGE); - check_dup_gaps(mt, i, true, 5); - mtree_destroy(mt); - rcu_barrier(); - } - - cond_resched(); - mt_cache_shrink(); - /* Check with a value at zero, no gap */ - for (i =3D 1000; i < 2000; i++) { - mt_init_flags(mt, MT_FLAGS_ALLOC_RANGE); - check_dup_gaps(mt, i, true, 0); - mtree_destroy(mt); - rcu_barrier(); - } - - cond_resched(); - mt_cache_shrink(); - /* Check with a value at zero and unreasonably large */ - for (i =3D big_start; i < big_start + 10; i++) { - mt_init_flags(mt, MT_FLAGS_ALLOC_RANGE); - check_dup_gaps(mt, i, true, 5); - mtree_destroy(mt); - rcu_barrier(); - } - - cond_resched(); - mt_cache_shrink(); - /* Small to medium size not starting at zero*/ - for (i =3D 200; i < 1000; i++) { - mt_init_flags(mt, MT_FLAGS_ALLOC_RANGE); - check_dup_gaps(mt, i, false, 5); - mtree_destroy(mt); - rcu_barrier(); - } - - cond_resched(); - mt_cache_shrink(); - /* Unreasonably large not starting at zero*/ - for (i =3D big_start; i < big_start + 10; i++) { - mt_init_flags(mt, MT_FLAGS_ALLOC_RANGE); - check_dup_gaps(mt, i, false, 5); - mtree_destroy(mt); - rcu_barrier(); - cond_resched(); - mt_cache_shrink(); - } - - /* Check non-allocation tree not starting at zero */ - for (i =3D 1500; i < 3000; i++) { - mt_init_flags(mt, 0); - check_dup_gaps(mt, i, false, 5); - mtree_destroy(mt); - rcu_barrier(); - cond_resched(); - if (i % 2 =3D=3D 0) - mt_cache_shrink(); - } - - mt_cache_shrink(); - /* Check non-allocation tree starting at zero */ - for (i =3D 200; i < 1000; i++) { - mt_init_flags(mt, 0); - check_dup_gaps(mt, i, true, 5); - mtree_destroy(mt); - rcu_barrier(); - cond_resched(); - } - - mt_cache_shrink(); - /* Unreasonably large */ - for (i =3D big_start + 5; i < big_start + 10; i++) { - mt_init_flags(mt, 0); - check_dup_gaps(mt, i, true, 5); - mtree_destroy(mt); - rcu_barrier(); - mt_cache_shrink(); - cond_resched(); - } -} - static noinline void __init check_bnode_min_spanning(struct maple_tree *mt) { int i =3D 50; @@ -4077,10 +3944,6 @@ static int __init maple_tree_seed(void) check_fuzzer(&tree); mtree_destroy(&tree); =20 - mt_init_flags(&tree, MT_FLAGS_ALLOC_RANGE); - check_dup(&tree); - mtree_destroy(&tree); - mt_init_flags(&tree, MT_FLAGS_ALLOC_RANGE); check_bnode_min_spanning(&tree); mtree_destroy(&tree); diff --git a/tools/testing/radix-tree/maple.c b/tools/testing/radix-tree/ma= ple.c index 172700fb7784d29f9403003b4484a5ebd7aa316b..c0543060dae2510477963331fb0= ccdffd78ea965 100644 --- a/tools/testing/radix-tree/maple.c +++ b/tools/testing/radix-tree/maple.c @@ -35455,17 +35455,6 @@ static void check_dfs_preorder(struct maple_tree *= mt) MT_BUG_ON(mt, count !=3D e); mtree_destroy(mt); =20 - mt_init_flags(mt, MT_FLAGS_ALLOC_RANGE); - mas_reset(&mas); - mt_zero_nr_tallocated(); - mt_set_non_kernel(200); - mas_expected_entries(&mas, max); - for (count =3D 0; count <=3D max; count++) { - mas.index =3D mas.last =3D count; - mas_store(&mas, xa_mk_value(count)); - MT_BUG_ON(mt, mas_is_err(&mas)); - } - mas_destroy(&mas); rcu_barrier(); /* * pr_info(" ->seq test of 0-%lu %luK in %d active (%d total)\n", @@ -36454,27 +36443,6 @@ static inline int check_vma_modification(struct ma= ple_tree *mt) return 0; } =20 -/* - * test to check that bulk stores do not use wr_rebalance as the store - * type. 
- */ -static inline void check_bulk_rebalance(struct maple_tree *mt) -{ - MA_STATE(mas, mt, ULONG_MAX, ULONG_MAX); - int max =3D 10; - - build_full_tree(mt, 0, 2); - - /* erase every entry in the tree */ - do { - /* set up bulk store mode */ - mas_expected_entries(&mas, max); - mas_erase(&mas); - MT_BUG_ON(mt, mas.store_type =3D=3D wr_rebalance); - } while (mas_prev(&mas, 0) !=3D NULL); - - mas_destroy(&mas); -} =20 void farmer_tests(void) { @@ -36487,10 +36455,6 @@ void farmer_tests(void) check_vma_modification(&tree); mtree_destroy(&tree); =20 - mt_init(&tree); - check_bulk_rebalance(&tree); - mtree_destroy(&tree); - tree.ma_root =3D xa_mk_value(0); mt_dump(&tree, mt_dump_dec); =20 --=20 2.51.0 From nobody Thu Oct 2 22:43:17 2025 Received: from smtp-out2.suse.de (smtp-out2.suse.de [195.135.223.131]) (using TLSv1.2 with cipher ECDHE-RSA-AES128-GCM-SHA256 (128/128 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id D220530F7E8 for ; Wed, 10 Sep 2025 08:01:15 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; arc=none smtp.client-ip=195.135.223.131 ARC-Seal: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1757491277; cv=none; b=TxFhGg2cz/12bzDtd1TKadu/MJTTSEbOkFarfPpMOPwESqER1oMtVxrUMrDx/aK46+ofj5m+3OOFjuPDB9xkJotwt04hfG3ehAyy0PORIY9qV0/2BkCRxgvrz+i7Gw8TFNuY92eMjM9BW//POllxI87oga5+1FLJ5OMhOcMAh6c= ARC-Message-Signature: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1757491277; c=relaxed/simple; bh=qXPM1Ew8/JntXWuqVpN7a+OvhDZNQagk5/amz0MEe40=; h=From:Date:Subject:MIME-Version:Content-Type:Message-Id:References: In-Reply-To:To:Cc; b=nRaqPQsQJLuBJkNbcD07S+9PfQyHWZFLgnxb22RPGOv0jGQrQ+imA3avw8/b/WNe17w359+vueViIfbd0D8FBoqVMOykKI0VVNEXAnRWeDcddh+mVJ2N5zkNnaulJsx1uB8zW/FD0dR7dC1QMDqdERMKOp1mG1dfw/VyWq4W+eU= ARC-Authentication-Results: i=1; smtp.subspace.kernel.org; dmarc=none (p=none dis=none) header.from=suse.cz; spf=pass smtp.mailfrom=suse.cz; dkim=pass (1024-bit key) header.d=suse.cz header.i=@suse.cz header.b=hn2urHv2; dkim=permerror (0-bit key) header.d=suse.cz header.i=@suse.cz header.b=hH2gs4l0; dkim=pass (1024-bit key) header.d=suse.cz header.i=@suse.cz header.b=hn2urHv2; dkim=permerror (0-bit key) header.d=suse.cz header.i=@suse.cz header.b=hH2gs4l0; arc=none smtp.client-ip=195.135.223.131 Authentication-Results: smtp.subspace.kernel.org; dmarc=none (p=none dis=none) header.from=suse.cz Authentication-Results: smtp.subspace.kernel.org; spf=pass smtp.mailfrom=suse.cz Authentication-Results: smtp.subspace.kernel.org; dkim=pass (1024-bit key) header.d=suse.cz header.i=@suse.cz header.b="hn2urHv2"; dkim=permerror (0-bit key) header.d=suse.cz header.i=@suse.cz header.b="hH2gs4l0"; dkim=pass (1024-bit key) header.d=suse.cz header.i=@suse.cz header.b="hn2urHv2"; dkim=permerror (0-bit key) header.d=suse.cz header.i=@suse.cz header.b="hH2gs4l0" Received: from imap1.dmz-prg2.suse.org (imap1.dmz-prg2.suse.org [IPv6:2a07:de40:b281:104:10:150:64:97]) (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits) key-exchange X25519 server-signature RSA-PSS (4096 bits) server-digest SHA256) (No client certificate requested) by smtp-out2.suse.de (Postfix) with ESMTPS id 4159D5CAF3; Wed, 10 Sep 2025 08:01:07 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.cz; s=susede2_rsa; t=1757491267; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc: mime-version:mime-version:content-type:content-type: content-transfer-encoding:content-transfer-encoding: 
in-reply-to:in-reply-to:references:references; bh=rtSF476Ds9zbd6MrnqNt5+BMzWFkMQ3IyLGQJMUX73s=; b=hn2urHv2OiGDeG2e77bSB5GER9p4pdJ++ttQNeukFcxzmDTFL1R94/ETAf5iNKiSBwR+6x ZYkrNBrQDTlWBvyTDBzQhNjSR/WOnFyoSj0iyiuRMnsb1wIvmEfvLFUcAYSWx8KUF0+mEr SnI+RpORYz5Ma98hX+6mHa/6FVAo+PY= DKIM-Signature: v=1; a=ed25519-sha256; c=relaxed/relaxed; d=suse.cz; s=susede2_ed25519; t=1757491267; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc: mime-version:mime-version:content-type:content-type: content-transfer-encoding:content-transfer-encoding: in-reply-to:in-reply-to:references:references; bh=rtSF476Ds9zbd6MrnqNt5+BMzWFkMQ3IyLGQJMUX73s=; b=hH2gs4l0wjzny7y+HOMHTr+laUPULtcUp47P7if3WBPoc92+bHFc6mQ32QvuGbOkqQQk8a 9cZ0vzGBTGKnyMBA== Authentication-Results: smtp-out2.suse.de; dkim=pass header.d=suse.cz header.s=susede2_rsa header.b=hn2urHv2; dkim=pass header.d=suse.cz header.s=susede2_ed25519 header.b=hH2gs4l0 DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.cz; s=susede2_rsa; t=1757491267; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc: mime-version:mime-version:content-type:content-type: content-transfer-encoding:content-transfer-encoding: in-reply-to:in-reply-to:references:references; bh=rtSF476Ds9zbd6MrnqNt5+BMzWFkMQ3IyLGQJMUX73s=; b=hn2urHv2OiGDeG2e77bSB5GER9p4pdJ++ttQNeukFcxzmDTFL1R94/ETAf5iNKiSBwR+6x ZYkrNBrQDTlWBvyTDBzQhNjSR/WOnFyoSj0iyiuRMnsb1wIvmEfvLFUcAYSWx8KUF0+mEr SnI+RpORYz5Ma98hX+6mHa/6FVAo+PY= DKIM-Signature: v=1; a=ed25519-sha256; c=relaxed/relaxed; d=suse.cz; s=susede2_ed25519; t=1757491267; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc: mime-version:mime-version:content-type:content-type: content-transfer-encoding:content-transfer-encoding: in-reply-to:in-reply-to:references:references; bh=rtSF476Ds9zbd6MrnqNt5+BMzWFkMQ3IyLGQJMUX73s=; b=hH2gs4l0wjzny7y+HOMHTr+laUPULtcUp47P7if3WBPoc92+bHFc6mQ32QvuGbOkqQQk8a 9cZ0vzGBTGKnyMBA== Received: from imap1.dmz-prg2.suse.org (localhost [127.0.0.1]) (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits) key-exchange X25519 server-signature RSA-PSS (4096 bits) server-digest SHA256) (No client certificate requested) by imap1.dmz-prg2.suse.org (Postfix) with ESMTPS id AE4CF13B02; Wed, 10 Sep 2025 08:01:06 +0000 (UTC) Received: from dovecot-director2.suse.de ([2a07:de40:b281:106:10:150:64:167]) by imap1.dmz-prg2.suse.org with ESMTPSA id 2A5JKkIwwWgGJAAAD6G6ig (envelope-from ); Wed, 10 Sep 2025 08:01:06 +0000 From: Vlastimil Babka Date: Wed, 10 Sep 2025 10:01:14 +0200 Subject: [PATCH v8 12/23] tools/testing/vma: Implement vm_refcnt reset Precedence: bulk X-Mailing-List: linux-kernel@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 Content-Type: text/plain; charset="utf-8" Content-Transfer-Encoding: quoted-printable Message-Id: <20250910-slub-percpu-caches-v8-12-ca3099d8352c@suse.cz> References: <20250910-slub-percpu-caches-v8-0-ca3099d8352c@suse.cz> In-Reply-To: <20250910-slub-percpu-caches-v8-0-ca3099d8352c@suse.cz> To: Suren Baghdasaryan , "Liam R. 
Howlett" , Christoph Lameter , David Rientjes Cc: Roman Gushchin , Harry Yoo , Uladzislau Rezki , Sidhartha Kumar , linux-mm@kvack.org, linux-kernel@vger.kernel.org, rcu@vger.kernel.org, maple-tree@lists.infradead.org, vbabka@suse.cz X-Mailer: b4 0.14.2 X-Spam-Level: X-Spam-Flag: NO X-Rspamd-Queue-Id: 4159D5CAF3 X-Rspamd-Action: no action X-Rspamd-Server: rspamd1.dmz-prg2.suse.org X-Spamd-Result: default: False [-4.51 / 50.00]; BAYES_HAM(-3.00)[100.00%]; NEURAL_HAM_LONG(-1.00)[-1.000]; R_DKIM_ALLOW(-0.20)[suse.cz:s=susede2_rsa,suse.cz:s=susede2_ed25519]; NEURAL_HAM_SHORT(-0.20)[-1.000]; MIME_GOOD(-0.10)[text/plain]; MX_GOOD(-0.01)[]; DKIM_SIGNED(0.00)[suse.cz:s=susede2_rsa,suse.cz:s=susede2_ed25519]; RBL_SPAMHAUS_BLOCKED_OPENRESOLVER(0.00)[2a07:de40:b281:104:10:150:64:97:from]; FUZZY_RATELIMITED(0.00)[rspamd.com]; ARC_NA(0.00)[]; RCPT_COUNT_TWELVE(0.00)[13]; TO_MATCH_ENVRCPT_ALL(0.00)[]; MIME_TRACE(0.00)[0:+]; SPAMHAUS_XBL(0.00)[2a07:de40:b281:104:10:150:64:97:from]; FREEMAIL_ENVRCPT(0.00)[gmail.com]; RCVD_TLS_ALL(0.00)[]; TO_DN_SOME(0.00)[]; RCVD_COUNT_TWO(0.00)[2]; DNSWL_BLOCKED(0.00)[2a07:de40:b281:104:10:150:64:97:from,2a07:de40:b281:106:10:150:64:167:received]; FROM_EQ_ENVFROM(0.00)[]; FROM_HAS_DN(0.00)[]; FREEMAIL_CC(0.00)[linux.dev,oracle.com,gmail.com,kvack.org,vger.kernel.org,lists.infradead.org,suse.cz]; MID_RHS_MATCH_FROM(0.00)[]; RCVD_VIA_SMTP_AUTH(0.00)[]; RECEIVED_SPAMHAUS_BLOCKED_OPENRESOLVER(0.00)[2a07:de40:b281:106:10:150:64:167:received]; DKIM_TRACE(0.00)[suse.cz:+]; R_RATELIMIT(0.00)[to_ip_from(RLfsjnp7neds983g95ihcnuzgq)]; DBL_BLOCKED_OPENRESOLVER(0.00)[suse.cz:dkim,suse.cz:mid,suse.cz:email] X-Spam-Score: -4.51 From: "Liam R. Howlett" Add the reset of the ref count in vma_lock_init(). This is needed if the vma memory is not zeroed on allocation. Signed-off-by: Liam R. 
Howlett Signed-off-by: Vlastimil Babka Reviewed-by: Suren Baghdasaryan --- tools/testing/vma/vma_internal.h | 2 ++ 1 file changed, 2 insertions(+) diff --git a/tools/testing/vma/vma_internal.h b/tools/testing/vma/vma_inter= nal.h index f8cf5b184d5b51dd627ff440943a7af3c549f482..6b6e2b05918c9f95b537f26e20a= 943b34082825a 100644 --- a/tools/testing/vma/vma_internal.h +++ b/tools/testing/vma/vma_internal.h @@ -1373,6 +1373,8 @@ static inline void ksm_exit(struct mm_struct *mm) =20 static inline void vma_lock_init(struct vm_area_struct *vma, bool reset_re= fcnt) { + if (reset_refcnt) + refcount_set(&vma->vm_refcnt, 0); } =20 static inline void vma_numab_state_init(struct vm_area_struct *vma) --=20 2.51.0 From nobody Thu Oct 2 22:43:17 2025 Received: from smtp-out1.suse.de (smtp-out1.suse.de [195.135.223.130]) (using TLSv1.2 with cipher ECDHE-RSA-AES128-GCM-SHA256 (128/128 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id A2FDE313522 for ; Wed, 10 Sep 2025 08:01:53 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; arc=none smtp.client-ip=195.135.223.130 ARC-Seal: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1757491315; cv=none; b=tMUK//+ZLgIVVpReJakpQBkrWeaBP4lB8jfaQxu+B+k7e7fNpN/jSglxWkS5luYAKbuX8fm0ceqFR/iRVOAarEYHdSJQwpwuFH/5FOZUqacU3gZOPT7aSahEfoA+7A2wkJnKUxULa2n9dfy8FhVt3vhYovH7zrOofKQA5pMtE2E= ARC-Message-Signature: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1757491315; c=relaxed/simple; bh=hZaFSXsWtqvmhZwYXzk91cHXjtM7PbbmIwGUEo5RuT8=; h=From:Date:Subject:MIME-Version:Content-Type:Message-Id:References: In-Reply-To:To:Cc; b=X8bIdOe9Fz7G9kobRbY1Tzv9qpz3ibmjq2vwQXHuneSahsW/xmRxB8ddHQR7upAEHZKgBZIyIlK2EKBmKtJnPlkoGM/BfPJzpipyJ2d8D0vwMBB1TnFu+rtFI7Sik1P60+Lz7X2uJWBGlyrAqlYtvUR04OZDxdUHAkYgrQgNCeM= ARC-Authentication-Results: i=1; smtp.subspace.kernel.org; dmarc=none (p=none dis=none) header.from=suse.cz; spf=pass smtp.mailfrom=suse.cz; dkim=pass (1024-bit key) header.d=suse.cz header.i=@suse.cz header.b=pm/ZH8ui; dkim=permerror (0-bit key) header.d=suse.cz header.i=@suse.cz header.b=39y++7wt; dkim=pass (1024-bit key) header.d=suse.cz header.i=@suse.cz header.b=pm/ZH8ui; dkim=permerror (0-bit key) header.d=suse.cz header.i=@suse.cz header.b=39y++7wt; arc=none smtp.client-ip=195.135.223.130 Authentication-Results: smtp.subspace.kernel.org; dmarc=none (p=none dis=none) header.from=suse.cz Authentication-Results: smtp.subspace.kernel.org; spf=pass smtp.mailfrom=suse.cz Authentication-Results: smtp.subspace.kernel.org; dkim=pass (1024-bit key) header.d=suse.cz header.i=@suse.cz header.b="pm/ZH8ui"; dkim=permerror (0-bit key) header.d=suse.cz header.i=@suse.cz header.b="39y++7wt"; dkim=pass (1024-bit key) header.d=suse.cz header.i=@suse.cz header.b="pm/ZH8ui"; dkim=permerror (0-bit key) header.d=suse.cz header.i=@suse.cz header.b="39y++7wt" Received: from imap1.dmz-prg2.suse.org (unknown [10.150.64.97]) (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits) key-exchange X25519 server-signature RSA-PSS (4096 bits) server-digest SHA256) (No client certificate requested) by smtp-out1.suse.de (Postfix) with ESMTPS id 4491134C4C; Wed, 10 Sep 2025 08:01:07 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.cz; s=susede2_rsa; t=1757491267; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc: mime-version:mime-version:content-type:content-type: content-transfer-encoding:content-transfer-encoding: in-reply-to:in-reply-to:references:references; 
bh=svSQV11Yo8FARaPQKORg7oH4qaytNWNITJXkEtgw9mY=; b=pm/ZH8uikXCX3oNciw6mLvEfYz3eXVMZkk0o3oj36hkG6+TJDhwQ9GUbRNwLctIeWsexjz 1U1B14wupcf1KhBG+Or+1p6PRHQdP3VUsVt+1X0pFmw/7TBApQLUAZnvAD4LcOgQxgSfjj 1mmkcditv/002iyOZkGavHiYoQS54oQ= DKIM-Signature: v=1; a=ed25519-sha256; c=relaxed/relaxed; d=suse.cz; s=susede2_ed25519; t=1757491267; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc: mime-version:mime-version:content-type:content-type: content-transfer-encoding:content-transfer-encoding: in-reply-to:in-reply-to:references:references; bh=svSQV11Yo8FARaPQKORg7oH4qaytNWNITJXkEtgw9mY=; b=39y++7wtiRDAmGju397ZjgvNgru4MbxLvr7tiW+mKsihSPL+BSoc5Xms2tVazFPF9+0EGG WG7plpUVmedNWQBQ== Authentication-Results: smtp-out1.suse.de; none DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.cz; s=susede2_rsa; t=1757491267; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc: mime-version:mime-version:content-type:content-type: content-transfer-encoding:content-transfer-encoding: in-reply-to:in-reply-to:references:references; bh=svSQV11Yo8FARaPQKORg7oH4qaytNWNITJXkEtgw9mY=; b=pm/ZH8uikXCX3oNciw6mLvEfYz3eXVMZkk0o3oj36hkG6+TJDhwQ9GUbRNwLctIeWsexjz 1U1B14wupcf1KhBG+Or+1p6PRHQdP3VUsVt+1X0pFmw/7TBApQLUAZnvAD4LcOgQxgSfjj 1mmkcditv/002iyOZkGavHiYoQS54oQ= DKIM-Signature: v=1; a=ed25519-sha256; c=relaxed/relaxed; d=suse.cz; s=susede2_ed25519; t=1757491267; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc: mime-version:mime-version:content-type:content-type: content-transfer-encoding:content-transfer-encoding: in-reply-to:in-reply-to:references:references; bh=svSQV11Yo8FARaPQKORg7oH4qaytNWNITJXkEtgw9mY=; b=39y++7wtiRDAmGju397ZjgvNgru4MbxLvr7tiW+mKsihSPL+BSoc5Xms2tVazFPF9+0EGG WG7plpUVmedNWQBQ== Received: from imap1.dmz-prg2.suse.org (localhost [127.0.0.1]) (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits) key-exchange X25519 server-signature RSA-PSS (4096 bits) server-digest SHA256) (No client certificate requested) by imap1.dmz-prg2.suse.org (Postfix) with ESMTPS id C247613B03; Wed, 10 Sep 2025 08:01:06 +0000 (UTC) Received: from dovecot-director2.suse.de ([2a07:de40:b281:106:10:150:64:167]) by imap1.dmz-prg2.suse.org with ESMTPSA id iOAmL0IwwWgGJAAAD6G6ig (envelope-from ); Wed, 10 Sep 2025 08:01:06 +0000 From: Vlastimil Babka Date: Wed, 10 Sep 2025 10:01:15 +0200 Subject: [PATCH v8 13/23] tools/testing: Add support for changes to slab for sheaves Precedence: bulk X-Mailing-List: linux-kernel@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 Content-Type: text/plain; charset="utf-8" Content-Transfer-Encoding: quoted-printable Message-Id: <20250910-slub-percpu-caches-v8-13-ca3099d8352c@suse.cz> References: <20250910-slub-percpu-caches-v8-0-ca3099d8352c@suse.cz> In-Reply-To: <20250910-slub-percpu-caches-v8-0-ca3099d8352c@suse.cz> To: Suren Baghdasaryan , "Liam R. Howlett" , Christoph Lameter , David Rientjes Cc: Roman Gushchin , Harry Yoo , Uladzislau Rezki , Sidhartha Kumar , linux-mm@kvack.org, linux-kernel@vger.kernel.org, rcu@vger.kernel.org, maple-tree@lists.infradead.org, vbabka@suse.cz, "Liam R. 
Howlett" X-Mailer: b4 0.14.2 X-Spamd-Result: default: False [-4.30 / 50.00]; BAYES_HAM(-3.00)[100.00%]; NEURAL_HAM_LONG(-1.00)[-1.000]; NEURAL_HAM_SHORT(-0.20)[-0.999]; MIME_GOOD(-0.10)[text/plain]; RCVD_VIA_SMTP_AUTH(0.00)[]; ARC_NA(0.00)[]; FREEMAIL_ENVRCPT(0.00)[gmail.com]; MIME_TRACE(0.00)[0:+]; FUZZY_RATELIMITED(0.00)[rspamd.com]; TO_DN_SOME(0.00)[]; RCPT_COUNT_TWELVE(0.00)[14]; RCVD_TLS_ALL(0.00)[]; MID_RHS_MATCH_FROM(0.00)[]; DKIM_SIGNED(0.00)[suse.cz:s=susede2_rsa,suse.cz:s=susede2_ed25519]; FROM_HAS_DN(0.00)[]; FREEMAIL_CC(0.00)[linux.dev,oracle.com,gmail.com,kvack.org,vger.kernel.org,lists.infradead.org,suse.cz,Oracle.com]; R_RATELIMIT(0.00)[to_ip_from(RLwn5r54y1cp81no5tmbbew5oc)]; FROM_EQ_ENVFROM(0.00)[]; TO_MATCH_ENVRCPT_ALL(0.00)[]; RCVD_COUNT_TWO(0.00)[2]; DBL_BLOCKED_OPENRESOLVER(0.00)[suse.cz:mid,suse.cz:email] X-Spam-Flag: NO X-Spam-Level: X-Spam-Score: -4.30 From: "Liam R. Howlett" The slab changes for sheaves requires more effort in the testing code. Unite all the kmem_cache work into the tools/include slab header for both the vma and maple tree testing. The vma test code also requires importing more #defines to allow for seamless use of the shared kmem_cache code. This adds the pthread header to the slab header in the tools directory to allow for the pthread_mutex in linux.c. Signed-off-by: Liam R. Howlett Signed-off-by: Vlastimil Babka Reviewed-by: Suren Baghdasaryan --- tools/include/linux/slab.h | 137 ++++++++++++++++++++++++++++++++++= ++-- tools/testing/shared/linux.c | 26 ++------ tools/testing/shared/maple-shim.c | 1 + tools/testing/vma/vma_internal.h | 92 +------------------------ 4 files changed, 142 insertions(+), 114 deletions(-) diff --git a/tools/include/linux/slab.h b/tools/include/linux/slab.h index c87051e2b26f5a7fee0362697fae067076b8e84d..c5c5cc6db5668be2cc94c29065c= cfa7ca7b4bb08 100644 --- a/tools/include/linux/slab.h +++ b/tools/include/linux/slab.h @@ -4,11 +4,31 @@ =20 #include #include +#include =20 -#define SLAB_PANIC 2 #define SLAB_RECLAIM_ACCOUNT 0x00020000UL /* Objects are rec= laimable */ =20 #define kzalloc_node(size, flags, node) kmalloc(size, flags) +enum _slab_flag_bits { + _SLAB_KMALLOC, + _SLAB_HWCACHE_ALIGN, + _SLAB_PANIC, + _SLAB_TYPESAFE_BY_RCU, + _SLAB_ACCOUNT, + _SLAB_FLAGS_LAST_BIT +}; + +#define __SLAB_FLAG_BIT(nr) ((unsigned int __force)(1U << (nr))) +#define __SLAB_FLAG_UNUSED ((unsigned int __force)(0U)) + +#define SLAB_HWCACHE_ALIGN __SLAB_FLAG_BIT(_SLAB_HWCACHE_ALIGN) +#define SLAB_PANIC __SLAB_FLAG_BIT(_SLAB_PANIC) +#define SLAB_TYPESAFE_BY_RCU __SLAB_FLAG_BIT(_SLAB_TYPESAFE_BY_RCU) +#ifdef CONFIG_MEMCG +# define SLAB_ACCOUNT __SLAB_FLAG_BIT(_SLAB_ACCOUNT) +#else +# define SLAB_ACCOUNT __SLAB_FLAG_UNUSED +#endif =20 void *kmalloc(size_t size, gfp_t gfp); void kfree(void *p); @@ -23,6 +43,86 @@ enum slab_state { FULL }; =20 +struct kmem_cache { + pthread_mutex_t lock; + unsigned int size; + unsigned int align; + unsigned int sheaf_capacity; + int nr_objs; + void *objs; + void (*ctor)(void *); + bool non_kernel_enabled; + unsigned int non_kernel; + unsigned long nr_allocated; + unsigned long nr_tallocated; + bool exec_callback; + void (*callback)(void *); + void *private; +}; + +struct kmem_cache_args { + /** + * @align: The required alignment for the objects. + * + * %0 means no specific alignment is requested. + */ + unsigned int align; + /** + * @sheaf_capacity: The maximum size of the sheaf. + */ + unsigned int sheaf_capacity; + /** + * @useroffset: Usercopy region offset. 
+ * + * %0 is a valid offset, when @usersize is non-%0 + */ + unsigned int useroffset; + /** + * @usersize: Usercopy region size. + * + * %0 means no usercopy region is specified. + */ + unsigned int usersize; + /** + * @freeptr_offset: Custom offset for the free pointer + * in &SLAB_TYPESAFE_BY_RCU caches + * + * By default &SLAB_TYPESAFE_BY_RCU caches place the free pointer + * outside of the object. This might cause the object to grow in size. + * Cache creators that have a reason to avoid this can specify a custom + * free pointer offset in their struct where the free pointer will be + * placed. + * + * Note that placing the free pointer inside the object requires the + * caller to ensure that no fields are invalidated that are required to + * guard against object recycling (See &SLAB_TYPESAFE_BY_RCU for + * details). + * + * Using %0 as a value for @freeptr_offset is valid. If @freeptr_offset + * is specified, %use_freeptr_offset must be set %true. + * + * Note that @ctor currently isn't supported with custom free pointers + * as a @ctor requires an external free pointer. + */ + unsigned int freeptr_offset; + /** + * @use_freeptr_offset: Whether a @freeptr_offset is used. + */ + bool use_freeptr_offset; + /** + * @ctor: A constructor for the objects. + * + * The constructor is invoked for each object in a newly allocated slab + * page. It is the cache user's responsibility to free object in the + * same state as after calling the constructor, or deal appropriately + * with any differences between a freshly constructed and a reallocated + * object. + * + * %NULL means no constructor. + */ + void (*ctor)(void *); +}; + static inline void *kzalloc(size_t size, gfp_t gfp) { return kmalloc(size, gfp | __GFP_ZERO); @@ -37,9 +137,38 @@ static inline void *kmem_cache_alloc(struct kmem_cache = *cachep, int flags) } void kmem_cache_free(struct kmem_cache *cachep, void *objp); =20 -struct kmem_cache *kmem_cache_create(const char *name, unsigned int size, - unsigned int align, unsigned int flags, - void (*ctor)(void *)); + +struct kmem_cache * +__kmem_cache_create_args(const char *name, unsigned int size, + struct kmem_cache_args *args, unsigned int flags); + +/* If NULL is passed for @args, use this variant with default arguments. */ +static inline struct kmem_cache * +__kmem_cache_default_args(const char *name, unsigned int size, + struct kmem_cache_args *args, unsigned int flags) +{ + struct kmem_cache_args kmem_default_args =3D {}; + + return __kmem_cache_create_args(name, size, &kmem_default_args, flags); +} + +static inline struct kmem_cache * +__kmem_cache_create(const char *name, unsigned int size, unsigned int alig= n, + unsigned int flags, void (*ctor)(void *)) +{ + struct kmem_cache_args kmem_args =3D { + .align =3D align, + .ctor =3D ctor, + }; + + return __kmem_cache_create_args(name, size, &kmem_args, flags); +} + +#define kmem_cache_create(__name, __object_size, __args, ...) 
\ + _Generic((__args), \ + struct kmem_cache_args *: __kmem_cache_create_args, \ + void *: __kmem_cache_default_args, \ + default: __kmem_cache_create)(__name, __object_size, __args, __VA_ARGS__) =20 void kmem_cache_free_bulk(struct kmem_cache *cachep, size_t size, void **l= ist); int kmem_cache_alloc_bulk(struct kmem_cache *cachep, gfp_t gfp, size_t siz= e, diff --git a/tools/testing/shared/linux.c b/tools/testing/shared/linux.c index 0f97fb0d19e19c327aa4843a35b45cc086f4f366..97b8412ccbb6d222604c7b397c5= 3c65618d8d51b 100644 --- a/tools/testing/shared/linux.c +++ b/tools/testing/shared/linux.c @@ -16,21 +16,6 @@ int nr_allocated; int preempt_count; int test_verbose; =20 -struct kmem_cache { - pthread_mutex_t lock; - unsigned int size; - unsigned int align; - int nr_objs; - void *objs; - void (*ctor)(void *); - unsigned int non_kernel; - unsigned long nr_allocated; - unsigned long nr_tallocated; - bool exec_callback; - void (*callback)(void *); - void *private; -}; - void kmem_cache_set_callback(struct kmem_cache *cachep, void (*callback)(v= oid *)) { cachep->callback =3D callback; @@ -234,23 +219,26 @@ int kmem_cache_alloc_bulk(struct kmem_cache *cachep, = gfp_t gfp, size_t size, } =20 struct kmem_cache * -kmem_cache_create(const char *name, unsigned int size, unsigned int align, - unsigned int flags, void (*ctor)(void *)) +__kmem_cache_create_args(const char *name, unsigned int size, + struct kmem_cache_args *args, + unsigned int flags) { struct kmem_cache *ret =3D malloc(sizeof(*ret)); =20 pthread_mutex_init(&ret->lock, NULL); ret->size =3D size; - ret->align =3D align; + ret->align =3D args->align; + ret->sheaf_capacity =3D args->sheaf_capacity; ret->nr_objs =3D 0; ret->nr_allocated =3D 0; ret->nr_tallocated =3D 0; ret->objs =3D NULL; - ret->ctor =3D ctor; + ret->ctor =3D args->ctor; ret->non_kernel =3D 0; ret->exec_callback =3D false; ret->callback =3D NULL; ret->private =3D NULL; + return ret; } =20 diff --git a/tools/testing/shared/maple-shim.c b/tools/testing/shared/maple= -shim.c index 640df76f483e09f3b6f85612786060dd273e2362..9d7b743415660305416e972fa75= b56824211b0eb 100644 --- a/tools/testing/shared/maple-shim.c +++ b/tools/testing/shared/maple-shim.c @@ -3,5 +3,6 @@ /* Very simple shim around the maple tree. */ =20 #include "maple-shared.h" +#include =20 #include "../../../lib/maple_tree.c" diff --git a/tools/testing/vma/vma_internal.h b/tools/testing/vma/vma_inter= nal.h index 6b6e2b05918c9f95b537f26e20a943b34082825a..d5b87fa6a133f6d676488de2538= c509e0f0e1d54 100644 --- a/tools/testing/vma/vma_internal.h +++ b/tools/testing/vma/vma_internal.h @@ -26,6 +26,7 @@ #include #include #include +#include =20 extern unsigned long stack_guard_gap; #ifdef CONFIG_MMU @@ -509,65 +510,6 @@ struct pagetable_move_control { .len_in =3D len_, \ } =20 -struct kmem_cache_args { - /** - * @align: The required alignment for the objects. - * - * %0 means no specific alignment is requested. - */ - unsigned int align; - /** - * @useroffset: Usercopy region offset. - * - * %0 is a valid offset, when @usersize is non-%0 - */ - unsigned int useroffset; - /** - * @usersize: Usercopy region size. - * - * %0 means no usercopy region is specified. - */ - unsigned int usersize; - /** - * @freeptr_offset: Custom offset for the free pointer - * in &SLAB_TYPESAFE_BY_RCU caches - * - * By default &SLAB_TYPESAFE_BY_RCU caches place the free pointer - * outside of the object. This might cause the object to grow in size. 
- * Cache creators that have a reason to avoid this can specify a custom - * free pointer offset in their struct where the free pointer will be - * placed. - * - * Note that placing the free pointer inside the object requires the - * caller to ensure that no fields are invalidated that are required to - * guard against object recycling (See &SLAB_TYPESAFE_BY_RCU for - * details). - * - * Using %0 as a value for @freeptr_offset is valid. If @freeptr_offset - * is specified, %use_freeptr_offset must be set %true. - * - * Note that @ctor currently isn't supported with custom free pointers - * as a @ctor requires an external free pointer. - */ - unsigned int freeptr_offset; - /** - * @use_freeptr_offset: Whether a @freeptr_offset is used. - */ - bool use_freeptr_offset; - /** - * @ctor: A constructor for the objects. - * - * The constructor is invoked for each object in a newly allocated slab - * page. It is the cache user's responsibility to free object in the - * same state as after calling the constructor, or deal appropriately - * with any differences between a freshly constructed and a reallocated - * object. - * - * %NULL means no constructor. - */ - void (*ctor)(void *); -}; - static inline void vma_iter_invalidate(struct vma_iterator *vmi) { mas_pause(&vmi->mas); @@ -652,38 +594,6 @@ static inline void vma_init(struct vm_area_struct *vma= , struct mm_struct *mm) vma->vm_lock_seq =3D UINT_MAX; } =20 -struct kmem_cache { - const char *name; - size_t object_size; - struct kmem_cache_args *args; -}; - -static inline struct kmem_cache *__kmem_cache_create(const char *name, - size_t object_size, - struct kmem_cache_args *args) -{ - struct kmem_cache *ret =3D malloc(sizeof(struct kmem_cache)); - - ret->name =3D name; - ret->object_size =3D object_size; - ret->args =3D args; - - return ret; -} - -#define kmem_cache_create(__name, __object_size, __args, ...) 
\ - __kmem_cache_create((__name), (__object_size), (__args)) - -static inline void *kmem_cache_alloc(struct kmem_cache *s, gfp_t gfpflags) -{ - return calloc(1, s->object_size); -} - -static inline void kmem_cache_free(struct kmem_cache *s, void *x) -{ - free(x); -} - /* * These are defined in vma.h, but sadly vm_stat_account() is referenced by * kernel/fork.c, so we have to these broadly available there, and tempora= rily --=20 2.51.0 From nobody Thu Oct 2 22:43:17 2025 Received: from smtp-out2.suse.de (smtp-out2.suse.de [195.135.223.131]) (using TLSv1.2 with cipher ECDHE-RSA-AES128-GCM-SHA256 (128/128 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id 3D75B312812 for ; Wed, 10 Sep 2025 08:01:48 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; arc=none smtp.client-ip=195.135.223.131 ARC-Seal: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1757491310; cv=none; b=mulG8xUAybeBz64WT44/irBSiVNr/PljVjkDvQbaxW/UQGbdoVYmJOCFx7bfXFcaGNzyibVhhGqW4NsLkDzgizCC30ZONzWM0+5aAEUkX4tj/ds7amq+qIRl0/wgqz3WqeQB23ofsHz69P3PABr44l6mFEUWVsdOWag+Q5UK/ZA= ARC-Message-Signature: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1757491310; c=relaxed/simple; bh=zXygsHQ8M75ldUrzHX4v4b361hHPjjuRMUX5R7ROHQM=; h=From:Date:Subject:MIME-Version:Content-Type:Message-Id:References: In-Reply-To:To:Cc; b=h5/e/miyQxmuVFwWau4cV3sXtGti1srYJDq94wpmSIe6nGwwoW/Jzl0rgFhMaybNEeQxJik6+atbdiTwZekLccsV4cNSI2WmMgaI3c4iLJ4TtmJAu6W2kx27VLq8W/oDaQlMcnTdYtgPN4QNct5lnzz9pUqXtDua9G2fXBttB+8= ARC-Authentication-Results: i=1; smtp.subspace.kernel.org; dmarc=none (p=none dis=none) header.from=suse.cz; spf=pass smtp.mailfrom=suse.cz; dkim=pass (1024-bit key) header.d=suse.cz header.i=@suse.cz header.b=kRgXuUVH; dkim=permerror (0-bit key) header.d=suse.cz header.i=@suse.cz header.b=v0Gmqs4y; dkim=pass (1024-bit key) header.d=suse.cz header.i=@suse.cz header.b=kRgXuUVH; dkim=permerror (0-bit key) header.d=suse.cz header.i=@suse.cz header.b=v0Gmqs4y; arc=none smtp.client-ip=195.135.223.131 Authentication-Results: smtp.subspace.kernel.org; dmarc=none (p=none dis=none) header.from=suse.cz Authentication-Results: smtp.subspace.kernel.org; spf=pass smtp.mailfrom=suse.cz Authentication-Results: smtp.subspace.kernel.org; dkim=pass (1024-bit key) header.d=suse.cz header.i=@suse.cz header.b="kRgXuUVH"; dkim=permerror (0-bit key) header.d=suse.cz header.i=@suse.cz header.b="v0Gmqs4y"; dkim=pass (1024-bit key) header.d=suse.cz header.i=@suse.cz header.b="kRgXuUVH"; dkim=permerror (0-bit key) header.d=suse.cz header.i=@suse.cz header.b="v0Gmqs4y" Received: from imap1.dmz-prg2.suse.org (imap1.dmz-prg2.suse.org [IPv6:2a07:de40:b281:104:10:150:64:97]) (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits) key-exchange X25519 server-signature RSA-PSS (4096 bits) server-digest SHA256) (No client certificate requested) by smtp-out2.suse.de (Postfix) with ESMTPS id 4B9CB5CB08; Wed, 10 Sep 2025 08:01:07 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.cz; s=susede2_rsa; t=1757491267; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc: mime-version:mime-version:content-type:content-type: content-transfer-encoding:content-transfer-encoding: in-reply-to:in-reply-to:references:references; bh=GkJLUdth35DaRyhM3TWCLrM77sXSdG1lPON3otYkBww=; b=kRgXuUVHckP5nGYOk9us7XwUxag1poVZHhieSiHTDGmQaabS697+H203GEA0HCiV/CqcM6 gC5KI1Uih9wclAxbHLuiecmQX8sDy9gHY1rHD4e0RYvjELbSZNgEV0vpjF1XGsRs0uy0Gn MG4yepVyOww0/i8ykq9Dd1Lfg9vjyl8= DKIM-Signature: v=1; 
a=ed25519-sha256; c=relaxed/relaxed; d=suse.cz; s=susede2_ed25519; t=1757491267; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc: mime-version:mime-version:content-type:content-type: content-transfer-encoding:content-transfer-encoding: in-reply-to:in-reply-to:references:references; bh=GkJLUdth35DaRyhM3TWCLrM77sXSdG1lPON3otYkBww=; b=v0Gmqs4yLSYMHbcmi4X2wd9y1mMcEnhGcOr6ySAvGAsX+qKc8Ycf4MarNwM+rsOxbtin7i XaZ95nBcMlXyNUAg== Authentication-Results: smtp-out2.suse.de; dkim=pass header.d=suse.cz header.s=susede2_rsa header.b=kRgXuUVH; dkim=pass header.d=suse.cz header.s=susede2_ed25519 header.b=v0Gmqs4y DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.cz; s=susede2_rsa; t=1757491267; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc: mime-version:mime-version:content-type:content-type: content-transfer-encoding:content-transfer-encoding: in-reply-to:in-reply-to:references:references; bh=GkJLUdth35DaRyhM3TWCLrM77sXSdG1lPON3otYkBww=; b=kRgXuUVHckP5nGYOk9us7XwUxag1poVZHhieSiHTDGmQaabS697+H203GEA0HCiV/CqcM6 gC5KI1Uih9wclAxbHLuiecmQX8sDy9gHY1rHD4e0RYvjELbSZNgEV0vpjF1XGsRs0uy0Gn MG4yepVyOww0/i8ykq9Dd1Lfg9vjyl8= DKIM-Signature: v=1; a=ed25519-sha256; c=relaxed/relaxed; d=suse.cz; s=susede2_ed25519; t=1757491267; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc: mime-version:mime-version:content-type:content-type: content-transfer-encoding:content-transfer-encoding: in-reply-to:in-reply-to:references:references; bh=GkJLUdth35DaRyhM3TWCLrM77sXSdG1lPON3otYkBww=; b=v0Gmqs4yLSYMHbcmi4X2wd9y1mMcEnhGcOr6ySAvGAsX+qKc8Ycf4MarNwM+rsOxbtin7i XaZ95nBcMlXyNUAg== Received: from imap1.dmz-prg2.suse.org (localhost [127.0.0.1]) (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits) key-exchange X25519 server-signature RSA-PSS (4096 bits) server-digest SHA256) (No client certificate requested) by imap1.dmz-prg2.suse.org (Postfix) with ESMTPS id DB2BD13B04; Wed, 10 Sep 2025 08:01:06 +0000 (UTC) Received: from dovecot-director2.suse.de ([2a07:de40:b281:106:10:150:64:167]) by imap1.dmz-prg2.suse.org with ESMTPSA id uAU6NUIwwWgGJAAAD6G6ig (envelope-from ); Wed, 10 Sep 2025 08:01:06 +0000 From: Vlastimil Babka Date: Wed, 10 Sep 2025 10:01:16 +0200 Subject: [PATCH v8 14/23] mm, vma: use percpu sheaves for vm_area_struct cache Precedence: bulk X-Mailing-List: linux-kernel@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 Content-Type: text/plain; charset="utf-8" Content-Transfer-Encoding: quoted-printable Message-Id: <20250910-slub-percpu-caches-v8-14-ca3099d8352c@suse.cz> References: <20250910-slub-percpu-caches-v8-0-ca3099d8352c@suse.cz> In-Reply-To: <20250910-slub-percpu-caches-v8-0-ca3099d8352c@suse.cz> To: Suren Baghdasaryan , "Liam R. 
Howlett" , Christoph Lameter , David Rientjes Cc: Roman Gushchin , Harry Yoo , Uladzislau Rezki , Sidhartha Kumar , linux-mm@kvack.org, linux-kernel@vger.kernel.org, rcu@vger.kernel.org, maple-tree@lists.infradead.org, vbabka@suse.cz X-Mailer: b4 0.14.2 X-Spam-Level: X-Spam-Flag: NO X-Rspamd-Queue-Id: 4B9CB5CB08 X-Rspamd-Action: no action X-Rspamd-Server: rspamd1.dmz-prg2.suse.org X-Spamd-Result: default: False [-4.51 / 50.00]; BAYES_HAM(-3.00)[99.99%]; NEURAL_HAM_LONG(-1.00)[-1.000]; R_DKIM_ALLOW(-0.20)[suse.cz:s=susede2_rsa,suse.cz:s=susede2_ed25519]; NEURAL_HAM_SHORT(-0.20)[-1.000]; MIME_GOOD(-0.10)[text/plain]; MX_GOOD(-0.01)[]; DKIM_SIGNED(0.00)[suse.cz:s=susede2_rsa,suse.cz:s=susede2_ed25519]; RBL_SPAMHAUS_BLOCKED_OPENRESOLVER(0.00)[2a07:de40:b281:104:10:150:64:97:from]; FUZZY_RATELIMITED(0.00)[rspamd.com]; ARC_NA(0.00)[]; RCPT_COUNT_TWELVE(0.00)[13]; TO_MATCH_ENVRCPT_ALL(0.00)[]; MIME_TRACE(0.00)[0:+]; SPAMHAUS_XBL(0.00)[2a07:de40:b281:104:10:150:64:97:from]; FREEMAIL_ENVRCPT(0.00)[gmail.com]; RCVD_TLS_ALL(0.00)[]; TO_DN_SOME(0.00)[]; RCVD_COUNT_TWO(0.00)[2]; DNSWL_BLOCKED(0.00)[2a07:de40:b281:104:10:150:64:97:from,2a07:de40:b281:106:10:150:64:167:received]; FROM_EQ_ENVFROM(0.00)[]; FROM_HAS_DN(0.00)[]; FREEMAIL_CC(0.00)[linux.dev,oracle.com,gmail.com,kvack.org,vger.kernel.org,lists.infradead.org,suse.cz]; MID_RHS_MATCH_FROM(0.00)[]; RCVD_VIA_SMTP_AUTH(0.00)[]; RECEIVED_SPAMHAUS_BLOCKED_OPENRESOLVER(0.00)[2a07:de40:b281:106:10:150:64:167:received]; DKIM_TRACE(0.00)[suse.cz:+]; R_RATELIMIT(0.00)[to_ip_from(RLfsjnp7neds983g95ihcnuzgq)]; DBL_BLOCKED_OPENRESOLVER(0.00)[suse.cz:dkim,suse.cz:mid,suse.cz:email] X-Spam-Score: -4.51 Create the vm_area_struct cache with percpu sheaves of size 32 to improve its performance. Reviewed-by: Suren Baghdasaryan Signed-off-by: Vlastimil Babka --- mm/vma_init.c | 1 + 1 file changed, 1 insertion(+) diff --git a/mm/vma_init.c b/mm/vma_init.c index 8e53c7943561e7324e7992946b4065dec1149b82..52c6b55fac4519e0da39ca75ad0= 18e14449d1d95 100644 --- a/mm/vma_init.c +++ b/mm/vma_init.c @@ -16,6 +16,7 @@ void __init vma_state_init(void) struct kmem_cache_args args =3D { .use_freeptr_offset =3D true, .freeptr_offset =3D offsetof(struct vm_area_struct, vm_freeptr), + .sheaf_capacity =3D 32, }; =20 vm_area_cachep =3D kmem_cache_create("vm_area_struct", --=20 2.51.0 From nobody Thu Oct 2 22:43:17 2025 Received: from smtp-out1.suse.de (smtp-out1.suse.de [195.135.223.130]) (using TLSv1.2 with cipher ECDHE-RSA-AES128-GCM-SHA256 (128/128 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id E62AD3148AC for ; Wed, 10 Sep 2025 08:02:09 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; arc=none smtp.client-ip=195.135.223.130 ARC-Seal: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1757491332; cv=none; b=mZ1Z7BNiY29+I7cJYlaZcVOmVxCJtAdo3Pr5Ishpdi7WfKaFFEk5EkdTWOtDT+HnN71okCXbUCaOO3Aj7m/Nbxq/4F0Ifmz0tbEQ1UdzrcNBONbiDAHmG97d7h/o7DlF2oAM3/I67O0phtnUogYdj/dyp0sbgl6cY45jB5ljeuA= ARC-Message-Signature: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1757491332; c=relaxed/simple; bh=hbU2t/ML85hBV/IMMVqL9yISjEsAJnbj3Ji5aIxTqjA=; h=From:Date:Subject:MIME-Version:Content-Type:Message-Id:References: In-Reply-To:To:Cc; b=jgdxWxic62ODm4xqn9tv0mGOuE5PSd+Lu79P8IZk01FnTwh8l2Nci/98jVUaIYmWbyP8/kfr9qC9abMKzI3YYJLd4sW2/mafgfUzlNek8hT96/4Pruhsncr0r2cwkYCgRabKmlnwJl4LnWTmYkGubaJKTbMcijKeFo7GBfYayvs= ARC-Authentication-Results: i=1; smtp.subspace.kernel.org; dmarc=none (p=none dis=none) header.from=suse.cz; spf=pass 
smtp.mailfrom=suse.cz; dkim=pass (1024-bit key) header.d=suse.cz header.i=@suse.cz header.b=uj8nV6zM; dkim=permerror (0-bit key) header.d=suse.cz header.i=@suse.cz header.b=RKOm/P8Z; dkim=pass (1024-bit key) header.d=suse.cz header.i=@suse.cz header.b=uj8nV6zM; dkim=permerror (0-bit key) header.d=suse.cz header.i=@suse.cz header.b=RKOm/P8Z; arc=none smtp.client-ip=195.135.223.130 Authentication-Results: smtp.subspace.kernel.org; dmarc=none (p=none dis=none) header.from=suse.cz Authentication-Results: smtp.subspace.kernel.org; spf=pass smtp.mailfrom=suse.cz Authentication-Results: smtp.subspace.kernel.org; dkim=pass (1024-bit key) header.d=suse.cz header.i=@suse.cz header.b="uj8nV6zM"; dkim=permerror (0-bit key) header.d=suse.cz header.i=@suse.cz header.b="RKOm/P8Z"; dkim=pass (1024-bit key) header.d=suse.cz header.i=@suse.cz header.b="uj8nV6zM"; dkim=permerror (0-bit key) header.d=suse.cz header.i=@suse.cz header.b="RKOm/P8Z" Received: from imap1.dmz-prg2.suse.org (imap1.dmz-prg2.suse.org [IPv6:2a07:de40:b281:104:10:150:64:97]) (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits) key-exchange X25519 server-signature RSA-PSS (4096 bits) server-digest SHA256) (No client certificate requested) by smtp-out1.suse.de (Postfix) with ESMTPS id 523C834C57; Wed, 10 Sep 2025 08:01:07 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.cz; s=susede2_rsa; t=1757491267; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc: mime-version:mime-version:content-type:content-type: content-transfer-encoding:content-transfer-encoding: in-reply-to:in-reply-to:references:references; bh=Ynsc07sxqAYxTDrBUJOoa/Bn9nwG7erBp//UIQgFkWA=; b=uj8nV6zMzC/X6EkNSTlNcIpAeSk4jvyl1rMWK3jrJhQVkVYp+W8Zvaoqb0bpVwphKaSFdR WkeJrRmogsazbVcO2UDw9QnjvRhZIZcbJyKraY7UETW2U6LwfiLfkRWaYpemkA61eYQjJg 6RdkIVbYPDwciKcGUMEmL0zSpLZbD7M= DKIM-Signature: v=1; a=ed25519-sha256; c=relaxed/relaxed; d=suse.cz; s=susede2_ed25519; t=1757491267; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc: mime-version:mime-version:content-type:content-type: content-transfer-encoding:content-transfer-encoding: in-reply-to:in-reply-to:references:references; bh=Ynsc07sxqAYxTDrBUJOoa/Bn9nwG7erBp//UIQgFkWA=; b=RKOm/P8ZnTjxWcdQiJRPf9DbZAmsXc2caBMPSBD3vKcFNOD6G+6srjaUd6qFWTlpnPFwe2 vwd4/VYfuUWDKQDA== Authentication-Results: smtp-out1.suse.de; dkim=pass header.d=suse.cz header.s=susede2_rsa header.b=uj8nV6zM; dkim=pass header.d=suse.cz header.s=susede2_ed25519 header.b="RKOm/P8Z" DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.cz; s=susede2_rsa; t=1757491267; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc: mime-version:mime-version:content-type:content-type: content-transfer-encoding:content-transfer-encoding: in-reply-to:in-reply-to:references:references; bh=Ynsc07sxqAYxTDrBUJOoa/Bn9nwG7erBp//UIQgFkWA=; b=uj8nV6zMzC/X6EkNSTlNcIpAeSk4jvyl1rMWK3jrJhQVkVYp+W8Zvaoqb0bpVwphKaSFdR WkeJrRmogsazbVcO2UDw9QnjvRhZIZcbJyKraY7UETW2U6LwfiLfkRWaYpemkA61eYQjJg 6RdkIVbYPDwciKcGUMEmL0zSpLZbD7M= DKIM-Signature: v=1; a=ed25519-sha256; c=relaxed/relaxed; d=suse.cz; s=susede2_ed25519; t=1757491267; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc: mime-version:mime-version:content-type:content-type: content-transfer-encoding:content-transfer-encoding: in-reply-to:in-reply-to:references:references; bh=Ynsc07sxqAYxTDrBUJOoa/Bn9nwG7erBp//UIQgFkWA=; b=RKOm/P8ZnTjxWcdQiJRPf9DbZAmsXc2caBMPSBD3vKcFNOD6G+6srjaUd6qFWTlpnPFwe2 vwd4/VYfuUWDKQDA== Received: from imap1.dmz-prg2.suse.org (localhost 
[127.0.0.1]) (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits) key-exchange X25519 server-signature RSA-PSS (4096 bits) server-digest SHA256) (No client certificate requested) by imap1.dmz-prg2.suse.org (Postfix) with ESMTPS id F041D13B05; Wed, 10 Sep 2025 08:01:06 +0000 (UTC) Received: from dovecot-director2.suse.de ([2a07:de40:b281:106:10:150:64:167]) by imap1.dmz-prg2.suse.org with ESMTPSA id 6GVeOkIwwWgGJAAAD6G6ig (envelope-from ); Wed, 10 Sep 2025 08:01:06 +0000 From: Vlastimil Babka Date: Wed, 10 Sep 2025 10:01:17 +0200 Subject: [PATCH v8 15/23] maple_tree: use percpu sheaves for maple_node_cache Precedence: bulk X-Mailing-List: linux-kernel@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 Content-Type: text/plain; charset="utf-8" Content-Transfer-Encoding: quoted-printable Message-Id: <20250910-slub-percpu-caches-v8-15-ca3099d8352c@suse.cz> References: <20250910-slub-percpu-caches-v8-0-ca3099d8352c@suse.cz> In-Reply-To: <20250910-slub-percpu-caches-v8-0-ca3099d8352c@suse.cz> To: Suren Baghdasaryan , "Liam R. Howlett" , Christoph Lameter , David Rientjes Cc: Roman Gushchin , Harry Yoo , Uladzislau Rezki , Sidhartha Kumar , linux-mm@kvack.org, linux-kernel@vger.kernel.org, rcu@vger.kernel.org, maple-tree@lists.infradead.org, vbabka@suse.cz X-Mailer: b4 0.14.2 X-Spam-Level: X-Spam-Flag: NO X-Rspamd-Queue-Id: 523C834C57 X-Rspamd-Action: no action X-Rspamd-Server: rspamd1.dmz-prg2.suse.org X-Spamd-Result: default: False [-4.51 / 50.00]; BAYES_HAM(-3.00)[100.00%]; NEURAL_HAM_LONG(-1.00)[-1.000]; R_DKIM_ALLOW(-0.20)[suse.cz:s=susede2_rsa,suse.cz:s=susede2_ed25519]; NEURAL_HAM_SHORT(-0.20)[-1.000]; MIME_GOOD(-0.10)[text/plain]; MX_GOOD(-0.01)[]; DKIM_SIGNED(0.00)[suse.cz:s=susede2_rsa,suse.cz:s=susede2_ed25519]; RBL_SPAMHAUS_BLOCKED_OPENRESOLVER(0.00)[2a07:de40:b281:104:10:150:64:97:from]; FUZZY_RATELIMITED(0.00)[rspamd.com]; ARC_NA(0.00)[]; RCPT_COUNT_TWELVE(0.00)[13]; TO_MATCH_ENVRCPT_ALL(0.00)[]; MIME_TRACE(0.00)[0:+]; SPAMHAUS_XBL(0.00)[2a07:de40:b281:104:10:150:64:97:from]; FREEMAIL_ENVRCPT(0.00)[gmail.com]; RCVD_TLS_ALL(0.00)[]; TO_DN_SOME(0.00)[]; RCVD_COUNT_TWO(0.00)[2]; DNSWL_BLOCKED(0.00)[2a07:de40:b281:104:10:150:64:97:from,2a07:de40:b281:106:10:150:64:167:received]; FROM_EQ_ENVFROM(0.00)[]; FROM_HAS_DN(0.00)[]; FREEMAIL_CC(0.00)[linux.dev,oracle.com,gmail.com,kvack.org,vger.kernel.org,lists.infradead.org,suse.cz]; MID_RHS_MATCH_FROM(0.00)[]; RCVD_VIA_SMTP_AUTH(0.00)[]; RECEIVED_SPAMHAUS_BLOCKED_OPENRESOLVER(0.00)[2a07:de40:b281:106:10:150:64:167:received]; DKIM_TRACE(0.00)[suse.cz:+]; R_RATELIMIT(0.00)[to_ip_from(RLfsjnp7neds983g95ihcnuzgq)]; DBL_BLOCKED_OPENRESOLVER(0.00)[suse.cz:dkim,suse.cz:mid,suse.cz:email] X-Spam-Score: -4.51 Setup the maple_node_cache with percpu sheaves of size 32 to hopefully improve its performance. Note this will not immediately take advantage of sheaf batching of kfree_rcu() operations due to the maple tree using call_rcu with custom callbacks. The followup changes to maple tree will change that and also make use of the prefilled sheaves functionality. Reviewed-by: Sidhartha Kumar Reviewed-by: Suren Baghdasaryan Signed-off-by: Vlastimil Babka Reviewed-by: Liam R. 
Howlett --- lib/maple_tree.c | 9 +++++++-- 1 file changed, 7 insertions(+), 2 deletions(-) diff --git a/lib/maple_tree.c b/lib/maple_tree.c index 4f0e30b57b0cef9e5cf791f3f64f5898752db402..d034f170ac897341b40cfd050b6= aee86b6d2cf60 100644 --- a/lib/maple_tree.c +++ b/lib/maple_tree.c @@ -6040,9 +6040,14 @@ bool mas_nomem(struct ma_state *mas, gfp_t gfp) =20 void __init maple_tree_init(void) { + struct kmem_cache_args args =3D { + .align =3D sizeof(struct maple_node), + .sheaf_capacity =3D 32, + }; + maple_node_cache =3D kmem_cache_create("maple_node", - sizeof(struct maple_node), sizeof(struct maple_node), - SLAB_PANIC, NULL); + sizeof(struct maple_node), &args, + SLAB_PANIC); } =20 /** --=20 2.51.0 From nobody Thu Oct 2 22:43:17 2025 Received: from smtp-out1.suse.de (smtp-out1.suse.de [195.135.223.130]) (using TLSv1.2 with cipher ECDHE-RSA-AES128-GCM-SHA256 (128/128 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id CDBC0312827 for ; Wed, 10 Sep 2025 08:01:47 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; arc=none smtp.client-ip=195.135.223.130 ARC-Seal: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1757491309; cv=none; b=VcBBx/X0TGf1xmRIdUZZAmT8+OcMh1kdTjh96jHTUdFIb3I687ctn2mLNGS3jG09ie2BL7cnvndEWeMUF0fvF0D1ixSPYzpaKNiEgPoG09rKO+HHUK4E1iMw0GAD5gOgU34ewB2esoveVqPbK/OZOwuWKEu38JKT1YQFxAK1VYg= ARC-Message-Signature: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1757491309; c=relaxed/simple; bh=IbW3Hwc2yVi+Ksjat79Bt0c8mqQegJtd4O77r1T6JHg=; h=From:Date:Subject:MIME-Version:Content-Type:Message-Id:References: In-Reply-To:To:Cc; b=WRMs7teGrCqiRDsz2JbqOsR2dtyKY+YxOa24aLRwMWuHC8nWYInRCsHmbKUrwwd8l1pfajQUTDdcQVLSt8zMrJMN4pJT32SfSUEc1Ib3hzVNejUn83dfTFTZ1uGjPLsFG4KNhbI5bCy9P0uC4Ho8E0ekWU6j/WquLDCjk6epyoA= ARC-Authentication-Results: i=1; smtp.subspace.kernel.org; dmarc=none (p=none dis=none) header.from=suse.cz; spf=pass smtp.mailfrom=suse.cz; dkim=pass (1024-bit key) header.d=suse.cz header.i=@suse.cz header.b=XOJHhG2v; dkim=permerror (0-bit key) header.d=suse.cz header.i=@suse.cz header.b=bhV8GnZN; dkim=pass (1024-bit key) header.d=suse.cz header.i=@suse.cz header.b=XOJHhG2v; dkim=permerror (0-bit key) header.d=suse.cz header.i=@suse.cz header.b=bhV8GnZN; arc=none smtp.client-ip=195.135.223.130 Authentication-Results: smtp.subspace.kernel.org; dmarc=none (p=none dis=none) header.from=suse.cz Authentication-Results: smtp.subspace.kernel.org; spf=pass smtp.mailfrom=suse.cz Authentication-Results: smtp.subspace.kernel.org; dkim=pass (1024-bit key) header.d=suse.cz header.i=@suse.cz header.b="XOJHhG2v"; dkim=permerror (0-bit key) header.d=suse.cz header.i=@suse.cz header.b="bhV8GnZN"; dkim=pass (1024-bit key) header.d=suse.cz header.i=@suse.cz header.b="XOJHhG2v"; dkim=permerror (0-bit key) header.d=suse.cz header.i=@suse.cz header.b="bhV8GnZN" Received: from imap1.dmz-prg2.suse.org (imap1.dmz-prg2.suse.org [IPv6:2a07:de40:b281:104:10:150:64:97]) (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits) key-exchange X25519 server-signature RSA-PSS (4096 bits) server-digest SHA256) (No client certificate requested) by smtp-out1.suse.de (Postfix) with ESMTPS id 52F8B34C65; Wed, 10 Sep 2025 08:01:07 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.cz; s=susede2_rsa; t=1757491267; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc: mime-version:mime-version:content-type:content-type: content-transfer-encoding:content-transfer-encoding: 
in-reply-to:in-reply-to:references:references; bh=yjD49SbUXnEu+0JzrLTF+521Hhj3syJQd2OSJQchv9c=; b=XOJHhG2vZC0MBDIR8MKa611bMkqdlds+kx5f2bVvAK7EafB0g36SkYhXp3kG8oqctrpgSV 3M3EhLAgK6fqbyKh4Wq9YzldQn3iaG2tc7CXpjIPAlr3rQVNDOT0KH0cdLcWvU8Qdb0vk4 wEGb4g3fR8/JvKoGU7VZM3eI7L3pjoI= DKIM-Signature: v=1; a=ed25519-sha256; c=relaxed/relaxed; d=suse.cz; s=susede2_ed25519; t=1757491267; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc: mime-version:mime-version:content-type:content-type: content-transfer-encoding:content-transfer-encoding: in-reply-to:in-reply-to:references:references; bh=yjD49SbUXnEu+0JzrLTF+521Hhj3syJQd2OSJQchv9c=; b=bhV8GnZNs78mOmK/I+PAZDXMSRDwlJbqkgYKgIw7ozPEpkoL2S+VIXoYn6D9VO0Yo1w2fo vR9SeuVIChwey9BQ== Authentication-Results: smtp-out1.suse.de; dkim=pass header.d=suse.cz header.s=susede2_rsa header.b=XOJHhG2v; dkim=pass header.d=suse.cz header.s=susede2_ed25519 header.b=bhV8GnZN DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.cz; s=susede2_rsa; t=1757491267; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc: mime-version:mime-version:content-type:content-type: content-transfer-encoding:content-transfer-encoding: in-reply-to:in-reply-to:references:references; bh=yjD49SbUXnEu+0JzrLTF+521Hhj3syJQd2OSJQchv9c=; b=XOJHhG2vZC0MBDIR8MKa611bMkqdlds+kx5f2bVvAK7EafB0g36SkYhXp3kG8oqctrpgSV 3M3EhLAgK6fqbyKh4Wq9YzldQn3iaG2tc7CXpjIPAlr3rQVNDOT0KH0cdLcWvU8Qdb0vk4 wEGb4g3fR8/JvKoGU7VZM3eI7L3pjoI= DKIM-Signature: v=1; a=ed25519-sha256; c=relaxed/relaxed; d=suse.cz; s=susede2_ed25519; t=1757491267; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc: mime-version:mime-version:content-type:content-type: content-transfer-encoding:content-transfer-encoding: in-reply-to:in-reply-to:references:references; bh=yjD49SbUXnEu+0JzrLTF+521Hhj3syJQd2OSJQchv9c=; b=bhV8GnZNs78mOmK/I+PAZDXMSRDwlJbqkgYKgIw7ozPEpkoL2S+VIXoYn6D9VO0Yo1w2fo vR9SeuVIChwey9BQ== Received: from imap1.dmz-prg2.suse.org (localhost [127.0.0.1]) (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits) key-exchange X25519 server-signature RSA-PSS (4096 bits) server-digest SHA256) (No client certificate requested) by imap1.dmz-prg2.suse.org (Postfix) with ESMTPS id 1185313B06; Wed, 10 Sep 2025 08:01:07 +0000 (UTC) Received: from dovecot-director2.suse.de ([2a07:de40:b281:106:10:150:64:167]) by imap1.dmz-prg2.suse.org with ESMTPSA id gK7/A0MwwWgGJAAAD6G6ig (envelope-from ); Wed, 10 Sep 2025 08:01:07 +0000 From: Vlastimil Babka Date: Wed, 10 Sep 2025 10:01:18 +0200 Subject: [PATCH v8 16/23] tools/testing: include maple-shim.c in maple.c Precedence: bulk X-Mailing-List: linux-kernel@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 Content-Type: text/plain; charset="utf-8" Content-Transfer-Encoding: quoted-printable Message-Id: <20250910-slub-percpu-caches-v8-16-ca3099d8352c@suse.cz> References: <20250910-slub-percpu-caches-v8-0-ca3099d8352c@suse.cz> In-Reply-To: <20250910-slub-percpu-caches-v8-0-ca3099d8352c@suse.cz> To: Suren Baghdasaryan , "Liam R. 
Howlett" , Christoph Lameter , David Rientjes Cc: Roman Gushchin , Harry Yoo , Uladzislau Rezki , Sidhartha Kumar , linux-mm@kvack.org, linux-kernel@vger.kernel.org, rcu@vger.kernel.org, maple-tree@lists.infradead.org, vbabka@suse.cz X-Mailer: b4 0.14.2 X-Spamd-Result: default: False [-4.51 / 50.00]; BAYES_HAM(-3.00)[100.00%]; NEURAL_HAM_LONG(-1.00)[-1.000]; R_DKIM_ALLOW(-0.20)[suse.cz:s=susede2_rsa,suse.cz:s=susede2_ed25519]; NEURAL_HAM_SHORT(-0.20)[-1.000]; MIME_GOOD(-0.10)[text/plain]; MX_GOOD(-0.01)[]; DKIM_SIGNED(0.00)[suse.cz:s=susede2_rsa,suse.cz:s=susede2_ed25519]; RBL_SPAMHAUS_BLOCKED_OPENRESOLVER(0.00)[2a07:de40:b281:104:10:150:64:97:from]; FUZZY_RATELIMITED(0.00)[rspamd.com]; ARC_NA(0.00)[]; RCPT_COUNT_TWELVE(0.00)[13]; TO_MATCH_ENVRCPT_ALL(0.00)[]; MIME_TRACE(0.00)[0:+]; SPAMHAUS_XBL(0.00)[2a07:de40:b281:104:10:150:64:97:from]; FREEMAIL_ENVRCPT(0.00)[gmail.com]; RCVD_TLS_ALL(0.00)[]; TO_DN_SOME(0.00)[]; RCVD_COUNT_TWO(0.00)[2]; DNSWL_BLOCKED(0.00)[2a07:de40:b281:104:10:150:64:97:from,2a07:de40:b281:106:10:150:64:167:received]; FROM_EQ_ENVFROM(0.00)[]; FROM_HAS_DN(0.00)[]; FREEMAIL_CC(0.00)[linux.dev,oracle.com,gmail.com,kvack.org,vger.kernel.org,lists.infradead.org,suse.cz]; MID_RHS_MATCH_FROM(0.00)[]; RCVD_VIA_SMTP_AUTH(0.00)[]; RECEIVED_SPAMHAUS_BLOCKED_OPENRESOLVER(0.00)[2a07:de40:b281:106:10:150:64:167:received]; DKIM_TRACE(0.00)[suse.cz:+]; R_RATELIMIT(0.00)[to_ip_from(RLfsjnp7neds983g95ihcnuzgq)]; DBL_BLOCKED_OPENRESOLVER(0.00)[suse.cz:mid,suse.cz:dkim,suse.cz:email] X-Spam-Flag: NO X-Spam-Level: X-Rspamd-Queue-Id: 52F8B34C65 X-Rspamd-Server: rspamd2.dmz-prg2.suse.org X-Rspamd-Action: no action X-Spam-Score: -4.51 There's some duplicated code and we are about to add more functionality in maple-shared.h that we will need in the userspace maple test to be available, so include it via maple-shim.c Co-developed-by: Liam R. Howlett Signed-off-by: Liam R. Howlett Signed-off-by: Vlastimil Babka Reviewed-by: Suren Baghdasaryan --- tools/testing/radix-tree/maple.c | 12 +++--------- 1 file changed, 3 insertions(+), 9 deletions(-) diff --git a/tools/testing/radix-tree/maple.c b/tools/testing/radix-tree/ma= ple.c index c0543060dae2510477963331fb0ccdffd78ea965..4a35e1e7c64b7ce347cbd1693be= eaacb0c4c330e 100644 --- a/tools/testing/radix-tree/maple.c +++ b/tools/testing/radix-tree/maple.c @@ -8,14 +8,6 @@ * difficult to handle in kernel tests. 
*/ =20 -#define CONFIG_DEBUG_MAPLE_TREE -#define CONFIG_MAPLE_SEARCH -#define MAPLE_32BIT (MAPLE_NODE_SLOTS > 31) -#include "test.h" -#include -#include -#include - #define module_init(x) #define module_exit(x) #define MODULE_AUTHOR(x) @@ -23,7 +15,9 @@ #define MODULE_LICENSE(x) #define dump_stack() assert(0) =20 -#include "../../../lib/maple_tree.c" +#include "test.h" + +#include "../shared/maple-shim.c" #include "../../../lib/test_maple_tree.c" =20 #define RCU_RANGE_COUNT 1000 --=20 2.51.0 From nobody Thu Oct 2 22:43:17 2025 Received: from smtp-out2.suse.de (smtp-out2.suse.de [195.135.223.131]) (using TLSv1.2 with cipher ECDHE-RSA-AES128-GCM-SHA256 (128/128 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id 949E2311973 for ; Wed, 10 Sep 2025 08:01:42 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; arc=none smtp.client-ip=195.135.223.131 ARC-Seal: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1757491304; cv=none; b=rJXiSCOI9m9rs4aS44BGlGeRaSHC9HSiwyFGcYd9sWv1WnYyYx/Y3x2hNB+T0zc/HpCAASjNOsyN2FEf1XeeDhpWplvToDVs6CFF1BXk/BfPV3uzvcjL6Iut8p5aXMn0YdyKl346UMk2QOaVhAvM7ae5peEeKBaDfDVB8Ujv6v8= ARC-Message-Signature: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1757491304; c=relaxed/simple; bh=8zhwdOkNAvH6snya/EkLPN9kkVJJEEo+XjxQ5KsrJ7Y=; h=From:Date:Subject:MIME-Version:Content-Type:Message-Id:References: In-Reply-To:To:Cc; b=U4zXqHpEHoiCTZiwyTg+TiOqZRzjUIVanD40OynWlkX9IUZAl8Adz+ck2WUJhLyT9GI+df23HDgAf4XX3sy6RAcRrXP34L5W/fQ503AlQjQphOx6LXTFzE9Ou4djLnW6dOw2E2sHyVxcerA+9uEvsLnM1tSfrJx6NyVhqK2EA/Q= ARC-Authentication-Results: i=1; smtp.subspace.kernel.org; dmarc=none (p=none dis=none) header.from=suse.cz; spf=pass smtp.mailfrom=suse.cz; dkim=pass (1024-bit key) header.d=suse.cz header.i=@suse.cz header.b=YPw5IO+o; dkim=permerror (0-bit key) header.d=suse.cz header.i=@suse.cz header.b=sgSvKRdD; dkim=pass (1024-bit key) header.d=suse.cz header.i=@suse.cz header.b=YPw5IO+o; dkim=permerror (0-bit key) header.d=suse.cz header.i=@suse.cz header.b=sgSvKRdD; arc=none smtp.client-ip=195.135.223.131 Authentication-Results: smtp.subspace.kernel.org; dmarc=none (p=none dis=none) header.from=suse.cz Authentication-Results: smtp.subspace.kernel.org; spf=pass smtp.mailfrom=suse.cz Authentication-Results: smtp.subspace.kernel.org; dkim=pass (1024-bit key) header.d=suse.cz header.i=@suse.cz header.b="YPw5IO+o"; dkim=permerror (0-bit key) header.d=suse.cz header.i=@suse.cz header.b="sgSvKRdD"; dkim=pass (1024-bit key) header.d=suse.cz header.i=@suse.cz header.b="YPw5IO+o"; dkim=permerror (0-bit key) header.d=suse.cz header.i=@suse.cz header.b="sgSvKRdD" Received: from imap1.dmz-prg2.suse.org (unknown [10.150.64.97]) (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits) key-exchange X25519 server-signature RSA-PSS (4096 bits) server-digest SHA256) (No client certificate requested) by smtp-out2.suse.de (Postfix) with ESMTPS id 5304A5CB34; Wed, 10 Sep 2025 08:01:07 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.cz; s=susede2_rsa; t=1757491267; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc: mime-version:mime-version:content-type:content-type: content-transfer-encoding:content-transfer-encoding: in-reply-to:in-reply-to:references:references; bh=JknvsOxfmdGK99z5Y7Du3/KRq6UYdu6X7lKlwYsXCV8=; b=YPw5IO+oJdz5rFhGGiXcLxntBg4dQtv/AZfDa58nfUW0pulrlTBA78HLoJ8DalVdbjN3AJ C0egF1mCnxK3hEeFFci2DMnofdQigpmjOllksLUOJifn1F9oZDsgEoSbaLzbMkhxOmzqkE fcLhoNGshkR+ZB083dHYP3RKv71CQ98= DKIM-Signature: v=1; 
a=ed25519-sha256; c=relaxed/relaxed; d=suse.cz; s=susede2_ed25519; t=1757491267; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc: mime-version:mime-version:content-type:content-type: content-transfer-encoding:content-transfer-encoding: in-reply-to:in-reply-to:references:references; bh=JknvsOxfmdGK99z5Y7Du3/KRq6UYdu6X7lKlwYsXCV8=; b=sgSvKRdDq2XsiiXk/7Crr8DG5iWigMyXMXUcR0U8vCxpaJ7jxU3b7OtFGpoLxNh6s3912i WhVESaxaLatiyjAQ== Authentication-Results: smtp-out2.suse.de; none DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.cz; s=susede2_rsa; t=1757491267; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc: mime-version:mime-version:content-type:content-type: content-transfer-encoding:content-transfer-encoding: in-reply-to:in-reply-to:references:references; bh=JknvsOxfmdGK99z5Y7Du3/KRq6UYdu6X7lKlwYsXCV8=; b=YPw5IO+oJdz5rFhGGiXcLxntBg4dQtv/AZfDa58nfUW0pulrlTBA78HLoJ8DalVdbjN3AJ C0egF1mCnxK3hEeFFci2DMnofdQigpmjOllksLUOJifn1F9oZDsgEoSbaLzbMkhxOmzqkE fcLhoNGshkR+ZB083dHYP3RKv71CQ98= DKIM-Signature: v=1; a=ed25519-sha256; c=relaxed/relaxed; d=suse.cz; s=susede2_ed25519; t=1757491267; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc: mime-version:mime-version:content-type:content-type: content-transfer-encoding:content-transfer-encoding: in-reply-to:in-reply-to:references:references; bh=JknvsOxfmdGK99z5Y7Du3/KRq6UYdu6X7lKlwYsXCV8=; b=sgSvKRdDq2XsiiXk/7Crr8DG5iWigMyXMXUcR0U8vCxpaJ7jxU3b7OtFGpoLxNh6s3912i WhVESaxaLatiyjAQ== Received: from imap1.dmz-prg2.suse.org (localhost [127.0.0.1]) (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits) key-exchange X25519 server-signature RSA-PSS (4096 bits) server-digest SHA256) (No client certificate requested) by imap1.dmz-prg2.suse.org (Postfix) with ESMTPS id 2542513A54; Wed, 10 Sep 2025 08:01:07 +0000 (UTC) Received: from dovecot-director2.suse.de ([2a07:de40:b281:106:10:150:64:167]) by imap1.dmz-prg2.suse.org with ESMTPSA id wLDRCEMwwWgGJAAAD6G6ig (envelope-from ); Wed, 10 Sep 2025 08:01:07 +0000 From: Vlastimil Babka Date: Wed, 10 Sep 2025 10:01:19 +0200 Subject: [PATCH v8 17/23] testing/radix-tree/maple: Hack around kfree_rcu not existing Precedence: bulk X-Mailing-List: linux-kernel@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 Content-Type: text/plain; charset="utf-8" Content-Transfer-Encoding: quoted-printable Message-Id: <20250910-slub-percpu-caches-v8-17-ca3099d8352c@suse.cz> References: <20250910-slub-percpu-caches-v8-0-ca3099d8352c@suse.cz> In-Reply-To: <20250910-slub-percpu-caches-v8-0-ca3099d8352c@suse.cz> To: Suren Baghdasaryan , "Liam R. 
Howlett" , Christoph Lameter , David Rientjes Cc: Roman Gushchin , Harry Yoo , Uladzislau Rezki , Sidhartha Kumar , linux-mm@kvack.org, linux-kernel@vger.kernel.org, rcu@vger.kernel.org, maple-tree@lists.infradead.org, vbabka@suse.cz, Pedro Falcato X-Mailer: b4 0.14.2 X-Spamd-Result: default: False [-4.30 / 50.00]; BAYES_HAM(-3.00)[100.00%]; NEURAL_HAM_LONG(-1.00)[-1.000]; NEURAL_HAM_SHORT(-0.20)[-1.000]; MIME_GOOD(-0.10)[text/plain]; RCVD_VIA_SMTP_AUTH(0.00)[]; ARC_NA(0.00)[]; FREEMAIL_ENVRCPT(0.00)[gmail.com]; MIME_TRACE(0.00)[0:+]; FUZZY_RATELIMITED(0.00)[rspamd.com]; TO_DN_SOME(0.00)[]; RCPT_COUNT_TWELVE(0.00)[14]; RCVD_TLS_ALL(0.00)[]; MID_RHS_MATCH_FROM(0.00)[]; DKIM_SIGNED(0.00)[suse.cz:s=susede2_rsa,suse.cz:s=susede2_ed25519]; FROM_HAS_DN(0.00)[]; FREEMAIL_CC(0.00)[linux.dev,oracle.com,gmail.com,kvack.org,vger.kernel.org,lists.infradead.org,suse.cz,suse.de]; R_RATELIMIT(0.00)[to_ip_from(RLwn5r54y1cp81no5tmbbew5oc)]; FROM_EQ_ENVFROM(0.00)[]; TO_MATCH_ENVRCPT_ALL(0.00)[]; RCVD_COUNT_TWO(0.00)[2]; DBL_BLOCKED_OPENRESOLVER(0.00)[suse.de:email,suse.cz:mid,suse.cz:email] X-Spam-Flag: NO X-Spam-Level: X-Spam-Score: -4.30 From: "Liam R. Howlett" liburcu doesn't have kfree_rcu (or anything similar). Despite that, we can hack around it in a trivial fashion, by adding a wrapper. The wrapper only works for maple_nodes because we cannot get the kmem_cache pointer any other way in the test code. Link: https://lore.kernel.org/all/20250812162124.59417-1-pfalcato@suse.de/ Suggested-by: Pedro Falcato Signed-off-by: Liam R. Howlett Signed-off-by: Vlastimil Babka Reviewed-by: Suren Baghdasaryan --- tools/testing/shared/maple-shared.h | 11 +++++++++++ tools/testing/shared/maple-shim.c | 6 ++++++ 2 files changed, 17 insertions(+) diff --git a/tools/testing/shared/maple-shared.h b/tools/testing/shared/map= le-shared.h index dc4d30f3860b9bd23b4177c7d7926ac686887815..2a1e9a8594a2834326cd9374738= b2a2c7c3f9f7c 100644 --- a/tools/testing/shared/maple-shared.h +++ b/tools/testing/shared/maple-shared.h @@ -10,4 +10,15 @@ #include #include "linux/init.h" =20 +void maple_rcu_cb(struct rcu_head *head); +#define rcu_cb maple_rcu_cb + +#define kfree_rcu(_struct, _memb) \ +do { \ + typeof(_struct) _p_struct =3D (_struct); \ + \ + call_rcu(&((_p_struct)->_memb), rcu_cb); \ +} while(0); + + #endif /* __MAPLE_SHARED_H__ */ diff --git a/tools/testing/shared/maple-shim.c b/tools/testing/shared/maple= -shim.c index 9d7b743415660305416e972fa75b56824211b0eb..16252ee616c0489c80490ff25b8= d255427bf9fdc 100644 --- a/tools/testing/shared/maple-shim.c +++ b/tools/testing/shared/maple-shim.c @@ -6,3 +6,9 @@ #include =20 #include "../../../lib/maple_tree.c" + +void maple_rcu_cb(struct rcu_head *head) { + struct maple_node *node =3D container_of(head, struct maple_node, rcu); + + kmem_cache_free(maple_node_cache, node); +} --=20 2.51.0 From nobody Thu Oct 2 22:43:17 2025 Received: from smtp-out1.suse.de (smtp-out1.suse.de [195.135.223.130]) (using TLSv1.2 with cipher ECDHE-RSA-AES128-GCM-SHA256 (128/128 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id DF0B8312827 for ; Wed, 10 Sep 2025 08:02:00 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; arc=none smtp.client-ip=195.135.223.130 ARC-Seal: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1757491325; cv=none; b=Uad3HohxY5ZcjQGWNSj21P2mOJIsF2om2/+EO5YH8NLtVQJiHWTeLsn/AYr6/9s/LzQuVXkBCA1f7/Hm/THXgcIwRZJdY8mOmhTOrrDmhDJtvTsKX+PfmBbeW1pdsV2VWP0EyoU1EOvr5szjryCdRjlvpV6llBQK9TlKIH/Ld28= ARC-Message-Signature: i=1; 
a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1757491325; c=relaxed/simple; bh=WzPjyy36jprPHGMdn+7b39tM0IxZONzlFwPhCfr4SFU=; h=From:Date:Subject:MIME-Version:Content-Type:Message-Id:References: In-Reply-To:To:Cc; b=WqBA7NcC44UMWld1cCakmfAdYTUJj9Z7KoxjCD1HldPOMEI9HaLCLHTLLWDhUNpptIi5D3advUXWGZXSrSSzE85hE2KRa60wS7V+AAlx05JE2H/nGBDz7aXvt1OpqcTFGCnLsbUGRcnNgpVOpcc51Rhr43xTiD+ZP7u1EU4LQks= ARC-Authentication-Results: i=1; smtp.subspace.kernel.org; dmarc=none (p=none dis=none) header.from=suse.cz; spf=pass smtp.mailfrom=suse.cz; dkim=pass (1024-bit key) header.d=suse.cz header.i=@suse.cz header.b=io7YEaN7; dkim=permerror (0-bit key) header.d=suse.cz header.i=@suse.cz header.b=PVOZcPyd; dkim=pass (1024-bit key) header.d=suse.cz header.i=@suse.cz header.b=io7YEaN7; dkim=permerror (0-bit key) header.d=suse.cz header.i=@suse.cz header.b=PVOZcPyd; arc=none smtp.client-ip=195.135.223.130 Authentication-Results: smtp.subspace.kernel.org; dmarc=none (p=none dis=none) header.from=suse.cz Authentication-Results: smtp.subspace.kernel.org; spf=pass smtp.mailfrom=suse.cz Authentication-Results: smtp.subspace.kernel.org; dkim=pass (1024-bit key) header.d=suse.cz header.i=@suse.cz header.b="io7YEaN7"; dkim=permerror (0-bit key) header.d=suse.cz header.i=@suse.cz header.b="PVOZcPyd"; dkim=pass (1024-bit key) header.d=suse.cz header.i=@suse.cz header.b="io7YEaN7"; dkim=permerror (0-bit key) header.d=suse.cz header.i=@suse.cz header.b="PVOZcPyd" Received: from imap1.dmz-prg2.suse.org (imap1.dmz-prg2.suse.org [IPv6:2a07:de40:b281:104:10:150:64:97]) (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits) key-exchange X25519 server-signature RSA-PSS (4096 bits) server-digest SHA256) (No client certificate requested) by smtp-out1.suse.de (Postfix) with ESMTPS id 5789034C8E; Wed, 10 Sep 2025 08:01:07 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.cz; s=susede2_rsa; t=1757491267; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc: mime-version:mime-version:content-type:content-type: content-transfer-encoding:content-transfer-encoding: in-reply-to:in-reply-to:references:references; bh=7f5rvf5fiuDnTXDrW5D1lkLwPQ7P+BbSXdEN380kRB4=; b=io7YEaN7JfCJ1wu/yUuk2NAxP9Ebp6nAU9lmg28Sm4KjjDMVgh799CMRWcT2yS9Lc5n3k1 98EsPw1qk2B9hiea2Ui4sIxXyvKuS4hQBSixxqf8tuQqLa4z4InXrFV8QtBooKpQL4K0z3 nRr+iuU0nGFBgrdULLbOWVlzeKGLI+s= DKIM-Signature: v=1; a=ed25519-sha256; c=relaxed/relaxed; d=suse.cz; s=susede2_ed25519; t=1757491267; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc: mime-version:mime-version:content-type:content-type: content-transfer-encoding:content-transfer-encoding: in-reply-to:in-reply-to:references:references; bh=7f5rvf5fiuDnTXDrW5D1lkLwPQ7P+BbSXdEN380kRB4=; b=PVOZcPydeVLCQhR20WGg2D7fJbtC0/8gpIAw07BzvaQ+xA2G4nKfu//a/0QOOBklIm+0QA 6WD9nVCajNnGg0Ag== Authentication-Results: smtp-out1.suse.de; dkim=pass header.d=suse.cz header.s=susede2_rsa header.b=io7YEaN7; dkim=pass header.d=suse.cz header.s=susede2_ed25519 header.b=PVOZcPyd DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.cz; s=susede2_rsa; t=1757491267; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc: mime-version:mime-version:content-type:content-type: content-transfer-encoding:content-transfer-encoding: in-reply-to:in-reply-to:references:references; bh=7f5rvf5fiuDnTXDrW5D1lkLwPQ7P+BbSXdEN380kRB4=; b=io7YEaN7JfCJ1wu/yUuk2NAxP9Ebp6nAU9lmg28Sm4KjjDMVgh799CMRWcT2yS9Lc5n3k1 98EsPw1qk2B9hiea2Ui4sIxXyvKuS4hQBSixxqf8tuQqLa4z4InXrFV8QtBooKpQL4K0z3 nRr+iuU0nGFBgrdULLbOWVlzeKGLI+s= 
DKIM-Signature: v=1; a=ed25519-sha256; c=relaxed/relaxed; d=suse.cz; s=susede2_ed25519; t=1757491267; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc: mime-version:mime-version:content-type:content-type: content-transfer-encoding:content-transfer-encoding: in-reply-to:in-reply-to:references:references; bh=7f5rvf5fiuDnTXDrW5D1lkLwPQ7P+BbSXdEN380kRB4=; b=PVOZcPydeVLCQhR20WGg2D7fJbtC0/8gpIAw07BzvaQ+xA2G4nKfu//a/0QOOBklIm+0QA 6WD9nVCajNnGg0Ag== Received: from imap1.dmz-prg2.suse.org (localhost [127.0.0.1]) (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits) key-exchange X25519 server-signature RSA-PSS (4096 bits) server-digest SHA256) (No client certificate requested) by imap1.dmz-prg2.suse.org (Postfix) with ESMTPS id 3A7B213ABD; Wed, 10 Sep 2025 08:01:07 +0000 (UTC) Received: from dovecot-director2.suse.de ([2a07:de40:b281:106:10:150:64:167]) by imap1.dmz-prg2.suse.org with ESMTPSA id sMf8DUMwwWgGJAAAD6G6ig (envelope-from ); Wed, 10 Sep 2025 08:01:07 +0000 From: Vlastimil Babka Date: Wed, 10 Sep 2025 10:01:20 +0200 Subject: [PATCH v8 18/23] maple_tree: Use kfree_rcu in ma_free_rcu Precedence: bulk X-Mailing-List: linux-kernel@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 Content-Type: text/plain; charset="utf-8" Content-Transfer-Encoding: quoted-printable Message-Id: <20250910-slub-percpu-caches-v8-18-ca3099d8352c@suse.cz> References: <20250910-slub-percpu-caches-v8-0-ca3099d8352c@suse.cz> In-Reply-To: <20250910-slub-percpu-caches-v8-0-ca3099d8352c@suse.cz> To: Suren Baghdasaryan , "Liam R. Howlett" , Christoph Lameter , David Rientjes Cc: Roman Gushchin , Harry Yoo , Uladzislau Rezki , Sidhartha Kumar , linux-mm@kvack.org, linux-kernel@vger.kernel.org, rcu@vger.kernel.org, maple-tree@lists.infradead.org, vbabka@suse.cz, Pedro Falcato X-Mailer: b4 0.14.2 X-Spam-Level: X-Spam-Flag: NO X-Rspamd-Queue-Id: 5789034C8E X-Rspamd-Action: no action X-Rspamd-Server: rspamd1.dmz-prg2.suse.org X-Spamd-Result: default: False [-4.51 / 50.00]; BAYES_HAM(-3.00)[100.00%]; NEURAL_HAM_LONG(-1.00)[-1.000]; R_DKIM_ALLOW(-0.20)[suse.cz:s=susede2_rsa,suse.cz:s=susede2_ed25519]; NEURAL_HAM_SHORT(-0.20)[-1.000]; MIME_GOOD(-0.10)[text/plain]; MX_GOOD(-0.01)[]; DKIM_SIGNED(0.00)[suse.cz:s=susede2_rsa,suse.cz:s=susede2_ed25519]; RBL_SPAMHAUS_BLOCKED_OPENRESOLVER(0.00)[2a07:de40:b281:104:10:150:64:97:from]; FUZZY_RATELIMITED(0.00)[rspamd.com]; ARC_NA(0.00)[]; RCPT_COUNT_TWELVE(0.00)[14]; TO_MATCH_ENVRCPT_ALL(0.00)[]; MIME_TRACE(0.00)[0:+]; SPAMHAUS_XBL(0.00)[2a07:de40:b281:104:10:150:64:97:from]; FREEMAIL_ENVRCPT(0.00)[gmail.com]; RCVD_TLS_ALL(0.00)[]; TO_DN_SOME(0.00)[]; RCVD_COUNT_TWO(0.00)[2]; DNSWL_BLOCKED(0.00)[2a07:de40:b281:106:10:150:64:167:received,2a07:de40:b281:104:10:150:64:97:from]; FROM_EQ_ENVFROM(0.00)[]; FROM_HAS_DN(0.00)[]; FREEMAIL_CC(0.00)[linux.dev,oracle.com,gmail.com,kvack.org,vger.kernel.org,lists.infradead.org,suse.cz,suse.de]; MID_RHS_MATCH_FROM(0.00)[]; RCVD_VIA_SMTP_AUTH(0.00)[]; RECEIVED_SPAMHAUS_BLOCKED_OPENRESOLVER(0.00)[2a07:de40:b281:106:10:150:64:167:received]; DKIM_TRACE(0.00)[suse.cz:+]; R_RATELIMIT(0.00)[to_ip_from(RLfsjnp7neds983g95ihcnuzgq)]; DBL_BLOCKED_OPENRESOLVER(0.00)[suse.de:email,suse.cz:dkim,suse.cz:mid,suse.cz:email] X-Spam-Score: -4.51 From: Pedro Falcato kfree_rcu is an optimized version of call_rcu + kfree. 
It used to not be possible to call it on non-kmalloc objects, but this restriction was lifted ever since SLOB was dropped from the kernel, and since commit 6c6c47b063b5 ("mm, slab: call kvfree_rcu_barrier() from kmem_cache_destroy(= )"). Thus, replace call_rcu + mt_free_rcu with kfree_rcu. Signed-off-by: Pedro Falcato Signed-off-by: Vlastimil Babka Reviewed-by: Harry Yoo Reviewed-by: Suren Baghdasaryan --- lib/maple_tree.c | 13 +++---------- 1 file changed, 3 insertions(+), 10 deletions(-) diff --git a/lib/maple_tree.c b/lib/maple_tree.c index d034f170ac897341b40cfd050b6aee86b6d2cf60..c706e2e48f884fd156e25be2b17= eb5e154774db7 100644 --- a/lib/maple_tree.c +++ b/lib/maple_tree.c @@ -187,13 +187,6 @@ static inline void mt_free_bulk(size_t size, void __rc= u **nodes) kmem_cache_free_bulk(maple_node_cache, size, (void **)nodes); } =20 -static void mt_free_rcu(struct rcu_head *head) -{ - struct maple_node *node =3D container_of(head, struct maple_node, rcu); - - kmem_cache_free(maple_node_cache, node); -} - /* * ma_free_rcu() - Use rcu callback to free a maple node * @node: The node to free @@ -204,7 +197,7 @@ static void mt_free_rcu(struct rcu_head *head) static void ma_free_rcu(struct maple_node *node) { WARN_ON(node->parent !=3D ma_parent_ptr(node)); - call_rcu(&node->rcu, mt_free_rcu); + kfree_rcu(node, rcu); } =20 static void mt_set_height(struct maple_tree *mt, unsigned char height) @@ -5099,7 +5092,7 @@ static void mt_free_walk(struct rcu_head *head) mt_free_bulk(node->slot_len, slots); =20 free_leaf: - mt_free_rcu(&node->rcu); + mt_free_one(node); } =20 static inline void __rcu **mte_destroy_descend(struct maple_enode **enode, @@ -5183,7 +5176,7 @@ static void mt_destroy_walk(struct maple_enode *enode= , struct maple_tree *mt, =20 free_leaf: if (free) - mt_free_rcu(&node->rcu); + mt_free_one(node); else mt_clear_meta(mt, node, node->type); } --=20 2.51.0 From nobody Thu Oct 2 22:43:17 2025 Received: from smtp-out2.suse.de (smtp-out2.suse.de [195.135.223.131]) (using TLSv1.2 with cipher ECDHE-RSA-AES128-GCM-SHA256 (128/128 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id EA83A314A77 for ; Wed, 10 Sep 2025 08:02:23 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; arc=none smtp.client-ip=195.135.223.131 ARC-Seal: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1757491345; cv=none; b=QbdTrrd6H17kD5+5jBKXXR81lC0l9cy3NUjf/Li3UbNhE57m2i8PI7Yqf+3I1oHX2w4qlfWbd+5074TOVMe4aHwpGzGkF9KTYiRUPyLf4OkJjJ1Qlh++vKChM2xIziYD0Dtof4DJsgoKhsyM8owtQ43ACCfpPACrvfV5JQ+brtg= ARC-Message-Signature: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1757491345; c=relaxed/simple; bh=th11ShjVAeX5+cSm0X+revws6o8w5NssszS+FOPUo4g=; h=From:Date:Subject:MIME-Version:Content-Type:Message-Id:References: In-Reply-To:To:Cc; b=tfsHCl5KmNHuCdCWCdhu30bdoA3ivPFX3jk1WB60mVOmfm3M1tDuf0nIy2asUDkJQKxisONwbhfuDdxXe6Q+EWpsSz4zl/7zdQ/ZrLKzZqe57RxK8YEWKL/uNnqENU8o8vqLGSp6cTXBE3CXcTsecDFBC87t/+Y8VqyFXHajVUY= ARC-Authentication-Results: i=1; smtp.subspace.kernel.org; dmarc=none (p=none dis=none) header.from=suse.cz; spf=pass smtp.mailfrom=suse.cz; dkim=pass (1024-bit key) header.d=suse.cz header.i=@suse.cz header.b=XH1P/pCm; dkim=permerror (0-bit key) header.d=suse.cz header.i=@suse.cz header.b=96GHuDHG; dkim=pass (1024-bit key) header.d=suse.cz header.i=@suse.cz header.b=XH1P/pCm; dkim=permerror (0-bit key) header.d=suse.cz header.i=@suse.cz header.b=96GHuDHG; arc=none smtp.client-ip=195.135.223.131 Authentication-Results: smtp.subspace.kernel.org; dmarc=none 
(p=none dis=none) header.from=suse.cz Authentication-Results: smtp.subspace.kernel.org; spf=pass smtp.mailfrom=suse.cz Authentication-Results: smtp.subspace.kernel.org; dkim=pass (1024-bit key) header.d=suse.cz header.i=@suse.cz header.b="XH1P/pCm"; dkim=permerror (0-bit key) header.d=suse.cz header.i=@suse.cz header.b="96GHuDHG"; dkim=pass (1024-bit key) header.d=suse.cz header.i=@suse.cz header.b="XH1P/pCm"; dkim=permerror (0-bit key) header.d=suse.cz header.i=@suse.cz header.b="96GHuDHG" Received: from imap1.dmz-prg2.suse.org (imap1.dmz-prg2.suse.org [IPv6:2a07:de40:b281:104:10:150:64:97]) (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits) key-exchange X25519 server-signature RSA-PSS (4096 bits) server-digest SHA256) (No client certificate requested) by smtp-out2.suse.de (Postfix) with ESMTPS id 65D705CB7F; Wed, 10 Sep 2025 08:01:07 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.cz; s=susede2_rsa; t=1757491267; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc: mime-version:mime-version:content-type:content-type: content-transfer-encoding:content-transfer-encoding: in-reply-to:in-reply-to:references:references; bh=suObi2/GY4gIdOy8/pJdudh4YHfdoLo9Lt+1E266qUA=; b=XH1P/pCm7aIhdWR2K+hIhZNEHdMZ3IZbS3VhIImYaSvMZvBS+/fs5djvmdoTXkHjT78KPG KCtg3OkOWqkWN6Qut5anMLrs+F+LOsMsjQjHugmwQJikMbosqj/tBQTLL2ft0qZRI1ucjk eEAdqrh908w1crlBF+wFd9tGnM8Wwnw= DKIM-Signature: v=1; a=ed25519-sha256; c=relaxed/relaxed; d=suse.cz; s=susede2_ed25519; t=1757491267; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc: mime-version:mime-version:content-type:content-type: content-transfer-encoding:content-transfer-encoding: in-reply-to:in-reply-to:references:references; bh=suObi2/GY4gIdOy8/pJdudh4YHfdoLo9Lt+1E266qUA=; b=96GHuDHGXNaNLLeyYvM8kaj/9vvyLDirzWizLmA/t8xdUUVXJF6oXmSnEj3YQkPpJT5iuk ZBzeuMMKVchYMCDQ== Authentication-Results: smtp-out2.suse.de; dkim=pass header.d=suse.cz header.s=susede2_rsa header.b="XH1P/pCm"; dkim=pass header.d=suse.cz header.s=susede2_ed25519 header.b=96GHuDHG DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.cz; s=susede2_rsa; t=1757491267; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc: mime-version:mime-version:content-type:content-type: content-transfer-encoding:content-transfer-encoding: in-reply-to:in-reply-to:references:references; bh=suObi2/GY4gIdOy8/pJdudh4YHfdoLo9Lt+1E266qUA=; b=XH1P/pCm7aIhdWR2K+hIhZNEHdMZ3IZbS3VhIImYaSvMZvBS+/fs5djvmdoTXkHjT78KPG KCtg3OkOWqkWN6Qut5anMLrs+F+LOsMsjQjHugmwQJikMbosqj/tBQTLL2ft0qZRI1ucjk eEAdqrh908w1crlBF+wFd9tGnM8Wwnw= DKIM-Signature: v=1; a=ed25519-sha256; c=relaxed/relaxed; d=suse.cz; s=susede2_ed25519; t=1757491267; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc: mime-version:mime-version:content-type:content-type: content-transfer-encoding:content-transfer-encoding: in-reply-to:in-reply-to:references:references; bh=suObi2/GY4gIdOy8/pJdudh4YHfdoLo9Lt+1E266qUA=; b=96GHuDHGXNaNLLeyYvM8kaj/9vvyLDirzWizLmA/t8xdUUVXJF6oXmSnEj3YQkPpJT5iuk ZBzeuMMKVchYMCDQ== Received: from imap1.dmz-prg2.suse.org (localhost [127.0.0.1]) (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits) key-exchange X25519 server-signature RSA-PSS (4096 bits) server-digest SHA256) (No client certificate requested) by imap1.dmz-prg2.suse.org (Postfix) with ESMTPS id 4EFFA13AD1; Wed, 10 Sep 2025 08:01:07 +0000 (UTC) Received: from dovecot-director2.suse.de ([2a07:de40:b281:106:10:150:64:167]) by imap1.dmz-prg2.suse.org with ESMTPSA id UI7/EkMwwWgGJAAAD6G6ig 
(envelope-from ); Wed, 10 Sep 2025 08:01:07 +0000 From: Vlastimil Babka Date: Wed, 10 Sep 2025 10:01:21 +0200 Subject: [PATCH v8 19/23] maple_tree: Replace mt_free_one() with kfree() Precedence: bulk X-Mailing-List: linux-kernel@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 Content-Type: text/plain; charset="utf-8" Content-Transfer-Encoding: quoted-printable Message-Id: <20250910-slub-percpu-caches-v8-19-ca3099d8352c@suse.cz> References: <20250910-slub-percpu-caches-v8-0-ca3099d8352c@suse.cz> In-Reply-To: <20250910-slub-percpu-caches-v8-0-ca3099d8352c@suse.cz> To: Suren Baghdasaryan , "Liam R. Howlett" , Christoph Lameter , David Rientjes Cc: Roman Gushchin , Harry Yoo , Uladzislau Rezki , Sidhartha Kumar , linux-mm@kvack.org, linux-kernel@vger.kernel.org, rcu@vger.kernel.org, maple-tree@lists.infradead.org, vbabka@suse.cz, Pedro Falcato X-Mailer: b4 0.14.2 X-Spamd-Result: default: False [-4.51 / 50.00]; BAYES_HAM(-3.00)[100.00%]; NEURAL_HAM_LONG(-1.00)[-1.000]; R_DKIM_ALLOW(-0.20)[suse.cz:s=susede2_rsa,suse.cz:s=susede2_ed25519]; NEURAL_HAM_SHORT(-0.20)[-1.000]; MIME_GOOD(-0.10)[text/plain]; MX_GOOD(-0.01)[]; DKIM_SIGNED(0.00)[suse.cz:s=susede2_rsa,suse.cz:s=susede2_ed25519]; RBL_SPAMHAUS_BLOCKED_OPENRESOLVER(0.00)[2a07:de40:b281:104:10:150:64:97:from]; FUZZY_RATELIMITED(0.00)[rspamd.com]; ARC_NA(0.00)[]; RCPT_COUNT_TWELVE(0.00)[14]; TO_MATCH_ENVRCPT_ALL(0.00)[]; MIME_TRACE(0.00)[0:+]; SPAMHAUS_XBL(0.00)[2a07:de40:b281:104:10:150:64:97:from]; FREEMAIL_ENVRCPT(0.00)[gmail.com]; RCVD_TLS_ALL(0.00)[]; TO_DN_SOME(0.00)[]; RCVD_COUNT_TWO(0.00)[2]; DNSWL_BLOCKED(0.00)[2a07:de40:b281:106:10:150:64:167:received,2a07:de40:b281:104:10:150:64:97:from]; FROM_EQ_ENVFROM(0.00)[]; FROM_HAS_DN(0.00)[]; FREEMAIL_CC(0.00)[linux.dev,oracle.com,gmail.com,kvack.org,vger.kernel.org,lists.infradead.org,suse.cz,suse.de]; MID_RHS_MATCH_FROM(0.00)[]; RCVD_VIA_SMTP_AUTH(0.00)[]; RECEIVED_SPAMHAUS_BLOCKED_OPENRESOLVER(0.00)[2a07:de40:b281:106:10:150:64:167:received]; DKIM_TRACE(0.00)[suse.cz:+]; R_RATELIMIT(0.00)[to_ip_from(RLfsjnp7neds983g95ihcnuzgq)]; DBL_BLOCKED_OPENRESOLVER(0.00)[suse.cz:mid,suse.cz:dkim,suse.cz:email,suse.de:email] X-Spam-Flag: NO X-Spam-Level: X-Rspamd-Queue-Id: 65D705CB7F X-Rspamd-Server: rspamd2.dmz-prg2.suse.org X-Rspamd-Action: no action X-Spam-Score: -4.51 From: Pedro Falcato kfree() is a little shorter and works with kmem_cache_alloc'd pointers too. Also lets us remove one more helper. 
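For illustration, a minimal sketch of the pattern this relies on (not part of the
diff below): an object allocated from a kmem_cache can be passed directly to
kfree(), which resolves the owning cache from the slab the object sits in, so the
maple-tree-specific free helper becomes redundant.

	/* sketch only; maple_node_cache is the cache created in maple_tree_init() */
	struct maple_node *node;

	node = kmem_cache_alloc(maple_node_cache, GFP_KERNEL);
	if (node)
		kfree(node);	/* frees back to maple_node_cache */
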
Signed-off-by: Pedro Falcato Signed-off-by: Vlastimil Babka Reviewed-by: Suren Baghdasaryan --- lib/maple_tree.c | 13 ++++--------- 1 file changed, 4 insertions(+), 9 deletions(-) diff --git a/lib/maple_tree.c b/lib/maple_tree.c index c706e2e48f884fd156e25be2b17eb5e154774db7..0439aaacf6cb1f39d0d23af2e2a= 5af1d27ab32be 100644 --- a/lib/maple_tree.c +++ b/lib/maple_tree.c @@ -177,11 +177,6 @@ static inline int mt_alloc_bulk(gfp_t gfp, size_t size= , void **nodes) return kmem_cache_alloc_bulk(maple_node_cache, gfp, size, nodes); } =20 -static inline void mt_free_one(struct maple_node *node) -{ - kmem_cache_free(maple_node_cache, node); -} - static inline void mt_free_bulk(size_t size, void __rcu **nodes) { kmem_cache_free_bulk(maple_node_cache, size, (void **)nodes); @@ -5092,7 +5087,7 @@ static void mt_free_walk(struct rcu_head *head) mt_free_bulk(node->slot_len, slots); =20 free_leaf: - mt_free_one(node); + kfree(node); } =20 static inline void __rcu **mte_destroy_descend(struct maple_enode **enode, @@ -5176,7 +5171,7 @@ static void mt_destroy_walk(struct maple_enode *enode= , struct maple_tree *mt, =20 free_leaf: if (free) - mt_free_one(node); + kfree(node); else mt_clear_meta(mt, node, node->type); } @@ -5385,7 +5380,7 @@ void mas_destroy(struct ma_state *mas) mt_free_bulk(count, (void __rcu **)&node->slot[1]); total -=3D count; } - mt_free_one(ma_mnode_ptr(node)); + kfree(ma_mnode_ptr(node)); total--; } =20 @@ -6373,7 +6368,7 @@ static void mas_dup_free(struct ma_state *mas) } =20 node =3D mte_to_node(mas->node); - mt_free_one(node); + kfree(node); } =20 /* --=20 2.51.0 From nobody Thu Oct 2 22:43:17 2025 Received: from smtp-out2.suse.de (smtp-out2.suse.de [195.135.223.131]) (using TLSv1.2 with cipher ECDHE-RSA-AES128-GCM-SHA256 (128/128 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id 09F2D314B8A for ; Wed, 10 Sep 2025 08:02:41 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; arc=none smtp.client-ip=195.135.223.131 ARC-Seal: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1757491363; cv=none; b=eOJcOgG91lW1XjLU+5B2pH++zgpcVQII295rGfD43OajT3q8fXkh6m2dqqvEty9FoTllTbr1Bzz2wYqT5sqoDpi4kJIo/6jU4jsU+cAhlMIeGA+VO0y+S2oXM199k6UgjO1x8NRr/d4NtQjT81zY9P5toZ4zVU6JpALQskZc5B8= ARC-Message-Signature: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1757491363; c=relaxed/simple; bh=hL/uYA3U34m1+gJpgX8WkZHdiwJVs5hstD7odj5T4U0=; h=From:Date:Subject:MIME-Version:Content-Type:Message-Id:References: In-Reply-To:To:Cc; b=LkffZCOMul+sy+bpqf97R6TovkfbT6NOo8ZYgYGB+4/7lRmsGLQ6JExr4lEhHTSpzDmUh4cQjISwrV9iFMbkod3rqlYCCYEfr4e6hBOP0d521mCqMIxNUPJqsw66UoXddqHcTj9RuzsQV1WmXC61orelj643DTRnKyXVRE0p4Co= ARC-Authentication-Results: i=1; smtp.subspace.kernel.org; dmarc=none (p=none dis=none) header.from=suse.cz; spf=pass smtp.mailfrom=suse.cz; dkim=pass (1024-bit key) header.d=suse.cz header.i=@suse.cz header.b=MnPJRboF; dkim=permerror (0-bit key) header.d=suse.cz header.i=@suse.cz header.b=esJ5Hvj2; dkim=pass (1024-bit key) header.d=suse.cz header.i=@suse.cz header.b=MnPJRboF; dkim=permerror (0-bit key) header.d=suse.cz header.i=@suse.cz header.b=esJ5Hvj2; arc=none smtp.client-ip=195.135.223.131 Authentication-Results: smtp.subspace.kernel.org; dmarc=none (p=none dis=none) header.from=suse.cz Authentication-Results: smtp.subspace.kernel.org; spf=pass smtp.mailfrom=suse.cz Authentication-Results: smtp.subspace.kernel.org; dkim=pass (1024-bit key) header.d=suse.cz header.i=@suse.cz header.b="MnPJRboF"; dkim=permerror (0-bit key) 
header.d=suse.cz header.i=@suse.cz header.b="esJ5Hvj2"; dkim=pass (1024-bit key) header.d=suse.cz header.i=@suse.cz header.b="MnPJRboF"; dkim=permerror (0-bit key) header.d=suse.cz header.i=@suse.cz header.b="esJ5Hvj2" Received: from imap1.dmz-prg2.suse.org (unknown [10.150.64.97]) (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits) key-exchange X25519 server-signature RSA-PSS (4096 bits) server-digest SHA256) (No client certificate requested) by smtp-out2.suse.de (Postfix) with ESMTPS id 78E7E5CB97; Wed, 10 Sep 2025 08:01:07 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.cz; s=susede2_rsa; t=1757491267; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc: mime-version:mime-version:content-type:content-type: content-transfer-encoding:content-transfer-encoding: in-reply-to:in-reply-to:references:references; bh=Rqh/ojynYkYiIA62N7CbXckVi4xdirpy4Yia9XnPMB4=; b=MnPJRboFgZMIbt5IKIPfJ6iLTv8kHaJ/WplabiJTV8bqN/ghZC3smI8JFrAxu6K5HJlvVT ORVtIah1FGK/csqvSMRCBd9CP1uNxoCbtgLJb7t34guamaJrPAjZ6gw+C0BY6tZYyJ8/QN j/R++l4QTRXYqwbfxLgCfT4Z7paEWck= DKIM-Signature: v=1; a=ed25519-sha256; c=relaxed/relaxed; d=suse.cz; s=susede2_ed25519; t=1757491267; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc: mime-version:mime-version:content-type:content-type: content-transfer-encoding:content-transfer-encoding: in-reply-to:in-reply-to:references:references; bh=Rqh/ojynYkYiIA62N7CbXckVi4xdirpy4Yia9XnPMB4=; b=esJ5Hvj2liQq/UumgoT0wUhi+UkAhJVchiVMkWRwQpaoHKltMer0pkevyzBz5htNbm3uXR MszEvC4j57xYb6DA== Authentication-Results: smtp-out2.suse.de; none DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.cz; s=susede2_rsa; t=1757491267; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc: mime-version:mime-version:content-type:content-type: content-transfer-encoding:content-transfer-encoding: in-reply-to:in-reply-to:references:references; bh=Rqh/ojynYkYiIA62N7CbXckVi4xdirpy4Yia9XnPMB4=; b=MnPJRboFgZMIbt5IKIPfJ6iLTv8kHaJ/WplabiJTV8bqN/ghZC3smI8JFrAxu6K5HJlvVT ORVtIah1FGK/csqvSMRCBd9CP1uNxoCbtgLJb7t34guamaJrPAjZ6gw+C0BY6tZYyJ8/QN j/R++l4QTRXYqwbfxLgCfT4Z7paEWck= DKIM-Signature: v=1; a=ed25519-sha256; c=relaxed/relaxed; d=suse.cz; s=susede2_ed25519; t=1757491267; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc: mime-version:mime-version:content-type:content-type: content-transfer-encoding:content-transfer-encoding: in-reply-to:in-reply-to:references:references; bh=Rqh/ojynYkYiIA62N7CbXckVi4xdirpy4Yia9XnPMB4=; b=esJ5Hvj2liQq/UumgoT0wUhi+UkAhJVchiVMkWRwQpaoHKltMer0pkevyzBz5htNbm3uXR MszEvC4j57xYb6DA== Received: from imap1.dmz-prg2.suse.org (localhost [127.0.0.1]) (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits) key-exchange X25519 server-signature RSA-PSS (4096 bits) server-digest SHA256) (No client certificate requested) by imap1.dmz-prg2.suse.org (Postfix) with ESMTPS id 638E713ABF; Wed, 10 Sep 2025 08:01:07 +0000 (UTC) Received: from dovecot-director2.suse.de ([2a07:de40:b281:106:10:150:64:167]) by imap1.dmz-prg2.suse.org with ESMTPSA id IDsFGEMwwWgGJAAAD6G6ig (envelope-from ); Wed, 10 Sep 2025 08:01:07 +0000 From: Vlastimil Babka Date: Wed, 10 Sep 2025 10:01:22 +0200 Subject: [PATCH v8 20/23] tools/testing: Add support for prefilled slab sheafs Precedence: bulk X-Mailing-List: linux-kernel@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 Content-Type: text/plain; charset="utf-8" Content-Transfer-Encoding: quoted-printable Message-Id: 
<20250910-slub-percpu-caches-v8-20-ca3099d8352c@suse.cz> References: <20250910-slub-percpu-caches-v8-0-ca3099d8352c@suse.cz> In-Reply-To: <20250910-slub-percpu-caches-v8-0-ca3099d8352c@suse.cz> To: Suren Baghdasaryan , "Liam R. Howlett" , Christoph Lameter , David Rientjes Cc: Roman Gushchin , Harry Yoo , Uladzislau Rezki , Sidhartha Kumar , linux-mm@kvack.org, linux-kernel@vger.kernel.org, rcu@vger.kernel.org, maple-tree@lists.infradead.org, vbabka@suse.cz X-Mailer: b4 0.14.2 X-Spam-Level: X-Spamd-Result: default: False [-4.30 / 50.00]; BAYES_HAM(-3.00)[100.00%]; NEURAL_HAM_LONG(-1.00)[-1.000]; NEURAL_HAM_SHORT(-0.20)[-0.999]; MIME_GOOD(-0.10)[text/plain]; RCVD_VIA_SMTP_AUTH(0.00)[]; ARC_NA(0.00)[]; FREEMAIL_ENVRCPT(0.00)[gmail.com]; MIME_TRACE(0.00)[0:+]; FUZZY_RATELIMITED(0.00)[rspamd.com]; TO_DN_SOME(0.00)[]; RCPT_COUNT_TWELVE(0.00)[13]; RCVD_TLS_ALL(0.00)[]; MID_RHS_MATCH_FROM(0.00)[]; DKIM_SIGNED(0.00)[suse.cz:s=susede2_rsa,suse.cz:s=susede2_ed25519]; FROM_HAS_DN(0.00)[]; FREEMAIL_CC(0.00)[linux.dev,oracle.com,gmail.com,kvack.org,vger.kernel.org,lists.infradead.org,suse.cz]; R_RATELIMIT(0.00)[to_ip_from(RLwn5r54y1cp81no5tmbbew5oc),to(RL941jgdop1fyjkq8h4)]; FROM_EQ_ENVFROM(0.00)[]; TO_MATCH_ENVRCPT_ALL(0.00)[]; RCVD_COUNT_TWO(0.00)[2]; DBL_BLOCKED_OPENRESOLVER(0.00)[suse.cz:email,suse.cz:mid] X-Spam-Flag: NO X-Spam-Score: -4.30 From: "Liam R. Howlett" Add the prefilled sheaf structs to the slab header and the associated functions to the testing/shared/linux.c file. Signed-off-by: Liam R. Howlett Signed-off-by: Vlastimil Babka Reviewed-by: Suren Baghdasaryan --- tools/include/linux/slab.h | 28 ++++++++++++++ tools/testing/shared/linux.c | 89 ++++++++++++++++++++++++++++++++++++++++= ++++ 2 files changed, 117 insertions(+) diff --git a/tools/include/linux/slab.h b/tools/include/linux/slab.h index c5c5cc6db5668be2cc94c29065ccfa7ca7b4bb08..94937a699402bd1f31887dfb52b= 6fd0a3c986f43 100644 --- a/tools/include/linux/slab.h +++ b/tools/include/linux/slab.h @@ -123,6 +123,18 @@ struct kmem_cache_args { void (*ctor)(void *); }; =20 +struct slab_sheaf { + union { + struct list_head barn_list; + /* only used for prefilled sheafs */ + unsigned int capacity; + }; + struct kmem_cache *cache; + unsigned int size; + int node; /* only used for rcu_sheaf */ + void *objects[]; +}; + static inline void *kzalloc(size_t size, gfp_t gfp) { return kmalloc(size, gfp | __GFP_ZERO); @@ -173,5 +185,21 @@ __kmem_cache_create(const char *name, unsigned int siz= e, unsigned int align, void kmem_cache_free_bulk(struct kmem_cache *cachep, size_t size, void **l= ist); int kmem_cache_alloc_bulk(struct kmem_cache *cachep, gfp_t gfp, size_t siz= e, void **list); +struct slab_sheaf * +kmem_cache_prefill_sheaf(struct kmem_cache *s, gfp_t gfp, unsigned int siz= e); + +void * +kmem_cache_alloc_from_sheaf(struct kmem_cache *s, gfp_t gfp, + struct slab_sheaf *sheaf); + +void kmem_cache_return_sheaf(struct kmem_cache *s, gfp_t gfp, + struct slab_sheaf *sheaf); +int kmem_cache_refill_sheaf(struct kmem_cache *s, gfp_t gfp, + struct slab_sheaf **sheafp, unsigned int size); + +static inline unsigned int kmem_cache_sheaf_size(struct slab_sheaf *sheaf) +{ + return sheaf->size; +} =20 #endif /* _TOOLS_SLAB_H */ diff --git a/tools/testing/shared/linux.c b/tools/testing/shared/linux.c index 97b8412ccbb6d222604c7b397c53c65618d8d51b..4ceff7969b78cf8e33cd1e021c6= 8bc9f8a02a7a1 100644 --- a/tools/testing/shared/linux.c +++ b/tools/testing/shared/linux.c @@ -137,6 +137,12 @@ void kmem_cache_free_bulk(struct kmem_cache *cachep, s= ize_t size, 
void **list) if (kmalloc_verbose) pr_debug("Bulk free %p[0-%zu]\n", list, size - 1); =20 + if (cachep->exec_callback) { + if (cachep->callback) + cachep->callback(cachep->private); + cachep->exec_callback =3D false; + } + pthread_mutex_lock(&cachep->lock); for (int i =3D 0; i < size; i++) kmem_cache_free_locked(cachep, list[i]); @@ -242,6 +248,89 @@ __kmem_cache_create_args(const char *name, unsigned in= t size, return ret; } =20 +struct slab_sheaf * +kmem_cache_prefill_sheaf(struct kmem_cache *s, gfp_t gfp, unsigned int siz= e) +{ + struct slab_sheaf *sheaf; + unsigned int capacity; + + if (s->exec_callback) { + if (s->callback) + s->callback(s->private); + s->exec_callback =3D false; + } + + capacity =3D max(size, s->sheaf_capacity); + + sheaf =3D calloc(1, sizeof(*sheaf) + sizeof(void *) * capacity); + if (!sheaf) + return NULL; + + sheaf->cache =3D s; + sheaf->capacity =3D capacity; + sheaf->size =3D kmem_cache_alloc_bulk(s, gfp, size, sheaf->objects); + if (!sheaf->size) { + free(sheaf); + return NULL; + } + + return sheaf; +} + +int kmem_cache_refill_sheaf(struct kmem_cache *s, gfp_t gfp, + struct slab_sheaf **sheafp, unsigned int size) +{ + struct slab_sheaf *sheaf =3D *sheafp; + int refill; + + if (sheaf->size >=3D size) + return 0; + + if (size > sheaf->capacity) { + sheaf =3D kmem_cache_prefill_sheaf(s, gfp, size); + if (!sheaf) + return -ENOMEM; + + kmem_cache_return_sheaf(s, gfp, *sheafp); + *sheafp =3D sheaf; + return 0; + } + + refill =3D kmem_cache_alloc_bulk(s, gfp, size - sheaf->size, + &sheaf->objects[sheaf->size]); + if (!refill) + return -ENOMEM; + + sheaf->size +=3D refill; + return 0; +} + +void kmem_cache_return_sheaf(struct kmem_cache *s, gfp_t gfp, + struct slab_sheaf *sheaf) +{ + if (sheaf->size) + kmem_cache_free_bulk(s, sheaf->size, &sheaf->objects[0]); + + free(sheaf); +} + +void * +kmem_cache_alloc_from_sheaf(struct kmem_cache *s, gfp_t gfp, + struct slab_sheaf *sheaf) +{ + void *obj; + + if (sheaf->size =3D=3D 0) { + printf("Nothing left in sheaf!\n"); + return NULL; + } + + obj =3D sheaf->objects[--sheaf->size]; + sheaf->objects[sheaf->size] =3D NULL; + + return obj; +} + /* * Test the test infrastructure for kem_cache_alloc/free and bulk counterp= arts. 
*/ --=20 2.51.0 From nobody Thu Oct 2 22:43:17 2025 Received: from smtp-out2.suse.de (smtp-out2.suse.de [195.135.223.131]) (using TLSv1.2 with cipher ECDHE-RSA-AES128-GCM-SHA256 (128/128 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id 0C3EA314A68 for ; Wed, 10 Sep 2025 08:02:16 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; arc=none smtp.client-ip=195.135.223.131 ARC-Seal: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1757491338; cv=none; b=DLQfDjc3qTq0wTfbrWB0JJ7JqGe9fP9/cdf7S4V/ELh7njePEdjw5TnD+AQeePRD9G9G2sdbCOQYDjfpytD/t2+NhgeIkOPLHBkyl7E9Dh6q1SqOQ09h/cGPAWAv6FojjfJ23zF9V1ZVKaD4K2rBOAM/5jaAGFMb6Q/vQb8vEQE= ARC-Message-Signature: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1757491338; c=relaxed/simple; bh=97jPb6isJgk5mGRKtaJ/u0UwBbsN8pchOcvJjyq4Ons=; h=From:Date:Subject:MIME-Version:Content-Type:Message-Id:References: In-Reply-To:To:Cc; b=gWOkX+uOqcfJ6idNVIT2FFTAwqF0eUqMyu5Ok4xFtNHoUO+Oi4Z8iyEAuEr6YF0Lt0UTBtm/82irouam1ZdoS1ta2BDlVaYgec0h7EsyAYD3iCfdNhA9mm7RFZVNm8pROkDK8SesjNvGrjUMkasTOW6+SZKpvwYfO/G25od2/IQ= ARC-Authentication-Results: i=1; smtp.subspace.kernel.org; dmarc=none (p=none dis=none) header.from=suse.cz; spf=pass smtp.mailfrom=suse.cz; dkim=pass (1024-bit key) header.d=suse.cz header.i=@suse.cz header.b=qPRN7z58; dkim=permerror (0-bit key) header.d=suse.cz header.i=@suse.cz header.b=KRBOcgyK; dkim=pass (1024-bit key) header.d=suse.cz header.i=@suse.cz header.b=qPRN7z58; dkim=permerror (0-bit key) header.d=suse.cz header.i=@suse.cz header.b=KRBOcgyK; arc=none smtp.client-ip=195.135.223.131 Authentication-Results: smtp.subspace.kernel.org; dmarc=none (p=none dis=none) header.from=suse.cz Authentication-Results: smtp.subspace.kernel.org; spf=pass smtp.mailfrom=suse.cz Authentication-Results: smtp.subspace.kernel.org; dkim=pass (1024-bit key) header.d=suse.cz header.i=@suse.cz header.b="qPRN7z58"; dkim=permerror (0-bit key) header.d=suse.cz header.i=@suse.cz header.b="KRBOcgyK"; dkim=pass (1024-bit key) header.d=suse.cz header.i=@suse.cz header.b="qPRN7z58"; dkim=permerror (0-bit key) header.d=suse.cz header.i=@suse.cz header.b="KRBOcgyK" Received: from imap1.dmz-prg2.suse.org (unknown [10.150.64.97]) (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits) key-exchange X25519 server-signature RSA-PSS (4096 bits) server-digest SHA256) (No client certificate requested) by smtp-out2.suse.de (Postfix) with ESMTPS id 8E2E25CBB3; Wed, 10 Sep 2025 08:01:07 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.cz; s=susede2_rsa; t=1757491267; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc: mime-version:mime-version:content-type:content-type: content-transfer-encoding:content-transfer-encoding: in-reply-to:in-reply-to:references:references; bh=pnz03cwcLV8EB9QhMsDn/e799l+WXLPMjbfVlH7gPQQ=; b=qPRN7z58tu0fwS2dWCLm4Bz2vROD3Hj2xzGSbjWALLyxAogugw03kK57JqL82UG1reqdNo hnaUeMRTGOScTfT5oCFY2llxyfF+5oJbgwg2r0UOJAK0tefkovl4TVlY3k0rxuNSmXJz7F eem7k8abeghLrv2/3Nncm09EhB65Wwg= DKIM-Signature: v=1; a=ed25519-sha256; c=relaxed/relaxed; d=suse.cz; s=susede2_ed25519; t=1757491267; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc: mime-version:mime-version:content-type:content-type: content-transfer-encoding:content-transfer-encoding: in-reply-to:in-reply-to:references:references; bh=pnz03cwcLV8EB9QhMsDn/e799l+WXLPMjbfVlH7gPQQ=; b=KRBOcgyKsTdvIDhkL1umTw328jyrpxBTf4eur8VePhbpitpdsPy6PRbmdyOQCwWooxI+Qp cJUytgQrp9GHWcDg== Authentication-Results: 
smtp-out2.suse.de; none DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.cz; s=susede2_rsa; t=1757491267; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc: mime-version:mime-version:content-type:content-type: content-transfer-encoding:content-transfer-encoding: in-reply-to:in-reply-to:references:references; bh=pnz03cwcLV8EB9QhMsDn/e799l+WXLPMjbfVlH7gPQQ=; b=qPRN7z58tu0fwS2dWCLm4Bz2vROD3Hj2xzGSbjWALLyxAogugw03kK57JqL82UG1reqdNo hnaUeMRTGOScTfT5oCFY2llxyfF+5oJbgwg2r0UOJAK0tefkovl4TVlY3k0rxuNSmXJz7F eem7k8abeghLrv2/3Nncm09EhB65Wwg= DKIM-Signature: v=1; a=ed25519-sha256; c=relaxed/relaxed; d=suse.cz; s=susede2_ed25519; t=1757491267; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc: mime-version:mime-version:content-type:content-type: content-transfer-encoding:content-transfer-encoding: in-reply-to:in-reply-to:references:references; bh=pnz03cwcLV8EB9QhMsDn/e799l+WXLPMjbfVlH7gPQQ=; b=KRBOcgyKsTdvIDhkL1umTw328jyrpxBTf4eur8VePhbpitpdsPy6PRbmdyOQCwWooxI+Qp cJUytgQrp9GHWcDg== Received: from imap1.dmz-prg2.suse.org (localhost [127.0.0.1]) (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits) key-exchange X25519 server-signature RSA-PSS (4096 bits) server-digest SHA256) (No client certificate requested) by imap1.dmz-prg2.suse.org (Postfix) with ESMTPS id 7805D13A54; Wed, 10 Sep 2025 08:01:07 +0000 (UTC) Received: from dovecot-director2.suse.de ([2a07:de40:b281:106:10:150:64:167]) by imap1.dmz-prg2.suse.org with ESMTPSA id OGUJHUMwwWgGJAAAD6G6ig (envelope-from ); Wed, 10 Sep 2025 08:01:07 +0000 From: Vlastimil Babka Date: Wed, 10 Sep 2025 10:01:23 +0200 Subject: [PATCH v8 21/23] maple_tree: Prefilled sheaf conversion and testing Precedence: bulk X-Mailing-List: linux-kernel@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 Content-Type: text/plain; charset="utf-8" Content-Transfer-Encoding: quoted-printable Message-Id: <20250910-slub-percpu-caches-v8-21-ca3099d8352c@suse.cz> References: <20250910-slub-percpu-caches-v8-0-ca3099d8352c@suse.cz> In-Reply-To: <20250910-slub-percpu-caches-v8-0-ca3099d8352c@suse.cz> To: Suren Baghdasaryan , "Liam R. Howlett" , Christoph Lameter , David Rientjes Cc: Roman Gushchin , Harry Yoo , Uladzislau Rezki , Sidhartha Kumar , linux-mm@kvack.org, linux-kernel@vger.kernel.org, rcu@vger.kernel.org, maple-tree@lists.infradead.org, vbabka@suse.cz X-Mailer: b4 0.14.2 X-Spamd-Result: default: False [-4.30 / 50.00]; BAYES_HAM(-3.00)[100.00%]; NEURAL_HAM_LONG(-1.00)[-1.000]; NEURAL_HAM_SHORT(-0.20)[-0.999]; MIME_GOOD(-0.10)[text/plain]; RCVD_VIA_SMTP_AUTH(0.00)[]; ARC_NA(0.00)[]; FREEMAIL_ENVRCPT(0.00)[gmail.com]; MIME_TRACE(0.00)[0:+]; FUZZY_RATELIMITED(0.00)[rspamd.com]; TO_DN_SOME(0.00)[]; RCPT_COUNT_TWELVE(0.00)[13]; RCVD_TLS_ALL(0.00)[]; MID_RHS_MATCH_FROM(0.00)[]; DKIM_SIGNED(0.00)[suse.cz:s=susede2_rsa,suse.cz:s=susede2_ed25519]; FROM_HAS_DN(0.00)[]; FREEMAIL_CC(0.00)[linux.dev,oracle.com,gmail.com,kvack.org,vger.kernel.org,lists.infradead.org,suse.cz]; R_RATELIMIT(0.00)[to(RL941jgdop1fyjkq8h4),to_ip_from(RLwn5r54y1cp81no5tmbbew5oc)]; FROM_EQ_ENVFROM(0.00)[]; TO_MATCH_ENVRCPT_ALL(0.00)[]; RCVD_COUNT_TWO(0.00)[2]; DBL_BLOCKED_OPENRESOLVER(0.00)[suse.cz:mid,suse.cz:email] X-Spam-Flag: NO X-Spam-Level: X-Spam-Score: -4.30 From: "Liam R. Howlett" Use prefilled sheaves instead of bulk allocations. This should speed up the allocations and the return path of unused allocations. Remove the push and pop of nodes from the maple state as this is now handled by the slab layer with sheaves. 
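Roughly, the per-operation flow after the conversion looks like this (a sketch
only; the actual helpers are mt_get_sheaf(), mt_refill_sheaf(), mt_return_sheaf()
and the reworked mas_pop_node() in the diff below):

	/* preallocate: get a sheaf prefilled with the requested number of nodes */
	mas->sheaf = kmem_cache_prefill_sheaf(maple_node_cache, gfp, mas->node_request);

	/* consume nodes during the tree operation */
	node = kmem_cache_alloc_from_sheaf(maple_node_cache, GFP_NOWAIT, mas->sheaf);

	/* unused nodes are handed back with the sheaf instead of being
	 * pushed back onto the ma_state */
	kmem_cache_return_sheaf(maple_node_cache, GFP_NOWAIT, mas->sheaf);
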
Testing has been removed as necessary since the features of the tree have been reduced. Signed-off-by: Liam R. Howlett Signed-off-by: Vlastimil Babka Reviewed-by: Suren Baghdasaryan --- include/linux/maple_tree.h | 6 +- lib/maple_tree.c | 326 ++++++--------------------- tools/testing/radix-tree/maple.c | 461 ++---------------------------------= ---- tools/testing/shared/linux.c | 5 +- 4 files changed, 88 insertions(+), 710 deletions(-) diff --git a/include/linux/maple_tree.h b/include/linux/maple_tree.h index bafe143b1f783202e27b32567fffee4149e8e266..166fd67e00d882b1e6de1f80c1b= 590bba7497cd3 100644 --- a/include/linux/maple_tree.h +++ b/include/linux/maple_tree.h @@ -442,7 +442,8 @@ struct ma_state { struct maple_enode *node; /* The node containing this entry */ unsigned long min; /* The minimum index of this node - implied pivot min= */ unsigned long max; /* The maximum index of this node - implied pivot max= */ - struct maple_alloc *alloc; /* Allocated nodes for this operation */ + struct slab_sheaf *sheaf; /* Allocated nodes for this operation */ + unsigned long node_request; enum maple_status status; /* The status of the state (active, start, none= , etc) */ unsigned char depth; /* depth of tree descent during write */ unsigned char offset; @@ -490,7 +491,8 @@ struct ma_wr_state { .status =3D ma_start, \ .min =3D 0, \ .max =3D ULONG_MAX, \ - .alloc =3D NULL, \ + .node_request=3D 0, \ + .sheaf =3D NULL, \ .mas_flags =3D 0, \ .store_type =3D wr_invalid, \ } diff --git a/lib/maple_tree.c b/lib/maple_tree.c index 0439aaacf6cb1f39d0d23af2e2a5af1d27ab32be..a3fcb20227e506ed209554cc8c0= 41a53f7ef4903 100644 --- a/lib/maple_tree.c +++ b/lib/maple_tree.c @@ -182,6 +182,22 @@ static inline void mt_free_bulk(size_t size, void __rc= u **nodes) kmem_cache_free_bulk(maple_node_cache, size, (void **)nodes); } =20 +static void mt_return_sheaf(struct slab_sheaf *sheaf) +{ + kmem_cache_return_sheaf(maple_node_cache, GFP_NOWAIT, sheaf); +} + +static struct slab_sheaf *mt_get_sheaf(gfp_t gfp, int count) +{ + return kmem_cache_prefill_sheaf(maple_node_cache, gfp, count); +} + +static int mt_refill_sheaf(gfp_t gfp, struct slab_sheaf **sheaf, + unsigned int size) +{ + return kmem_cache_refill_sheaf(maple_node_cache, gfp, sheaf, size); +} + /* * ma_free_rcu() - Use rcu callback to free a maple node * @node: The node to free @@ -574,67 +590,6 @@ static __always_inline bool mte_dead_node(const struct= maple_enode *enode) return ma_dead_node(node); } =20 -/* - * mas_allocated() - Get the number of nodes allocated in a maple state. - * @mas: The maple state - * - * The ma_state alloc member is overloaded to hold a pointer to the first - * allocated node or to the number of requested nodes to allocate. If bit= 0 is - * set, then the alloc contains the number of requested nodes. If there i= s an - * allocated node, then the total allocated nodes is in that node. - * - * Return: The total number of nodes allocated - */ -static inline unsigned long mas_allocated(const struct ma_state *mas) -{ - if (!mas->alloc || ((unsigned long)mas->alloc & 0x1)) - return 0; - - return mas->alloc->total; -} - -/* - * mas_set_alloc_req() - Set the requested number of allocations. - * @mas: the maple state - * @count: the number of allocations. - * - * The requested number of allocations is either in the first allocated no= de, - * located in @mas->alloc->request_count, or directly in @mas->alloc if th= ere is - * no allocated node. Set the request either in the node or do the necess= ary - * encoding to store in @mas->alloc directly. 
- */ -static inline void mas_set_alloc_req(struct ma_state *mas, unsigned long c= ount) -{ - if (!mas->alloc || ((unsigned long)mas->alloc & 0x1)) { - if (!count) - mas->alloc =3D NULL; - else - mas->alloc =3D (struct maple_alloc *)(((count) << 1U) | 1U); - return; - } - - mas->alloc->request_count =3D count; -} - -/* - * mas_alloc_req() - get the requested number of allocations. - * @mas: The maple state - * - * The alloc count is either stored directly in @mas, or in - * @mas->alloc->request_count if there is at least one node allocated. De= code - * the request count if it's stored directly in @mas->alloc. - * - * Return: The allocation request count. - */ -static inline unsigned int mas_alloc_req(const struct ma_state *mas) -{ - if ((unsigned long)mas->alloc & 0x1) - return (unsigned long)(mas->alloc) >> 1; - else if (mas->alloc) - return mas->alloc->request_count; - return 0; -} - /* * ma_pivots() - Get a pointer to the maple node pivots. * @node: the maple node @@ -1120,77 +1075,15 @@ static int mas_ascend(struct ma_state *mas) */ static inline struct maple_node *mas_pop_node(struct ma_state *mas) { - struct maple_alloc *ret, *node =3D mas->alloc; - unsigned long total =3D mas_allocated(mas); - unsigned int req =3D mas_alloc_req(mas); + struct maple_node *ret; =20 - /* nothing or a request pending. */ - if (WARN_ON(!total)) + if (WARN_ON_ONCE(!mas->sheaf)) return NULL; =20 - if (total =3D=3D 1) { - /* single allocation in this ma_state */ - mas->alloc =3D NULL; - ret =3D node; - goto single_node; - } - - if (node->node_count =3D=3D 1) { - /* Single allocation in this node. */ - mas->alloc =3D node->slot[0]; - mas->alloc->total =3D node->total - 1; - ret =3D node; - goto new_head; - } - node->total--; - ret =3D node->slot[--node->node_count]; - node->slot[node->node_count] =3D NULL; - -single_node: -new_head: - if (req) { - req++; - mas_set_alloc_req(mas, req); - } - + ret =3D kmem_cache_alloc_from_sheaf(maple_node_cache, GFP_NOWAIT, mas->sh= eaf); memset(ret, 0, sizeof(*ret)); - return (struct maple_node *)ret; -} - -/* - * mas_push_node() - Push a node back on the maple state allocation. - * @mas: The maple state - * @used: The used maple node - * - * Stores the maple node back into @mas->alloc for reuse. Updates allocat= ed and - * requested node count as necessary. 
- */ -static inline void mas_push_node(struct ma_state *mas, struct maple_node *= used) -{ - struct maple_alloc *reuse =3D (struct maple_alloc *)used; - struct maple_alloc *head =3D mas->alloc; - unsigned long count; - unsigned int requested =3D mas_alloc_req(mas); =20 - count =3D mas_allocated(mas); - - reuse->request_count =3D 0; - reuse->node_count =3D 0; - if (count) { - if (head->node_count < MAPLE_ALLOC_SLOTS) { - head->slot[head->node_count++] =3D reuse; - head->total++; - goto done; - } - reuse->slot[0] =3D head; - reuse->node_count =3D 1; - } - - reuse->total =3D count + 1; - mas->alloc =3D reuse; -done: - if (requested > 1) - mas_set_alloc_req(mas, requested - 1); + return ret; } =20 /* @@ -1200,75 +1093,32 @@ static inline void mas_push_node(struct ma_state *m= as, struct maple_node *used) */ static inline void mas_alloc_nodes(struct ma_state *mas, gfp_t gfp) { - struct maple_alloc *node; - unsigned long allocated =3D mas_allocated(mas); - unsigned int requested =3D mas_alloc_req(mas); - unsigned int count; - void **slots =3D NULL; - unsigned int max_req =3D 0; - - if (!requested) - return; + if (unlikely(mas->sheaf)) { + unsigned long refill =3D mas->node_request; =20 - mas_set_alloc_req(mas, 0); - if (mas->mas_flags & MA_STATE_PREALLOC) { - if (allocated) + if(kmem_cache_sheaf_size(mas->sheaf) >=3D refill) { + mas->node_request =3D 0; return; - WARN_ON(!allocated); - } - - if (!allocated || mas->alloc->node_count =3D=3D MAPLE_ALLOC_SLOTS) { - node =3D (struct maple_alloc *)mt_alloc_one(gfp); - if (!node) - goto nomem_one; - - if (allocated) { - node->slot[0] =3D mas->alloc; - node->node_count =3D 1; - } else { - node->node_count =3D 0; } =20 - mas->alloc =3D node; - node->total =3D ++allocated; - node->request_count =3D 0; - requested--; - } + if (mt_refill_sheaf(gfp, &mas->sheaf, refill)) + goto error; =20 - node =3D mas->alloc; - while (requested) { - max_req =3D MAPLE_ALLOC_SLOTS - node->node_count; - slots =3D (void **)&node->slot[node->node_count]; - max_req =3D min(requested, max_req); - count =3D mt_alloc_bulk(gfp, max_req, slots); - if (!count) - goto nomem_bulk; - - if (node->node_count =3D=3D 0) { - node->slot[0]->node_count =3D 0; - node->slot[0]->request_count =3D 0; - } + mas->node_request =3D 0; + return; + } =20 - node->node_count +=3D count; - allocated +=3D count; - /* find a non-full node*/ - do { - node =3D node->slot[0]; - } while (unlikely(node->node_count =3D=3D MAPLE_ALLOC_SLOTS)); - requested -=3D count; + mas->sheaf =3D mt_get_sheaf(gfp, mas->node_request); + if (likely(mas->sheaf)) { + mas->node_request =3D 0; + return; } - mas->alloc->total =3D allocated; - return; =20 -nomem_bulk: - /* Clean up potential freed allocations on bulk failure */ - memset(slots, 0, max_req * sizeof(unsigned long)); - mas->alloc->total =3D allocated; -nomem_one: - mas_set_alloc_req(mas, requested); +error: mas_set_err(mas, -ENOMEM); } =20 + /* * mas_free() - Free an encoded maple node * @mas: The maple state @@ -1279,42 +1129,7 @@ static inline void mas_alloc_nodes(struct ma_state *= mas, gfp_t gfp) */ static inline void mas_free(struct ma_state *mas, struct maple_enode *used) { - struct maple_node *tmp =3D mte_to_node(used); - - if (mt_in_rcu(mas->tree)) - ma_free_rcu(tmp); - else - mas_push_node(mas, tmp); -} - -/* - * mas_node_count_gfp() - Check if enough nodes are allocated and request = more - * if there is not enough nodes. 
- * @mas: The maple state - * @count: The number of nodes needed - * @gfp: the gfp flags - */ -static void mas_node_count_gfp(struct ma_state *mas, int count, gfp_t gfp) -{ - unsigned long allocated =3D mas_allocated(mas); - - if (allocated < count) { - mas_set_alloc_req(mas, count - allocated); - mas_alloc_nodes(mas, gfp); - } -} - -/* - * mas_node_count() - Check if enough nodes are allocated and request more= if - * there is not enough nodes. - * @mas: The maple state - * @count: The number of nodes needed - * - * Note: Uses GFP_NOWAIT for gfp flags. - */ -static void mas_node_count(struct ma_state *mas, int count) -{ - return mas_node_count_gfp(mas, count, GFP_NOWAIT); + ma_free_rcu(mte_to_node(used)); } =20 /* @@ -2451,10 +2266,7 @@ static inline void mas_topiary_node(struct ma_state = *mas, enode =3D tmp_mas->node; tmp =3D mte_to_node(enode); mte_set_node_dead(enode); - if (in_rcu) - ma_free_rcu(tmp); - else - mas_push_node(mas, tmp); + ma_free_rcu(tmp); } =20 /* @@ -3980,7 +3792,7 @@ static inline void mas_wr_prealloc_setup(struct ma_wr= _state *wr_mas) * * Return: Number of nodes required for preallocation. */ -static inline int mas_prealloc_calc(struct ma_wr_state *wr_mas, void *entr= y) +static inline void mas_prealloc_calc(struct ma_wr_state *wr_mas, void *ent= ry) { struct ma_state *mas =3D wr_mas->mas; unsigned char height =3D mas_mt_height(mas); @@ -4026,7 +3838,7 @@ static inline int mas_prealloc_calc(struct ma_wr_stat= e *wr_mas, void *entry) WARN_ON_ONCE(1); } =20 - return ret; + mas->node_request =3D ret; } =20 /* @@ -4087,15 +3899,15 @@ static inline enum store_type mas_wr_store_type(str= uct ma_wr_state *wr_mas) */ static inline void mas_wr_preallocate(struct ma_wr_state *wr_mas, void *en= try) { - int request; + struct ma_state *mas =3D wr_mas->mas; =20 mas_wr_prealloc_setup(wr_mas); - wr_mas->mas->store_type =3D mas_wr_store_type(wr_mas); - request =3D mas_prealloc_calc(wr_mas, entry); - if (!request) + mas->store_type =3D mas_wr_store_type(wr_mas); + mas_prealloc_calc(wr_mas, entry); + if (!mas->node_request) return; =20 - mas_node_count(wr_mas->mas, request); + mas_alloc_nodes(mas, GFP_NOWAIT); } =20 /** @@ -5208,7 +5020,6 @@ static inline void mte_destroy_walk(struct maple_enod= e *enode, */ void *mas_store(struct ma_state *mas, void *entry) { - int request; MA_WR_STATE(wr_mas, mas, entry); =20 trace_ma_write(__func__, mas, 0, entry); @@ -5238,11 +5049,11 @@ void *mas_store(struct ma_state *mas, void *entry) return wr_mas.content; } =20 - request =3D mas_prealloc_calc(&wr_mas, entry); - if (!request) + mas_prealloc_calc(&wr_mas, entry); + if (!mas->node_request) goto store; =20 - mas_node_count(mas, request); + mas_alloc_nodes(mas, GFP_NOWAIT); if (mas_is_err(mas)) return NULL; =20 @@ -5330,20 +5141,19 @@ EXPORT_SYMBOL_GPL(mas_store_prealloc); int mas_preallocate(struct ma_state *mas, void *entry, gfp_t gfp) { MA_WR_STATE(wr_mas, mas, entry); - int ret =3D 0; - int request; =20 mas_wr_prealloc_setup(&wr_mas); mas->store_type =3D mas_wr_store_type(&wr_mas); - request =3D mas_prealloc_calc(&wr_mas, entry); - if (!request) + mas_prealloc_calc(&wr_mas, entry); + if (!mas->node_request) goto set_flag; =20 mas->mas_flags &=3D ~MA_STATE_PREALLOC; - mas_node_count_gfp(mas, request, gfp); + mas_alloc_nodes(mas, gfp); if (mas_is_err(mas)) { - mas_set_alloc_req(mas, 0); - ret =3D xa_err(mas->node); + int ret =3D xa_err(mas->node); + + mas->node_request =3D 0; mas_destroy(mas); mas_reset(mas); return ret; @@ -5351,7 +5161,7 @@ int mas_preallocate(struct ma_state *mas, void 
*entry= , gfp_t gfp) =20 set_flag: mas->mas_flags |=3D MA_STATE_PREALLOC; - return ret; + return 0; } EXPORT_SYMBOL_GPL(mas_preallocate); =20 @@ -5365,26 +5175,13 @@ EXPORT_SYMBOL_GPL(mas_preallocate); */ void mas_destroy(struct ma_state *mas) { - struct maple_alloc *node; - unsigned long total; - mas->mas_flags &=3D ~MA_STATE_PREALLOC; =20 - total =3D mas_allocated(mas); - while (total) { - node =3D mas->alloc; - mas->alloc =3D node->slot[0]; - if (node->node_count > 1) { - size_t count =3D node->node_count - 1; - - mt_free_bulk(count, (void __rcu **)&node->slot[1]); - total -=3D count; - } - kfree(ma_mnode_ptr(node)); - total--; - } + mas->node_request =3D 0; + if (mas->sheaf) + mt_return_sheaf(mas->sheaf); =20 - mas->alloc =3D NULL; + mas->sheaf =3D NULL; } EXPORT_SYMBOL_GPL(mas_destroy); =20 @@ -6019,7 +5816,7 @@ bool mas_nomem(struct ma_state *mas, gfp_t gfp) mas_alloc_nodes(mas, gfp); } =20 - if (!mas_allocated(mas)) + if (!mas->sheaf) return false; =20 mas->status =3D ma_start; @@ -7414,8 +7211,9 @@ void mas_dump(const struct ma_state *mas) =20 pr_err("[%u/%u] index=3D%lx last=3D%lx\n", mas->offset, mas->end, mas->index, mas->last); - pr_err(" min=3D%lx max=3D%lx alloc=3D" PTR_FMT ", depth=3D%u, flags= =3D%x\n", - mas->min, mas->max, mas->alloc, mas->depth, mas->mas_flags); + pr_err(" min=3D%lx max=3D%lx sheaf=3D" PTR_FMT ", request %lu depth= =3D%u, flags=3D%x\n", + mas->min, mas->max, mas->sheaf, mas->node_request, mas->depth, + mas->mas_flags); if (mas->index > mas->last) pr_err("Check index & last\n"); } diff --git a/tools/testing/radix-tree/maple.c b/tools/testing/radix-tree/ma= ple.c index 4a35e1e7c64b7ce347cbd1693beeaacb0c4c330e..72a8fe8e832a4150c6567b71176= 8eba6a3fa6768 100644 --- a/tools/testing/radix-tree/maple.c +++ b/tools/testing/radix-tree/maple.c @@ -57,430 +57,6 @@ struct rcu_reader_struct { struct rcu_test_struct2 *test; }; =20 -static int get_alloc_node_count(struct ma_state *mas) -{ - int count =3D 1; - struct maple_alloc *node =3D mas->alloc; - - if (!node || ((unsigned long)node & 0x1)) - return 0; - while (node->node_count) { - count +=3D node->node_count; - node =3D node->slot[0]; - } - return count; -} - -static void check_mas_alloc_node_count(struct ma_state *mas) -{ - mas_node_count_gfp(mas, MAPLE_ALLOC_SLOTS + 1, GFP_KERNEL); - mas_node_count_gfp(mas, MAPLE_ALLOC_SLOTS + 3, GFP_KERNEL); - MT_BUG_ON(mas->tree, get_alloc_node_count(mas) !=3D mas->alloc->total); - mas_destroy(mas); -} - -/* - * check_new_node() - Check the creation of new nodes and error path - * verification. - */ -static noinline void __init check_new_node(struct maple_tree *mt) -{ - - struct maple_node *mn, *mn2, *mn3; - struct maple_alloc *smn; - struct maple_node *nodes[100]; - int i, j, total; - - MA_STATE(mas, mt, 0, 0); - - check_mas_alloc_node_count(&mas); - - /* Try allocating 3 nodes */ - mtree_lock(mt); - mt_set_non_kernel(0); - /* request 3 nodes to be allocated. */ - mas_node_count(&mas, 3); - /* Allocation request of 3. */ - MT_BUG_ON(mt, mas_alloc_req(&mas) !=3D 3); - /* Allocate failed. 
*/ - MT_BUG_ON(mt, mas.node !=3D MA_ERROR(-ENOMEM)); - MT_BUG_ON(mt, !mas_nomem(&mas, GFP_KERNEL)); - - MT_BUG_ON(mt, mas_allocated(&mas) !=3D 3); - mn =3D mas_pop_node(&mas); - MT_BUG_ON(mt, not_empty(mn)); - MT_BUG_ON(mt, mn =3D=3D NULL); - MT_BUG_ON(mt, mas.alloc =3D=3D NULL); - MT_BUG_ON(mt, mas.alloc->slot[0] =3D=3D NULL); - mas_push_node(&mas, mn); - mas_reset(&mas); - mas_destroy(&mas); - mtree_unlock(mt); - - - /* Try allocating 1 node, then 2 more */ - mtree_lock(mt); - /* Set allocation request to 1. */ - mas_set_alloc_req(&mas, 1); - /* Check Allocation request of 1. */ - MT_BUG_ON(mt, mas_alloc_req(&mas) !=3D 1); - mas_set_err(&mas, -ENOMEM); - /* Validate allocation request. */ - MT_BUG_ON(mt, !mas_nomem(&mas, GFP_KERNEL)); - /* Eat the requested node. */ - mn =3D mas_pop_node(&mas); - MT_BUG_ON(mt, not_empty(mn)); - MT_BUG_ON(mt, mn =3D=3D NULL); - MT_BUG_ON(mt, mn->slot[0] !=3D NULL); - MT_BUG_ON(mt, mn->slot[1] !=3D NULL); - MT_BUG_ON(mt, mas_allocated(&mas) !=3D 0); - - mn->parent =3D ma_parent_ptr(mn); - ma_free_rcu(mn); - mas.status =3D ma_start; - mas_destroy(&mas); - /* Allocate 3 nodes, will fail. */ - mas_node_count(&mas, 3); - /* Drop the lock and allocate 3 nodes. */ - mas_nomem(&mas, GFP_KERNEL); - /* Ensure 3 are allocated. */ - MT_BUG_ON(mt, mas_allocated(&mas) !=3D 3); - /* Allocation request of 0. */ - MT_BUG_ON(mt, mas_alloc_req(&mas) !=3D 0); - - MT_BUG_ON(mt, mas.alloc =3D=3D NULL); - MT_BUG_ON(mt, mas.alloc->slot[0] =3D=3D NULL); - MT_BUG_ON(mt, mas.alloc->slot[1] =3D=3D NULL); - /* Ensure we counted 3. */ - MT_BUG_ON(mt, mas_allocated(&mas) !=3D 3); - /* Free. */ - mas_reset(&mas); - mas_destroy(&mas); - - /* Set allocation request to 1. */ - mas_set_alloc_req(&mas, 1); - MT_BUG_ON(mt, mas_alloc_req(&mas) !=3D 1); - mas_set_err(&mas, -ENOMEM); - /* Validate allocation request. */ - MT_BUG_ON(mt, !mas_nomem(&mas, GFP_KERNEL)); - MT_BUG_ON(mt, mas_allocated(&mas) !=3D 1); - /* Check the node is only one node. */ - mn =3D mas_pop_node(&mas); - MT_BUG_ON(mt, not_empty(mn)); - MT_BUG_ON(mt, mas_allocated(&mas) !=3D 0); - MT_BUG_ON(mt, mn =3D=3D NULL); - MT_BUG_ON(mt, mn->slot[0] !=3D NULL); - MT_BUG_ON(mt, mn->slot[1] !=3D NULL); - MT_BUG_ON(mt, mas_allocated(&mas) !=3D 0); - mas_push_node(&mas, mn); - MT_BUG_ON(mt, mas_allocated(&mas) !=3D 1); - MT_BUG_ON(mt, mas.alloc->node_count); - - mas_set_alloc_req(&mas, 2); /* request 2 more. */ - MT_BUG_ON(mt, mas_alloc_req(&mas) !=3D 2); - mas_set_err(&mas, -ENOMEM); - MT_BUG_ON(mt, !mas_nomem(&mas, GFP_KERNEL)); - MT_BUG_ON(mt, mas_allocated(&mas) !=3D 3); - MT_BUG_ON(mt, mas.alloc =3D=3D NULL); - MT_BUG_ON(mt, mas.alloc->slot[0] =3D=3D NULL); - MT_BUG_ON(mt, mas.alloc->slot[1] =3D=3D NULL); - for (i =3D 2; i >=3D 0; i--) { - mn =3D mas_pop_node(&mas); - MT_BUG_ON(mt, mas_allocated(&mas) !=3D i); - MT_BUG_ON(mt, !mn); - MT_BUG_ON(mt, not_empty(mn)); - mn->parent =3D ma_parent_ptr(mn); - ma_free_rcu(mn); - } - - total =3D 64; - mas_set_alloc_req(&mas, total); /* request 2 more. 
*/ - MT_BUG_ON(mt, mas_alloc_req(&mas) !=3D total); - mas_set_err(&mas, -ENOMEM); - MT_BUG_ON(mt, !mas_nomem(&mas, GFP_KERNEL)); - for (i =3D total; i > 0; i--) { - unsigned int e =3D 0; /* expected node_count */ - - if (!MAPLE_32BIT) { - if (i >=3D 35) - e =3D i - 34; - else if (i >=3D 5) - e =3D i - 4; - else if (i >=3D 2) - e =3D i - 1; - } else { - if (i >=3D 4) - e =3D i - 3; - else if (i >=3D 1) - e =3D i - 1; - else - e =3D 0; - } - - MT_BUG_ON(mt, mas.alloc->node_count !=3D e); - mn =3D mas_pop_node(&mas); - MT_BUG_ON(mt, not_empty(mn)); - MT_BUG_ON(mt, mas_allocated(&mas) !=3D i - 1); - MT_BUG_ON(mt, !mn); - mn->parent =3D ma_parent_ptr(mn); - ma_free_rcu(mn); - } - - total =3D 100; - for (i =3D 1; i < total; i++) { - mas_set_alloc_req(&mas, i); - mas_set_err(&mas, -ENOMEM); - MT_BUG_ON(mt, !mas_nomem(&mas, GFP_KERNEL)); - for (j =3D i; j > 0; j--) { - mn =3D mas_pop_node(&mas); - MT_BUG_ON(mt, mas_allocated(&mas) !=3D j - 1); - MT_BUG_ON(mt, !mn); - MT_BUG_ON(mt, not_empty(mn)); - mas_push_node(&mas, mn); - MT_BUG_ON(mt, mas_allocated(&mas) !=3D j); - mn =3D mas_pop_node(&mas); - MT_BUG_ON(mt, not_empty(mn)); - MT_BUG_ON(mt, mas_allocated(&mas) !=3D j - 1); - mn->parent =3D ma_parent_ptr(mn); - ma_free_rcu(mn); - } - MT_BUG_ON(mt, mas_allocated(&mas) !=3D 0); - - mas_set_alloc_req(&mas, i); - mas_set_err(&mas, -ENOMEM); - MT_BUG_ON(mt, !mas_nomem(&mas, GFP_KERNEL)); - for (j =3D 0; j <=3D i/2; j++) { - MT_BUG_ON(mt, mas_allocated(&mas) !=3D i - j); - nodes[j] =3D mas_pop_node(&mas); - MT_BUG_ON(mt, mas_allocated(&mas) !=3D i - j - 1); - } - - while (j) { - j--; - mas_push_node(&mas, nodes[j]); - MT_BUG_ON(mt, mas_allocated(&mas) !=3D i - j); - } - MT_BUG_ON(mt, mas_allocated(&mas) !=3D i); - for (j =3D 0; j <=3D i/2; j++) { - MT_BUG_ON(mt, mas_allocated(&mas) !=3D i - j); - mn =3D mas_pop_node(&mas); - MT_BUG_ON(mt, not_empty(mn)); - mn->parent =3D ma_parent_ptr(mn); - ma_free_rcu(mn); - MT_BUG_ON(mt, mas_allocated(&mas) !=3D i - j - 1); - } - mas_reset(&mas); - MT_BUG_ON(mt, mas_nomem(&mas, GFP_KERNEL)); - mas_destroy(&mas); - - } - - /* Set allocation request. */ - total =3D 500; - mas_node_count(&mas, total); - /* Drop the lock and allocate the nodes. */ - mas_nomem(&mas, GFP_KERNEL); - MT_BUG_ON(mt, !mas.alloc); - i =3D 1; - smn =3D mas.alloc; - while (i < total) { - for (j =3D 0; j < MAPLE_ALLOC_SLOTS; j++) { - i++; - MT_BUG_ON(mt, !smn->slot[j]); - if (i =3D=3D total) - break; - } - smn =3D smn->slot[0]; /* next. */ - } - MT_BUG_ON(mt, mas_allocated(&mas) !=3D total); - mas_reset(&mas); - mas_destroy(&mas); /* Free. */ - - MT_BUG_ON(mt, mas_allocated(&mas) !=3D 0); - for (i =3D 1; i < 128; i++) { - mas_node_count(&mas, i); /* Request */ - mas_nomem(&mas, GFP_KERNEL); /* Fill request */ - MT_BUG_ON(mt, mas_allocated(&mas) !=3D i); /* check request filled */ - for (j =3D i; j > 0; j--) { /*Free the requests */ - mn =3D mas_pop_node(&mas); /* get the next node. */ - MT_BUG_ON(mt, mn =3D=3D NULL); - MT_BUG_ON(mt, not_empty(mn)); - mn->parent =3D ma_parent_ptr(mn); - ma_free_rcu(mn); - } - MT_BUG_ON(mt, mas_allocated(&mas) !=3D 0); - } - - for (i =3D 1; i < MAPLE_NODE_MASK + 1; i++) { - MA_STATE(mas2, mt, 0, 0); - mas_node_count(&mas, i); /* Request */ - mas_nomem(&mas, GFP_KERNEL); /* Fill request */ - MT_BUG_ON(mt, mas_allocated(&mas) !=3D i); /* check request filled */ - for (j =3D 1; j <=3D i; j++) { /* Move the allocations to mas2 */ - mn =3D mas_pop_node(&mas); /* get the next node. 
*/ - MT_BUG_ON(mt, mn =3D=3D NULL); - MT_BUG_ON(mt, not_empty(mn)); - mas_push_node(&mas2, mn); - MT_BUG_ON(mt, mas_allocated(&mas2) !=3D j); - } - MT_BUG_ON(mt, mas_allocated(&mas) !=3D 0); - MT_BUG_ON(mt, mas_allocated(&mas2) !=3D i); - - for (j =3D i; j > 0; j--) { /*Free the requests */ - MT_BUG_ON(mt, mas_allocated(&mas2) !=3D j); - mn =3D mas_pop_node(&mas2); /* get the next node. */ - MT_BUG_ON(mt, mn =3D=3D NULL); - MT_BUG_ON(mt, not_empty(mn)); - mn->parent =3D ma_parent_ptr(mn); - ma_free_rcu(mn); - } - MT_BUG_ON(mt, mas_allocated(&mas2) !=3D 0); - } - - - MT_BUG_ON(mt, mas_allocated(&mas) !=3D 0); - mas_node_count(&mas, MAPLE_ALLOC_SLOTS + 1); /* Request */ - MT_BUG_ON(mt, mas.node !=3D MA_ERROR(-ENOMEM)); - MT_BUG_ON(mt, !mas_nomem(&mas, GFP_KERNEL)); - MT_BUG_ON(mt, mas_allocated(&mas) !=3D MAPLE_ALLOC_SLOTS + 1); - MT_BUG_ON(mt, mas.alloc->node_count !=3D MAPLE_ALLOC_SLOTS); - - mn =3D mas_pop_node(&mas); /* get the next node. */ - MT_BUG_ON(mt, mn =3D=3D NULL); - MT_BUG_ON(mt, not_empty(mn)); - MT_BUG_ON(mt, mas_allocated(&mas) !=3D MAPLE_ALLOC_SLOTS); - MT_BUG_ON(mt, mas.alloc->node_count !=3D MAPLE_ALLOC_SLOTS - 1); - - mas_push_node(&mas, mn); - MT_BUG_ON(mt, mas_allocated(&mas) !=3D MAPLE_ALLOC_SLOTS + 1); - MT_BUG_ON(mt, mas.alloc->node_count !=3D MAPLE_ALLOC_SLOTS); - - /* Check the limit of pop/push/pop */ - mas_node_count(&mas, MAPLE_ALLOC_SLOTS + 2); /* Request */ - MT_BUG_ON(mt, mas_alloc_req(&mas) !=3D 1); - MT_BUG_ON(mt, mas.node !=3D MA_ERROR(-ENOMEM)); - MT_BUG_ON(mt, !mas_nomem(&mas, GFP_KERNEL)); - MT_BUG_ON(mt, mas_alloc_req(&mas)); - MT_BUG_ON(mt, mas.alloc->node_count !=3D 1); - MT_BUG_ON(mt, mas_allocated(&mas) !=3D MAPLE_ALLOC_SLOTS + 2); - mn =3D mas_pop_node(&mas); - MT_BUG_ON(mt, not_empty(mn)); - MT_BUG_ON(mt, mas_allocated(&mas) !=3D MAPLE_ALLOC_SLOTS + 1); - MT_BUG_ON(mt, mas.alloc->node_count !=3D MAPLE_ALLOC_SLOTS); - mas_push_node(&mas, mn); - MT_BUG_ON(mt, mas.alloc->node_count !=3D 1); - MT_BUG_ON(mt, mas_allocated(&mas) !=3D MAPLE_ALLOC_SLOTS + 2); - mn =3D mas_pop_node(&mas); - MT_BUG_ON(mt, not_empty(mn)); - mn->parent =3D ma_parent_ptr(mn); - ma_free_rcu(mn); - for (i =3D 1; i <=3D MAPLE_ALLOC_SLOTS + 1; i++) { - mn =3D mas_pop_node(&mas); - MT_BUG_ON(mt, not_empty(mn)); - mn->parent =3D ma_parent_ptr(mn); - ma_free_rcu(mn); - } - MT_BUG_ON(mt, mas_allocated(&mas) !=3D 0); - - - for (i =3D 3; i < MAPLE_NODE_MASK * 3; i++) { - mas.node =3D MA_ERROR(-ENOMEM); - mas_node_count(&mas, i); /* Request */ - mas_nomem(&mas, GFP_KERNEL); /* Fill request */ - mn =3D mas_pop_node(&mas); /* get the next node. */ - mas_push_node(&mas, mn); /* put it back */ - mas_destroy(&mas); - - mas.node =3D MA_ERROR(-ENOMEM); - mas_node_count(&mas, i); /* Request */ - mas_nomem(&mas, GFP_KERNEL); /* Fill request */ - mn =3D mas_pop_node(&mas); /* get the next node. */ - mn2 =3D mas_pop_node(&mas); /* get the next node. */ - mas_push_node(&mas, mn); /* put them back */ - mas_push_node(&mas, mn2); - mas_destroy(&mas); - - mas.node =3D MA_ERROR(-ENOMEM); - mas_node_count(&mas, i); /* Request */ - mas_nomem(&mas, GFP_KERNEL); /* Fill request */ - mn =3D mas_pop_node(&mas); /* get the next node. */ - mn2 =3D mas_pop_node(&mas); /* get the next node. */ - mn3 =3D mas_pop_node(&mas); /* get the next node. 
*/ - mas_push_node(&mas, mn); /* put them back */ - mas_push_node(&mas, mn2); - mas_push_node(&mas, mn3); - mas_destroy(&mas); - - mas.node =3D MA_ERROR(-ENOMEM); - mas_node_count(&mas, i); /* Request */ - mas_nomem(&mas, GFP_KERNEL); /* Fill request */ - mn =3D mas_pop_node(&mas); /* get the next node. */ - mn->parent =3D ma_parent_ptr(mn); - ma_free_rcu(mn); - mas_destroy(&mas); - - mas.node =3D MA_ERROR(-ENOMEM); - mas_node_count(&mas, i); /* Request */ - mas_nomem(&mas, GFP_KERNEL); /* Fill request */ - mn =3D mas_pop_node(&mas); /* get the next node. */ - mn->parent =3D ma_parent_ptr(mn); - ma_free_rcu(mn); - mn =3D mas_pop_node(&mas); /* get the next node. */ - mn->parent =3D ma_parent_ptr(mn); - ma_free_rcu(mn); - mn =3D mas_pop_node(&mas); /* get the next node. */ - mn->parent =3D ma_parent_ptr(mn); - ma_free_rcu(mn); - mas_destroy(&mas); - } - - mas.node =3D MA_ERROR(-ENOMEM); - mas_node_count(&mas, 5); /* Request */ - mas_nomem(&mas, GFP_KERNEL); /* Fill request */ - MT_BUG_ON(mt, mas_allocated(&mas) !=3D 5); - mas.node =3D MA_ERROR(-ENOMEM); - mas_node_count(&mas, 10); /* Request */ - mas_nomem(&mas, GFP_KERNEL); /* Fill request */ - mas.status =3D ma_start; - MT_BUG_ON(mt, mas_allocated(&mas) !=3D 10); - mas_destroy(&mas); - - mas.node =3D MA_ERROR(-ENOMEM); - mas_node_count(&mas, MAPLE_ALLOC_SLOTS - 1); /* Request */ - mas_nomem(&mas, GFP_KERNEL); /* Fill request */ - MT_BUG_ON(mt, mas_allocated(&mas) !=3D MAPLE_ALLOC_SLOTS - 1); - mas.node =3D MA_ERROR(-ENOMEM); - mas_node_count(&mas, 10 + MAPLE_ALLOC_SLOTS - 1); /* Request */ - mas_nomem(&mas, GFP_KERNEL); /* Fill request */ - mas.status =3D ma_start; - MT_BUG_ON(mt, mas_allocated(&mas) !=3D 10 + MAPLE_ALLOC_SLOTS - 1); - mas_destroy(&mas); - - mas.node =3D MA_ERROR(-ENOMEM); - mas_node_count(&mas, MAPLE_ALLOC_SLOTS + 1); /* Request */ - mas_nomem(&mas, GFP_KERNEL); /* Fill request */ - MT_BUG_ON(mt, mas_allocated(&mas) !=3D MAPLE_ALLOC_SLOTS + 1); - mas.node =3D MA_ERROR(-ENOMEM); - mas_node_count(&mas, MAPLE_ALLOC_SLOTS * 2 + 2); /* Request */ - mas_nomem(&mas, GFP_KERNEL); /* Fill request */ - mas.status =3D ma_start; - MT_BUG_ON(mt, mas_allocated(&mas) !=3D MAPLE_ALLOC_SLOTS * 2 + 2); - mas_destroy(&mas); - - mas.node =3D MA_ERROR(-ENOMEM); - mas_node_count(&mas, MAPLE_ALLOC_SLOTS * 2 + 1); /* Request */ - mas_nomem(&mas, GFP_KERNEL); /* Fill request */ - MT_BUG_ON(mt, mas_allocated(&mas) !=3D MAPLE_ALLOC_SLOTS * 2 + 1); - mas.node =3D MA_ERROR(-ENOMEM); - mas_node_count(&mas, MAPLE_ALLOC_SLOTS * 3 + 2); /* Request */ - mas_nomem(&mas, GFP_KERNEL); /* Fill request */ - mas.status =3D ma_start; - MT_BUG_ON(mt, mas_allocated(&mas) !=3D MAPLE_ALLOC_SLOTS * 3 + 2); - mas_destroy(&mas); - - mtree_unlock(mt); -} - /* * Check erasing including RCU. 
*/ @@ -35507,6 +35083,13 @@ static unsigned char get_vacant_height(struct ma_w= r_state *wr_mas, void *entry) return vacant_height; } =20 +static int mas_allocated(struct ma_state *mas) +{ + if (mas->sheaf) + return kmem_cache_sheaf_size(mas->sheaf); + + return 0; +} /* Preallocation testing */ static noinline void __init check_prealloc(struct maple_tree *mt) { @@ -35525,7 +35108,10 @@ static noinline void __init check_prealloc(struct = maple_tree *mt) =20 /* Spanning store */ mas_set_range(&mas, 470, 500); - MT_BUG_ON(mt, mas_preallocate(&mas, ptr, GFP_KERNEL) !=3D 0); + + mas_wr_preallocate(&wr_mas, ptr); + MT_BUG_ON(mt, mas.store_type !=3D wr_spanning_store); + MT_BUG_ON(mt, mas_is_err(&mas)); allocated =3D mas_allocated(&mas); height =3D mas_mt_height(&mas); vacant_height =3D get_vacant_height(&wr_mas, ptr); @@ -35535,6 +35121,7 @@ static noinline void __init check_prealloc(struct m= aple_tree *mt) allocated =3D mas_allocated(&mas); MT_BUG_ON(mt, allocated !=3D 0); =20 + mas_wr_preallocate(&wr_mas, ptr); MT_BUG_ON(mt, mas_preallocate(&mas, ptr, GFP_KERNEL) !=3D 0); allocated =3D mas_allocated(&mas); height =3D mas_mt_height(&mas); @@ -35575,20 +35162,6 @@ static noinline void __init check_prealloc(struct = maple_tree *mt) mn->parent =3D ma_parent_ptr(mn); ma_free_rcu(mn); =20 - MT_BUG_ON(mt, mas_preallocate(&mas, ptr, GFP_KERNEL) !=3D 0); - allocated =3D mas_allocated(&mas); - height =3D mas_mt_height(&mas); - vacant_height =3D get_vacant_height(&wr_mas, ptr); - MT_BUG_ON(mt, allocated !=3D 1 + (height - vacant_height) * 3); - mn =3D mas_pop_node(&mas); - MT_BUG_ON(mt, mas_allocated(&mas) !=3D allocated - 1); - mas_push_node(&mas, mn); - MT_BUG_ON(mt, mas_allocated(&mas) !=3D allocated); - MT_BUG_ON(mt, mas_preallocate(&mas, ptr, GFP_KERNEL) !=3D 0); - mas_destroy(&mas); - allocated =3D mas_allocated(&mas); - MT_BUG_ON(mt, allocated !=3D 0); - MT_BUG_ON(mt, mas_preallocate(&mas, ptr, GFP_KERNEL) !=3D 0); allocated =3D mas_allocated(&mas); height =3D mas_mt_height(&mas); @@ -36389,11 +35962,17 @@ static void check_nomem_writer_race(struct maple_= tree *mt) check_load(mt, 6, xa_mk_value(0xC)); mtree_unlock(mt); =20 + mt_set_non_kernel(0); /* test for the same race but with mas_store_gfp() */ mtree_store_range(mt, 0, 5, xa_mk_value(0xA), GFP_KERNEL); mtree_store_range(mt, 6, 10, NULL, GFP_KERNEL); =20 mas_set_range(&mas, 0, 5); + + /* setup writer 2 that will trigger the race condition */ + mt_set_private(mt); + mt_set_callback(writer2); + mtree_lock(mt); mas_store_gfp(&mas, NULL, GFP_KERNEL); =20 @@ -36508,10 +36087,6 @@ void farmer_tests(void) check_erase_testset(&tree); mtree_destroy(&tree); =20 - mt_init_flags(&tree, 0); - check_new_node(&tree); - mtree_destroy(&tree); - if (!MAPLE_32BIT) { mt_init_flags(&tree, MT_FLAGS_ALLOC_RANGE); check_rcu_simulated(&tree); diff --git a/tools/testing/shared/linux.c b/tools/testing/shared/linux.c index 4ceff7969b78cf8e33cd1e021c68bc9f8a02a7a1..8c72571559583759456c2b469a2= abc2611117c13 100644 --- a/tools/testing/shared/linux.c +++ b/tools/testing/shared/linux.c @@ -64,7 +64,8 @@ void *kmem_cache_alloc_lru(struct kmem_cache *cachep, str= uct list_lru *lru, =20 if (!(gfp & __GFP_DIRECT_RECLAIM)) { if (!cachep->non_kernel) { - cachep->exec_callback =3D true; + if (cachep->callback) + cachep->exec_callback =3D true; return NULL; } =20 @@ -210,6 +211,8 @@ int kmem_cache_alloc_bulk(struct kmem_cache *cachep, gf= p_t gfp, size_t size, for (i =3D 0; i < size; i++) __kmem_cache_free_locked(cachep, p[i]); pthread_mutex_unlock(&cachep->lock); + if 
(cachep->callback)
+		cachep->exec_callback = true;
 	return 0;
 }
 
-- 
2.51.0

From nobody Thu Oct 2 22:43:17 2025
From: Vlastimil Babka
Date: Wed, 10 Sep 2025 10:01:24 +0200
Subject: [PATCH v8 22/23] maple_tree: Add single node allocation support to maple state
Message-Id: <20250910-slub-percpu-caches-v8-22-ca3099d8352c@suse.cz>
References: <20250910-slub-percpu-caches-v8-0-ca3099d8352c@suse.cz>
In-Reply-To: <20250910-slub-percpu-caches-v8-0-ca3099d8352c@suse.cz>
To: Suren Baghdasaryan, "Liam R. Howlett", Christoph Lameter, David Rientjes
Cc: Roman Gushchin, Harry Yoo, Uladzislau Rezki, Sidhartha Kumar, linux-mm@kvack.org, linux-kernel@vger.kernel.org, rcu@vger.kernel.org, maple-tree@lists.infradead.org, vbabka@suse.cz, "Liam R. Howlett"

From: "Liam R. Howlett"

The fast path through a write will require replacing a single node in the tree.
Using a sheaf (32 nodes) is too heavy for the fast path, so special case the node store operation by just allocating one node in the maple state. Signed-off-by: Liam R. Howlett Signed-off-by: Vlastimil Babka --- include/linux/maple_tree.h | 4 +++- lib/maple_tree.c | 47 +++++++++++++++++++++++++++++++++++-= ---- tools/testing/radix-tree/maple.c | 9 ++++++-- 3 files changed, 51 insertions(+), 9 deletions(-) diff --git a/include/linux/maple_tree.h b/include/linux/maple_tree.h index 166fd67e00d882b1e6de1f80c1b590bba7497cd3..562a1e9e5132b5b1fa8f8402a7c= add8abb65e323 100644 --- a/include/linux/maple_tree.h +++ b/include/linux/maple_tree.h @@ -443,6 +443,7 @@ struct ma_state { unsigned long min; /* The minimum index of this node - implied pivot min= */ unsigned long max; /* The maximum index of this node - implied pivot max= */ struct slab_sheaf *sheaf; /* Allocated nodes for this operation */ + struct maple_node *alloc; /* allocated nodes */ unsigned long node_request; enum maple_status status; /* The status of the state (active, start, none= , etc) */ unsigned char depth; /* depth of tree descent during write */ @@ -491,8 +492,9 @@ struct ma_wr_state { .status =3D ma_start, \ .min =3D 0, \ .max =3D ULONG_MAX, \ - .node_request=3D 0, \ .sheaf =3D NULL, \ + .alloc =3D NULL, \ + .node_request=3D 0, \ .mas_flags =3D 0, \ .store_type =3D wr_invalid, \ } diff --git a/lib/maple_tree.c b/lib/maple_tree.c index a3fcb20227e506ed209554cc8c041a53f7ef4903..a912e6a1d4378e72b967027b60f= 8f564476ad14e 100644 --- a/lib/maple_tree.c +++ b/lib/maple_tree.c @@ -1073,16 +1073,23 @@ static int mas_ascend(struct ma_state *mas) * * Return: A pointer to a maple node. */ -static inline struct maple_node *mas_pop_node(struct ma_state *mas) +static __always_inline struct maple_node *mas_pop_node(struct ma_state *ma= s) { struct maple_node *ret; =20 + if (mas->alloc) { + ret =3D mas->alloc; + mas->alloc =3D NULL; + goto out; + } + if (WARN_ON_ONCE(!mas->sheaf)) return NULL; =20 ret =3D kmem_cache_alloc_from_sheaf(maple_node_cache, GFP_NOWAIT, mas->sh= eaf); - memset(ret, 0, sizeof(*ret)); =20 +out: + memset(ret, 0, sizeof(*ret)); return ret; } =20 @@ -1093,9 +1100,34 @@ static inline struct maple_node *mas_pop_node(struct= ma_state *mas) */ static inline void mas_alloc_nodes(struct ma_state *mas, gfp_t gfp) { - if (unlikely(mas->sheaf)) { - unsigned long refill =3D mas->node_request; + if (!mas->node_request) + return; + + if (mas->node_request =3D=3D 1) { + if (mas->sheaf) + goto use_sheaf; + + if (mas->alloc) + return; =20 + mas->alloc =3D mt_alloc_one(gfp); + if (!mas->alloc) + goto error; + + mas->node_request =3D 0; + return; + } + +use_sheaf: + if (unlikely(mas->alloc)) { + kfree(mas->alloc); + mas->alloc =3D NULL; + } + + if (mas->sheaf) { + unsigned long refill; + + refill =3D mas->node_request; if(kmem_cache_sheaf_size(mas->sheaf) >=3D refill) { mas->node_request =3D 0; return; @@ -5180,8 +5212,11 @@ void mas_destroy(struct ma_state *mas) mas->node_request =3D 0; if (mas->sheaf) mt_return_sheaf(mas->sheaf); - mas->sheaf =3D NULL; + + if (mas->alloc) + kfree(mas->alloc); + mas->alloc =3D NULL; } EXPORT_SYMBOL_GPL(mas_destroy); =20 @@ -5816,7 +5851,7 @@ bool mas_nomem(struct ma_state *mas, gfp_t gfp) mas_alloc_nodes(mas, gfp); } =20 - if (!mas->sheaf) + if (!mas->sheaf && !mas->alloc) return false; =20 mas->status =3D ma_start; diff --git a/tools/testing/radix-tree/maple.c b/tools/testing/radix-tree/ma= ple.c index 72a8fe8e832a4150c6567b711768eba6a3fa6768..83260f2efb1990b71093e456950= 069c24d75560e 100644 --- 
a/tools/testing/radix-tree/maple.c
+++ b/tools/testing/radix-tree/maple.c
@@ -35085,10 +35085,15 @@ static unsigned char get_vacant_height(struct ma_wr_state *wr_mas, void *entry)
 
 static int mas_allocated(struct ma_state *mas)
 {
+	int total = 0;
+
+	if (mas->alloc)
+		total++;
+
 	if (mas->sheaf)
-		return kmem_cache_sheaf_size(mas->sheaf);
+		total += kmem_cache_sheaf_size(mas->sheaf);
 
-	return 0;
+	return total;
 }
 /* Preallocation testing */
 static noinline void __init check_prealloc(struct maple_tree *mt)
-- 
2.51.0

From nobody Thu Oct 2 22:43:17 2025
From: Vlastimil Babka
Date: Wed, 10 Sep 2025 10:01:25 +0200
Subject: [PATCH v8 23/23] maple_tree: Convert forking to use the sheaf interface
Message-Id: <20250910-slub-percpu-caches-v8-23-ca3099d8352c@suse.cz>
References: <20250910-slub-percpu-caches-v8-0-ca3099d8352c@suse.cz>
In-Reply-To: <20250910-slub-percpu-caches-v8-0-ca3099d8352c@suse.cz>
To: Suren Baghdasaryan, "Liam R. Howlett", Christoph Lameter, David Rientjes
Cc: Roman Gushchin, Harry Yoo, Uladzislau Rezki, Sidhartha Kumar, linux-mm@kvack.org, linux-kernel@vger.kernel.org, rcu@vger.kernel.org, maple-tree@lists.infradead.org, vbabka@suse.cz, "Liam R. Howlett"

From: "Liam R. Howlett"

Use the generic sheaf interface, which should result in fewer bulk allocations during forking.

Part of this is to abstract the freeing of the sheaf or maple state allocations into its own function, so that mas_destroy() and the tree duplication code can use the same functionality to return any unused resources.

[andriy.shevchenko@linux.intel.com: remove unused mt_alloc_bulk()]

Signed-off-by: Liam R. Howlett
Reviewed-by: Suren Baghdasaryan
Signed-off-by: Vlastimil Babka
---
 lib/maple_tree.c | 47 +++++++++++++++++++++++------------------------
 1 file changed, 23 insertions(+), 24 deletions(-)

diff --git a/lib/maple_tree.c b/lib/maple_tree.c
index a912e6a1d4378e72b967027b60f8f564476ad14e..bb51424053a5c4ceece7604877dfa3cd3780944a 100644
--- a/lib/maple_tree.c
+++ b/lib/maple_tree.c
@@ -172,11 +172,6 @@ static inline struct maple_node *mt_alloc_one(gfp_t gfp)
 	return kmem_cache_alloc(maple_node_cache, gfp);
 }
 
-static inline int mt_alloc_bulk(gfp_t gfp, size_t size, void **nodes)
-{
-	return kmem_cache_alloc_bulk(maple_node_cache, gfp, size, nodes);
-}
-
 static inline void mt_free_bulk(size_t size, void __rcu **nodes)
 {
 	kmem_cache_free_bulk(maple_node_cache, size, (void **)nodes);
@@ -1150,6 +1145,19 @@ static inline void mas_alloc_nodes(struct ma_state *mas, gfp_t gfp)
 	mas_set_err(mas, -ENOMEM);
 }
 
+static inline void mas_empty_nodes(struct ma_state *mas)
+{
+	mas->node_request = 0;
+	if (mas->sheaf) {
+		mt_return_sheaf(mas->sheaf);
+		mas->sheaf = NULL;
+	}
+
+	if (mas->alloc) {
+		kfree(mas->alloc);
+		mas->alloc = NULL;
+	}
+}
 
 /*
  * mas_free() - Free an encoded maple node
@@ -5208,15 +5216,7 @@ EXPORT_SYMBOL_GPL(mas_preallocate);
 void mas_destroy(struct ma_state *mas)
 {
 	mas->mas_flags &= ~MA_STATE_PREALLOC;
-
-	mas->node_request = 0;
-	if (mas->sheaf)
-		mt_return_sheaf(mas->sheaf);
-	mas->sheaf = NULL;
-
-	if (mas->alloc)
-		kfree(mas->alloc);
-	mas->alloc = NULL;
+	mas_empty_nodes(mas);
 }
 EXPORT_SYMBOL_GPL(mas_destroy);
 
@@ -6241,7 +6241,7 @@ static inline void mas_dup_alloc(struct ma_state *mas, struct ma_state *new_mas,
 	struct maple_node *node = mte_to_node(mas->node);
 	struct maple_node *new_node = mte_to_node(new_mas->node);
 	enum maple_type type;
-	unsigned char request, count, i;
+	unsigned char count, i;
 	void __rcu **slots;
 	void __rcu **new_slots;
 	unsigned long val;
@@ -6249,20 +6249,17 @@ static inline void mas_dup_alloc(struct ma_state *mas, struct ma_state *new_mas,
 	/* Allocate memory for child nodes. */
 	type = mte_node_type(mas->node);
 	new_slots = ma_slots(new_node, type);
-	request = mas_data_end(mas) + 1;
-	count = mt_alloc_bulk(gfp, request, (void **)new_slots);
-	if (unlikely(count < request)) {
-		memset(new_slots, 0, request * sizeof(void *));
-		mas_set_err(mas, -ENOMEM);
+	count = mas->node_request = mas_data_end(mas) + 1;
+	mas_alloc_nodes(mas, gfp);
+	if (unlikely(mas_is_err(mas)))
 		return;
-	}
 
-	/* Restore node type information in slots. */
 	slots = ma_slots(node, type);
 	for (i = 0; i < count; i++) {
 		val = (unsigned long)mt_slot_locked(mas->tree, slots, i);
 		val &= MAPLE_NODE_MASK;
-		((unsigned long *)new_slots)[i] |= val;
+		new_slots[i] = ma_mnode_ptr((unsigned long)mas_pop_node(mas) |
+					    val);
 	}
 }
 
@@ -6316,7 +6313,7 @@ static inline void mas_dup_build(struct ma_state *mas, struct ma_state *new_mas,
 		/* Only allocate child nodes for non-leaf nodes. */
 		mas_dup_alloc(mas, new_mas, gfp);
 		if (unlikely(mas_is_err(mas)))
-			return;
+			goto empty_mas;
 	} else {
 		/*
 		 * This is the last leaf node and duplication is
@@ -6349,6 +6346,8 @@ static inline void mas_dup_build(struct ma_state *mas, struct ma_state *new_mas,
 	/* Make them the same height */
 	new_mas->tree->ma_flags = mas->tree->ma_flags;
 	rcu_assign_pointer(new_mas->tree->ma_root, root);
+empty_mas:
+	mas_empty_nodes(mas);
 }
 
 /**
-- 
2.51.0
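Taken together, these patches replace the maple tree's bespoke per-ma_state node list with the prefilled sheaf interface, plus a single spare node for the store fast path. The sketch below is not taken from any of the patches; it is a minimal illustration, under the stated assumptions, of how a kmem_cache user is expected to drive that interface, using only the calls that lib/maple_tree.c makes above (kmem_cache_prefill_sheaf(), kmem_cache_refill_sheaf(), kmem_cache_alloc_from_sheaf(), kmem_cache_return_sheaf()). The cache pointer, function name, object use and request count are hypothetical.

/*
 * Illustrative sketch only: mirrors the mt_get_sheaf() / mas_pop_node() /
 * mas_destroy() pattern used by the maple tree above.  "example_cache",
 * "example_sheaf_usage" and "nr_needed" are made up for this example; the
 * kmem_cache_*_sheaf() calls are the ones used in lib/maple_tree.c.
 */
static int example_sheaf_usage(struct kmem_cache *example_cache,
			       unsigned int nr_needed, gfp_t gfp)
{
	struct slab_sheaf *sheaf;
	unsigned int i;

	/* Sleepable setup: prefill a sheaf with at least nr_needed objects. */
	sheaf = kmem_cache_prefill_sheaf(example_cache, gfp, nr_needed);
	if (!sheaf)
		return -ENOMEM;

	/*
	 * If a previously prefilled sheaf turns out to be too small, it can
	 * be topped up instead of being replaced, as mas_alloc_nodes() does:
	 * kmem_cache_refill_sheaf(example_cache, gfp, &sheaf, nr_needed);
	 */

	/*
	 * Consumer side: draw objects from the prefilled sheaf with
	 * GFP_NOWAIT, the way mas_pop_node() does under the tree lock.
	 */
	for (i = 0; i < nr_needed; i++) {
		void *obj;

		obj = kmem_cache_alloc_from_sheaf(example_cache, GFP_NOWAIT,
						  sheaf);
		if (WARN_ON_ONCE(!obj))
			break;
		/* ... initialize and insert obj ... */
	}

	/*
	 * Hand back whatever was not consumed, as mas_destroy() /
	 * mas_empty_nodes() do via mt_return_sheaf().
	 */
	kmem_cache_return_sheaf(example_cache, GFP_NOWAIT, sheaf);
	return 0;
}

The point of the split is that the prefill/refill side can use the caller's gfp mask, while the pop side runs with GFP_NOWAIT in contexts that must not sleep; the mas->alloc field added in patch 22 simply short-circuits this for the common single-node store.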