From nobody Thu Oct 9 08:47:44 2025
From: Boqun Feng <boqun.feng@gmail.com>
To: linux-kernel@vger.kernel.org, rust-for-linux@vger.kernel.org, lkmm@lists.linux.dev, linux-arch@vger.kernel.org
Cc: Miguel Ojeda, Alex Gaynor, Boqun Feng, Gary Guo, Björn Roy Baron, Benno Lossin, Andreas Hindborg, Alice Ryhl, Trevor Gross, Danilo Krummrich, Will Deacon, Peter Zijlstra, Mark Rutland, Wedson Almeida Filho, Viresh Kumar, Lyude Paul, Ingo Molnar, Mitchell Levy, "Paul E. McKenney", Greg Kroah-Hartman, Linus Torvalds, Thomas Gleixner
Subject: [PATCH v5 01/10] rust: Introduce atomic API helpers
Date: Wed, 18 Jun 2025 09:49:25 -0700
Message-Id: <20250618164934.19817-2-boqun.feng@gmail.com>
X-Mailer: git-send-email 2.39.5 (Apple Git-154)
In-Reply-To: <20250618164934.19817-1-boqun.feng@gmail.com>
References: <20250618164934.19817-1-boqun.feng@gmail.com>
MIME-Version: 1.0
Content-Type: text/plain; charset="utf-8"

In order to support LKMM atomics in Rust, add rust_helper_* wrappers for the atomic APIs. These helpers ensure that the implementation of LKMM atomics in Rust is the same as in C, which avoids the maintenance burden of keeping two similar atomic implementations in asm.
Originally-by: Mark Rutland
Signed-off-by: Boqun Feng <boqun.feng@gmail.com>
Reviewed-by: Alice Ryhl
---
 rust/helpers/atomic.c                     | 1038 +++++++++++++++++++++
 rust/helpers/helpers.c                    |    1 +
 scripts/atomic/gen-atomics.sh             |    1 +
 scripts/atomic/gen-rust-atomic-helpers.sh |   65 ++
 4 files changed, 1105 insertions(+)
 create mode 100644 rust/helpers/atomic.c
 create mode 100755 scripts/atomic/gen-rust-atomic-helpers.sh

diff --git a/rust/helpers/atomic.c b/rust/helpers/atomic.c
new file mode 100644
index 000000000000..00bf10887928
--- /dev/null
+++ b/rust/helpers/atomic.c
@@ -0,0 +1,1038 @@
+// SPDX-License-Identifier: GPL-2.0
+
+// Generated by scripts/atomic/gen-rust-atomic-helpers.sh
+// DO NOT MODIFY THIS FILE DIRECTLY
+
+/*
+ * This file provides helpers for the various atomic functions for Rust.
+ */
+#ifndef _RUST_ATOMIC_API_H
+#define _RUST_ATOMIC_API_H
+
+#include <linux/atomic.h>
+
+// TODO: Remove this after LTO helper support is added.
+#define __rust_helper
+
+__rust_helper int
+rust_helper_atomic_read(const atomic_t *v)
+{
+	return atomic_read(v);
+}
+
+__rust_helper int
+rust_helper_atomic_read_acquire(const atomic_t *v)
+{
+	return atomic_read_acquire(v);
+}
+
+__rust_helper void
+rust_helper_atomic_set(atomic_t *v, int i)
+{
+	atomic_set(v, i);
+}
+
+__rust_helper void
+rust_helper_atomic_set_release(atomic_t *v, int i)
+{
+	atomic_set_release(v, i);
+}
+
+__rust_helper void
+rust_helper_atomic_add(int i, atomic_t *v)
+{
+	atomic_add(i, v);
+}
+
+__rust_helper int
+rust_helper_atomic_add_return(int i, atomic_t *v)
+{
+	return atomic_add_return(i, v);
+}
+
+__rust_helper int
+rust_helper_atomic_add_return_acquire(int i, atomic_t *v)
+{
+	return atomic_add_return_acquire(i, v);
+}
+
+__rust_helper int
+rust_helper_atomic_add_return_release(int i, atomic_t *v)
+{
+	return atomic_add_return_release(i, v);
+}
+
+__rust_helper int
+rust_helper_atomic_add_return_relaxed(int i, atomic_t *v)
+{
+	return atomic_add_return_relaxed(i, v);
+}
+
+__rust_helper int
+rust_helper_atomic_fetch_add(int i, atomic_t *v)
+{
+	return atomic_fetch_add(i, v);
+}
+
+__rust_helper int
+rust_helper_atomic_fetch_add_acquire(int i, atomic_t *v)
+{
+	return atomic_fetch_add_acquire(i, v);
+}
+
+__rust_helper int
+rust_helper_atomic_fetch_add_release(int i, atomic_t *v)
+{
+	return atomic_fetch_add_release(i, v);
+}
+
+__rust_helper int
+rust_helper_atomic_fetch_add_relaxed(int i, atomic_t *v)
+{
+	return atomic_fetch_add_relaxed(i, v);
+}
+
+__rust_helper void
+rust_helper_atomic_sub(int i, atomic_t *v)
+{
+	atomic_sub(i, v);
+}
+
+__rust_helper int
+rust_helper_atomic_sub_return(int i, atomic_t *v)
+{
+	return atomic_sub_return(i, v);
+}
+
+__rust_helper int
+rust_helper_atomic_sub_return_acquire(int i, atomic_t *v)
+{
+	return atomic_sub_return_acquire(i, v);
+}
+
+__rust_helper int
+rust_helper_atomic_sub_return_release(int i, atomic_t *v)
+{
+	return atomic_sub_return_release(i, v);
+}
+
+__rust_helper int
+rust_helper_atomic_sub_return_relaxed(int i, atomic_t *v)
+{
+	return atomic_sub_return_relaxed(i, v);
+}
+
+__rust_helper int
+rust_helper_atomic_fetch_sub(int i, atomic_t *v)
+{
+	return atomic_fetch_sub(i, v);
+}
+
+__rust_helper int
+rust_helper_atomic_fetch_sub_acquire(int i, atomic_t *v)
+{
+	return atomic_fetch_sub_acquire(i, v);
+}
+
+__rust_helper int
+rust_helper_atomic_fetch_sub_release(int i, atomic_t *v)
+{
+	return atomic_fetch_sub_release(i, v);
+}
+
+__rust_helper int
+rust_helper_atomic_fetch_sub_relaxed(int i, atomic_t *v)
+{
+	return atomic_fetch_sub_relaxed(i, v);
+}
+
+__rust_helper void
+rust_helper_atomic_inc(atomic_t *v)
+{
+	atomic_inc(v);
+}
+
+__rust_helper int
+rust_helper_atomic_inc_return(atomic_t *v)
+{
+	return atomic_inc_return(v);
+}
+
+__rust_helper int
+rust_helper_atomic_inc_return_acquire(atomic_t *v)
+{
+	return atomic_inc_return_acquire(v);
+}
+
+__rust_helper int
+rust_helper_atomic_inc_return_release(atomic_t *v)
+{
+	return atomic_inc_return_release(v);
+}
+
+__rust_helper int
+rust_helper_atomic_inc_return_relaxed(atomic_t *v)
+{
+	return atomic_inc_return_relaxed(v);
+}
+
+__rust_helper int
+rust_helper_atomic_fetch_inc(atomic_t *v)
+{
+	return atomic_fetch_inc(v);
+}
+
+__rust_helper int
+rust_helper_atomic_fetch_inc_acquire(atomic_t *v)
+{
+	return atomic_fetch_inc_acquire(v);
+}
+
+__rust_helper int
+rust_helper_atomic_fetch_inc_release(atomic_t *v)
+{
+	return atomic_fetch_inc_release(v);
+}
+
+__rust_helper int
+rust_helper_atomic_fetch_inc_relaxed(atomic_t *v)
+{
+	return atomic_fetch_inc_relaxed(v);
+}
+
+__rust_helper void
+rust_helper_atomic_dec(atomic_t *v)
+{
+	atomic_dec(v);
+}
+
+__rust_helper int
+rust_helper_atomic_dec_return(atomic_t *v)
+{
+	return atomic_dec_return(v);
+}
+
+__rust_helper int
+rust_helper_atomic_dec_return_acquire(atomic_t *v)
+{
+	return atomic_dec_return_acquire(v);
+}
+
+__rust_helper int
+rust_helper_atomic_dec_return_release(atomic_t *v)
+{
+	return atomic_dec_return_release(v);
+}
+
+__rust_helper int
+rust_helper_atomic_dec_return_relaxed(atomic_t *v)
+{
+	return atomic_dec_return_relaxed(v);
+}
+
+__rust_helper int
+rust_helper_atomic_fetch_dec(atomic_t *v)
+{
+	return atomic_fetch_dec(v);
+}
+
+__rust_helper int
+rust_helper_atomic_fetch_dec_acquire(atomic_t *v)
+{
+	return atomic_fetch_dec_acquire(v);
+}
+
+__rust_helper int
+rust_helper_atomic_fetch_dec_release(atomic_t *v)
+{
+	return atomic_fetch_dec_release(v);
+}
+
+__rust_helper int
+rust_helper_atomic_fetch_dec_relaxed(atomic_t *v)
+{
+	return atomic_fetch_dec_relaxed(v);
+}
+
+__rust_helper void
+rust_helper_atomic_and(int i, atomic_t *v)
+{
+	atomic_and(i, v);
+}
+
+__rust_helper int
+rust_helper_atomic_fetch_and(int i, atomic_t *v)
+{
+	return atomic_fetch_and(i, v);
+}
+
+__rust_helper int
+rust_helper_atomic_fetch_and_acquire(int i, atomic_t *v)
+{
+	return atomic_fetch_and_acquire(i, v);
+}
+
+__rust_helper int
+rust_helper_atomic_fetch_and_release(int i, atomic_t *v)
+{
+	return atomic_fetch_and_release(i, v);
+}
+
+__rust_helper int
+rust_helper_atomic_fetch_and_relaxed(int i, atomic_t *v)
+{
+	return atomic_fetch_and_relaxed(i, v);
+}
+
+__rust_helper void
+rust_helper_atomic_andnot(int i, atomic_t *v)
+{
+	atomic_andnot(i, v);
+}
+
+__rust_helper int
+rust_helper_atomic_fetch_andnot(int i, atomic_t *v)
+{
+	return atomic_fetch_andnot(i, v);
+}
+
+__rust_helper int
+rust_helper_atomic_fetch_andnot_acquire(int i, atomic_t *v)
+{
+	return atomic_fetch_andnot_acquire(i, v);
+}
+
+__rust_helper int
+rust_helper_atomic_fetch_andnot_release(int i, atomic_t *v)
+{
+	return atomic_fetch_andnot_release(i, v);
+}
+
+__rust_helper int
+rust_helper_atomic_fetch_andnot_relaxed(int i, atomic_t *v)
+{
+	return atomic_fetch_andnot_relaxed(i, v);
+}
+
+__rust_helper void
+rust_helper_atomic_or(int i, atomic_t *v)
+{
+	atomic_or(i, v);
+}
+
+__rust_helper int
+rust_helper_atomic_fetch_or(int i, atomic_t *v)
+{
+	return atomic_fetch_or(i, v);
+}
+
+__rust_helper int
+rust_helper_atomic_fetch_or_acquire(int i, atomic_t *v)
+{
+	return atomic_fetch_or_acquire(i, v);
+}
+
+__rust_helper int
+rust_helper_atomic_fetch_or_release(int i, atomic_t *v)
+{
+	return atomic_fetch_or_release(i, v);
+}
+
+__rust_helper int
+rust_helper_atomic_fetch_or_relaxed(int i, atomic_t *v)
+{
+	return atomic_fetch_or_relaxed(i, v);
+}
+
+__rust_helper void
+rust_helper_atomic_xor(int i, atomic_t *v)
+{
+	atomic_xor(i, v);
+}
+
+__rust_helper int
+rust_helper_atomic_fetch_xor(int i, atomic_t *v)
+{
+	return atomic_fetch_xor(i, v);
+}
+
+__rust_helper int
+rust_helper_atomic_fetch_xor_acquire(int i, atomic_t *v)
+{
+	return atomic_fetch_xor_acquire(i, v);
+}
+
+__rust_helper int
+rust_helper_atomic_fetch_xor_release(int i, atomic_t *v)
+{
+	return atomic_fetch_xor_release(i, v);
+}
+
+__rust_helper int
+rust_helper_atomic_fetch_xor_relaxed(int i, atomic_t *v)
+{
+	return atomic_fetch_xor_relaxed(i, v);
+}
+
+__rust_helper int
+rust_helper_atomic_xchg(atomic_t *v, int new)
+{
+	return atomic_xchg(v, new);
+}
+
+__rust_helper int
+rust_helper_atomic_xchg_acquire(atomic_t *v, int new)
+{
+	return atomic_xchg_acquire(v, new);
+}
+
+__rust_helper int
+rust_helper_atomic_xchg_release(atomic_t *v, int new)
+{
+	return atomic_xchg_release(v, new);
+}
+
+__rust_helper int
+rust_helper_atomic_xchg_relaxed(atomic_t *v, int new)
+{
+	return atomic_xchg_relaxed(v, new);
+}
+
+__rust_helper int
+rust_helper_atomic_cmpxchg(atomic_t *v, int old, int new)
+{
+	return atomic_cmpxchg(v, old, new);
+}
+
+__rust_helper int
+rust_helper_atomic_cmpxchg_acquire(atomic_t *v, int old, int new)
+{
+	return atomic_cmpxchg_acquire(v, old, new);
+}
+
+__rust_helper int
+rust_helper_atomic_cmpxchg_release(atomic_t *v, int old, int new)
+{
+	return atomic_cmpxchg_release(v, old, new);
+}
+
+__rust_helper int
+rust_helper_atomic_cmpxchg_relaxed(atomic_t *v, int old, int new)
+{
+	return atomic_cmpxchg_relaxed(v, old, new);
+}
+
+__rust_helper bool
+rust_helper_atomic_try_cmpxchg(atomic_t *v, int *old, int new)
+{
+	return atomic_try_cmpxchg(v, old, new);
+}
+
+__rust_helper bool
+rust_helper_atomic_try_cmpxchg_acquire(atomic_t *v, int *old, int new)
+{
+	return atomic_try_cmpxchg_acquire(v, old, new);
+}
+
+__rust_helper bool
+rust_helper_atomic_try_cmpxchg_release(atomic_t *v, int *old, int new)
+{
+	return atomic_try_cmpxchg_release(v, old, new);
+}
+
+__rust_helper bool
+rust_helper_atomic_try_cmpxchg_relaxed(atomic_t *v, int *old, int new)
+{
+	return atomic_try_cmpxchg_relaxed(v, old, new);
+}
+
+__rust_helper bool
+rust_helper_atomic_sub_and_test(int i, atomic_t *v)
+{
+	return atomic_sub_and_test(i, v);
+}
+
+__rust_helper bool
+rust_helper_atomic_dec_and_test(atomic_t *v)
+{
+	return atomic_dec_and_test(v);
+}
+
+__rust_helper bool
+rust_helper_atomic_inc_and_test(atomic_t *v)
+{
+	return atomic_inc_and_test(v);
+}
+
+__rust_helper bool
+rust_helper_atomic_add_negative(int i, atomic_t *v)
+{
+	return atomic_add_negative(i, v);
+}
+
+__rust_helper bool
+rust_helper_atomic_add_negative_acquire(int i, atomic_t *v)
+{
+	return atomic_add_negative_acquire(i, v);
+}
+
+__rust_helper bool
+rust_helper_atomic_add_negative_release(int i, atomic_t *v)
+{
+	return atomic_add_negative_release(i, v);
+}
+
+__rust_helper bool
+rust_helper_atomic_add_negative_relaxed(int i, atomic_t *v)
+{
+	return atomic_add_negative_relaxed(i, v);
+}
+
+__rust_helper int
+rust_helper_atomic_fetch_add_unless(atomic_t *v, int a, int u)
+{
+	return atomic_fetch_add_unless(v, a, u);
+}
+
+__rust_helper bool
+rust_helper_atomic_add_unless(atomic_t *v, int a, int u)
+{
+	return atomic_add_unless(v, a, u);
+}
+
+__rust_helper bool
+rust_helper_atomic_inc_not_zero(atomic_t *v)
+{
+	return atomic_inc_not_zero(v);
+}
+
+__rust_helper bool
+rust_helper_atomic_inc_unless_negative(atomic_t *v)
+{
+	return atomic_inc_unless_negative(v);
+}
+
+__rust_helper bool
+rust_helper_atomic_dec_unless_positive(atomic_t *v)
+{
+	return atomic_dec_unless_positive(v);
+}
+
+__rust_helper int
+rust_helper_atomic_dec_if_positive(atomic_t *v)
+{
+	return atomic_dec_if_positive(v);
+}
+
+__rust_helper s64
+rust_helper_atomic64_read(const atomic64_t *v)
+{
+	return atomic64_read(v);
+}
+
+__rust_helper s64
+rust_helper_atomic64_read_acquire(const atomic64_t *v)
+{
+	return atomic64_read_acquire(v);
+}
+
+__rust_helper void
+rust_helper_atomic64_set(atomic64_t *v, s64 i)
+{
+	atomic64_set(v, i);
+}
+
+__rust_helper void
+rust_helper_atomic64_set_release(atomic64_t *v, s64 i)
+{
+	atomic64_set_release(v, i);
+}
+
+__rust_helper void
+rust_helper_atomic64_add(s64 i, atomic64_t *v)
+{
+	atomic64_add(i, v);
+}
+
+__rust_helper s64
+rust_helper_atomic64_add_return(s64 i, atomic64_t *v)
+{
+	return atomic64_add_return(i, v);
+}
+
+__rust_helper s64
+rust_helper_atomic64_add_return_acquire(s64 i, atomic64_t *v)
+{
+	return atomic64_add_return_acquire(i, v);
+}
+
+__rust_helper s64
+rust_helper_atomic64_add_return_release(s64 i, atomic64_t *v)
+{
+	return atomic64_add_return_release(i, v);
+}
+
+__rust_helper s64
+rust_helper_atomic64_add_return_relaxed(s64 i, atomic64_t *v)
+{
+	return atomic64_add_return_relaxed(i, v);
+}
+
+__rust_helper s64
+rust_helper_atomic64_fetch_add(s64 i, atomic64_t *v)
+{
+	return atomic64_fetch_add(i, v);
+}
+
+__rust_helper s64
+rust_helper_atomic64_fetch_add_acquire(s64 i, atomic64_t *v)
+{
+	return atomic64_fetch_add_acquire(i, v);
+}
+
+__rust_helper s64
+rust_helper_atomic64_fetch_add_release(s64 i, atomic64_t *v)
+{
+	return atomic64_fetch_add_release(i, v);
+}
+
+__rust_helper s64
+rust_helper_atomic64_fetch_add_relaxed(s64 i, atomic64_t *v)
+{
+	return atomic64_fetch_add_relaxed(i, v);
+}
+
+__rust_helper void
+rust_helper_atomic64_sub(s64 i, atomic64_t *v)
+{
+	atomic64_sub(i, v);
+}
+
+__rust_helper s64
+rust_helper_atomic64_sub_return(s64 i, atomic64_t *v)
+{
+	return atomic64_sub_return(i, v);
+}
+
+__rust_helper s64
+rust_helper_atomic64_sub_return_acquire(s64 i, atomic64_t *v)
+{
+	return atomic64_sub_return_acquire(i, v);
+}
+
+__rust_helper s64
+rust_helper_atomic64_sub_return_release(s64 i, atomic64_t *v)
+{
+	return atomic64_sub_return_release(i, v);
+}
+
+__rust_helper s64
+rust_helper_atomic64_sub_return_relaxed(s64 i, atomic64_t *v)
+{
+	return atomic64_sub_return_relaxed(i, v);
+}
+
+__rust_helper s64
+rust_helper_atomic64_fetch_sub(s64 i, atomic64_t *v)
+{
+	return atomic64_fetch_sub(i, v);
+}
+
+__rust_helper s64
+rust_helper_atomic64_fetch_sub_acquire(s64 i, atomic64_t *v)
+{
+	return atomic64_fetch_sub_acquire(i, v);
+}
+
+__rust_helper s64
+rust_helper_atomic64_fetch_sub_release(s64 i, atomic64_t *v)
+{
+	return atomic64_fetch_sub_release(i, v);
+}
+
+__rust_helper s64
+rust_helper_atomic64_fetch_sub_relaxed(s64 i, atomic64_t *v)
+{
+	return atomic64_fetch_sub_relaxed(i, v);
+}
+
+__rust_helper void
+rust_helper_atomic64_inc(atomic64_t *v)
+{
+	atomic64_inc(v);
+}
+
+__rust_helper s64
+rust_helper_atomic64_inc_return(atomic64_t *v)
+{
+	return atomic64_inc_return(v);
+}
+
+__rust_helper s64
+rust_helper_atomic64_inc_return_acquire(atomic64_t *v)
+{
+	return atomic64_inc_return_acquire(v);
+}
+
+__rust_helper s64
+rust_helper_atomic64_inc_return_release(atomic64_t *v)
+{
+	return atomic64_inc_return_release(v);
+}
+
+__rust_helper s64
+rust_helper_atomic64_inc_return_relaxed(atomic64_t *v)
+{
+	return atomic64_inc_return_relaxed(v);
+}
+
+__rust_helper s64
+rust_helper_atomic64_fetch_inc(atomic64_t *v)
+{
+	return atomic64_fetch_inc(v);
+}
+
+__rust_helper s64
+rust_helper_atomic64_fetch_inc_acquire(atomic64_t *v)
+{
+	return atomic64_fetch_inc_acquire(v);
+}
+
+__rust_helper s64
+rust_helper_atomic64_fetch_inc_release(atomic64_t *v)
+{
+	return atomic64_fetch_inc_release(v);
+}
+
+__rust_helper s64
+rust_helper_atomic64_fetch_inc_relaxed(atomic64_t *v)
+{
+	return atomic64_fetch_inc_relaxed(v);
+}
+
+__rust_helper void
+rust_helper_atomic64_dec(atomic64_t *v)
+{
+	atomic64_dec(v);
+}
+
+__rust_helper s64
+rust_helper_atomic64_dec_return(atomic64_t *v)
+{
+	return atomic64_dec_return(v);
+}
+
+__rust_helper s64
+rust_helper_atomic64_dec_return_acquire(atomic64_t *v)
+{
+	return atomic64_dec_return_acquire(v);
+}
+
+__rust_helper s64
+rust_helper_atomic64_dec_return_release(atomic64_t *v)
+{
+	return atomic64_dec_return_release(v);
+}
+
+__rust_helper s64
+rust_helper_atomic64_dec_return_relaxed(atomic64_t *v)
+{
+	return atomic64_dec_return_relaxed(v);
+}
+
+__rust_helper s64
+rust_helper_atomic64_fetch_dec(atomic64_t *v)
+{
+	return atomic64_fetch_dec(v);
+}
+
+__rust_helper s64
+rust_helper_atomic64_fetch_dec_acquire(atomic64_t *v)
+{
+	return atomic64_fetch_dec_acquire(v);
+}
+
+__rust_helper s64
+rust_helper_atomic64_fetch_dec_release(atomic64_t *v)
+{
+	return atomic64_fetch_dec_release(v);
+}
+
+__rust_helper s64
+rust_helper_atomic64_fetch_dec_relaxed(atomic64_t *v)
+{
+	return atomic64_fetch_dec_relaxed(v);
+}
+
+__rust_helper void
+rust_helper_atomic64_and(s64 i, atomic64_t *v)
+{
+	atomic64_and(i, v);
+}
+
+__rust_helper s64
+rust_helper_atomic64_fetch_and(s64 i, atomic64_t *v)
+{
+	return atomic64_fetch_and(i, v);
+}
+
+__rust_helper s64
+rust_helper_atomic64_fetch_and_acquire(s64 i, atomic64_t *v)
+{
+	return atomic64_fetch_and_acquire(i, v);
+}
+
+__rust_helper s64
+rust_helper_atomic64_fetch_and_release(s64 i, atomic64_t *v)
+{
+	return atomic64_fetch_and_release(i, v);
+}
+
+__rust_helper s64
+rust_helper_atomic64_fetch_and_relaxed(s64 i, atomic64_t *v)
+{
+	return atomic64_fetch_and_relaxed(i, v);
+}
+
+__rust_helper void
+rust_helper_atomic64_andnot(s64 i, atomic64_t *v)
+{
+	atomic64_andnot(i, v);
+}
+
+__rust_helper s64
+rust_helper_atomic64_fetch_andnot(s64 i, atomic64_t *v)
+{
+	return atomic64_fetch_andnot(i, v);
+}
+
+__rust_helper s64
+rust_helper_atomic64_fetch_andnot_acquire(s64 i, atomic64_t *v)
+{
+	return atomic64_fetch_andnot_acquire(i, v);
+}
+
+__rust_helper s64
+rust_helper_atomic64_fetch_andnot_release(s64 i, atomic64_t *v)
+{
+	return atomic64_fetch_andnot_release(i, v);
+}
+
+__rust_helper s64
+rust_helper_atomic64_fetch_andnot_relaxed(s64 i, atomic64_t *v)
+{
+	return atomic64_fetch_andnot_relaxed(i, v);
+}
+
+__rust_helper void
+rust_helper_atomic64_or(s64 i, atomic64_t *v)
+{
+	atomic64_or(i, v);
+}
+
+__rust_helper s64
+rust_helper_atomic64_fetch_or(s64 i, atomic64_t *v)
+{
+	return atomic64_fetch_or(i, v);
+}
+
+__rust_helper s64
+rust_helper_atomic64_fetch_or_acquire(s64 i, atomic64_t *v)
+{
+	return atomic64_fetch_or_acquire(i, v);
+}
+
+__rust_helper s64
+rust_helper_atomic64_fetch_or_release(s64 i, atomic64_t *v)
+{
+	return atomic64_fetch_or_release(i, v);
+}
+
+__rust_helper s64
+rust_helper_atomic64_fetch_or_relaxed(s64 i, atomic64_t *v)
+{
+	return atomic64_fetch_or_relaxed(i, v);
+}
+
+__rust_helper void
+rust_helper_atomic64_xor(s64 i, atomic64_t *v)
+{
+	atomic64_xor(i, v);
+}
+
+__rust_helper s64
+rust_helper_atomic64_fetch_xor(s64 i, atomic64_t *v)
+{
+	return atomic64_fetch_xor(i, v);
+}
+
+__rust_helper s64
+rust_helper_atomic64_fetch_xor_acquire(s64 i, atomic64_t *v)
+{
+	return atomic64_fetch_xor_acquire(i, v);
+}
+
+__rust_helper s64
+rust_helper_atomic64_fetch_xor_release(s64 i, atomic64_t *v)
+{
+	return atomic64_fetch_xor_release(i, v);
+}
+
+__rust_helper s64
+rust_helper_atomic64_fetch_xor_relaxed(s64 i, atomic64_t *v)
+{
+	return atomic64_fetch_xor_relaxed(i, v);
+}
+
+__rust_helper s64
+rust_helper_atomic64_xchg(atomic64_t *v, s64 new)
+{
+	return atomic64_xchg(v, new);
+}
+
+__rust_helper s64
+rust_helper_atomic64_xchg_acquire(atomic64_t *v, s64 new)
+{
+	return atomic64_xchg_acquire(v, new);
+}
+
+__rust_helper s64
+rust_helper_atomic64_xchg_release(atomic64_t *v, s64 new)
+{
+	return atomic64_xchg_release(v, new);
+}
+
+__rust_helper s64
+rust_helper_atomic64_xchg_relaxed(atomic64_t *v, s64 new)
+{
+	return atomic64_xchg_relaxed(v, new);
+}
+
+__rust_helper s64
+rust_helper_atomic64_cmpxchg(atomic64_t *v, s64 old, s64 new)
+{
+	return atomic64_cmpxchg(v, old, new);
+}
+
+__rust_helper s64
+rust_helper_atomic64_cmpxchg_acquire(atomic64_t *v, s64 old, s64 new)
+{
+	return atomic64_cmpxchg_acquire(v, old, new);
+}
+
+__rust_helper s64
+rust_helper_atomic64_cmpxchg_release(atomic64_t *v, s64 old, s64 new)
+{
+	return atomic64_cmpxchg_release(v, old, new);
+}
+
+__rust_helper s64
+rust_helper_atomic64_cmpxchg_relaxed(atomic64_t *v, s64 old, s64 new)
+{
+	return atomic64_cmpxchg_relaxed(v, old, new);
+}
+
+__rust_helper bool
+rust_helper_atomic64_try_cmpxchg(atomic64_t *v, s64 *old, s64 new)
+{
+	return atomic64_try_cmpxchg(v, old, new);
+}
+
+__rust_helper bool
+rust_helper_atomic64_try_cmpxchg_acquire(atomic64_t *v, s64 *old, s64 new)
+{
+	return atomic64_try_cmpxchg_acquire(v, old, new);
+}
+
+__rust_helper bool
+rust_helper_atomic64_try_cmpxchg_release(atomic64_t *v, s64 *old, s64 new)
+{
+	return atomic64_try_cmpxchg_release(v, old, new);
+}
+
+__rust_helper bool
+rust_helper_atomic64_try_cmpxchg_relaxed(atomic64_t *v, s64 *old, s64 new)
+{
+	return atomic64_try_cmpxchg_relaxed(v, old, new);
+}
+
+__rust_helper bool
+rust_helper_atomic64_sub_and_test(s64 i, atomic64_t *v)
+{
+	return atomic64_sub_and_test(i, v);
+}
+
+__rust_helper bool
+rust_helper_atomic64_dec_and_test(atomic64_t *v)
+{
+	return atomic64_dec_and_test(v);
+}
+
+__rust_helper bool
+rust_helper_atomic64_inc_and_test(atomic64_t *v)
+{
+	return atomic64_inc_and_test(v);
+}
+
+__rust_helper bool
+rust_helper_atomic64_add_negative(s64 i, atomic64_t *v)
+{
+	return atomic64_add_negative(i, v);
+}
+
+__rust_helper bool
+rust_helper_atomic64_add_negative_acquire(s64 i, atomic64_t *v)
+{
+	return atomic64_add_negative_acquire(i, v);
+}
+
+__rust_helper bool
+rust_helper_atomic64_add_negative_release(s64 i, atomic64_t *v)
+{
+	return atomic64_add_negative_release(i, v);
+}
+
+__rust_helper bool
+rust_helper_atomic64_add_negative_relaxed(s64 i, atomic64_t *v)
+{
+	return atomic64_add_negative_relaxed(i, v);
+}
+
+__rust_helper s64
+rust_helper_atomic64_fetch_add_unless(atomic64_t *v, s64 a, s64 u)
+{
+	return atomic64_fetch_add_unless(v, a, u);
+}
+
+__rust_helper bool
+rust_helper_atomic64_add_unless(atomic64_t *v, s64 a, s64 u)
+{
+	return atomic64_add_unless(v, a, u);
+}
+
+__rust_helper bool
+rust_helper_atomic64_inc_not_zero(atomic64_t *v)
+{
+	return atomic64_inc_not_zero(v);
+}
+
+__rust_helper bool
+rust_helper_atomic64_inc_unless_negative(atomic64_t *v)
+{
+	return atomic64_inc_unless_negative(v);
+}
+
+__rust_helper bool
+rust_helper_atomic64_dec_unless_positive(atomic64_t *v)
+{
+	return atomic64_dec_unless_positive(v);
+}
+
+__rust_helper s64
+rust_helper_atomic64_dec_if_positive(atomic64_t *v)
+{
+	return atomic64_dec_if_positive(v);
+}
+
+#endif /* _RUST_ATOMIC_API_H */
+// b032d261814b3e119b72dbf7d21447f6731325ee
diff --git a/rust/helpers/helpers.c b/rust/helpers/helpers.c
index 16fa9bca5949..83e89f6a68fb 100644
--- a/rust/helpers/helpers.c
+++ b/rust/helpers/helpers.c
@@ -7,6 +7,7 @@
  * Sorted alphabetically.
 */
 
+#include "atomic.c"
 #include "auxiliary.c"
 #include "blk.c"
 #include "bug.c"
diff --git a/scripts/atomic/gen-atomics.sh b/scripts/atomic/gen-atomics.sh
index 5b98a8307693..02508d0d6fe4 100755
--- a/scripts/atomic/gen-atomics.sh
+++ b/scripts/atomic/gen-atomics.sh
@@ -11,6 +11,7 @@ cat <<EOF |
 gen-atomic-instrumented.sh      linux/atomic/atomic-instrumented.h
 gen-atomic-long.sh              linux/atomic/atomic-long.h
 gen-atomic-fallback.sh          linux/atomic/atomic-arch-fallback.h
+gen-rust-atomic-helpers.sh      ../rust/helpers/atomic.c
 EOF
 while read script header args; do
 	/bin/sh ${ATOMICDIR}/${script} ${ATOMICTBL} ${args} > ${LINUXDIR}/include/${header}
diff --git a/scripts/atomic/gen-rust-atomic-helpers.sh b/scripts/atomic/gen-rust-atomic-helpers.sh
new file mode 100755
index 000000000000..72f2e5bde0c6
--- /dev/null
+++ b/scripts/atomic/gen-rust-atomic-helpers.sh
@@ -0,0 +1,65 @@
+#!/bin/sh
+# SPDX-License-Identifier: GPL-2.0
+
+ATOMICDIR=$(dirname $0)
+
+. ${ATOMICDIR}/atomic-tbl.sh
+
+#gen_proto_order_variant(meta, pfx, name, sfx, order, atomic, int, arg...)
+gen_proto_order_variant()
+{
+	local meta="$1"; shift
+	local pfx="$1"; shift
+	local name="$1"; shift
+	local sfx="$1"; shift
+	local order="$1"; shift
+	local atomic="$1"; shift
+	local int="$1"; shift
+
+	local atomicname="${atomic}_${pfx}${name}${sfx}${order}"
+
+	local ret="$(gen_ret_type "${meta}" "${int}")"
+	local params="$(gen_params "${int}" "${atomic}" "$@")"
+	local args="$(gen_args "$@")"
+	local retstmt="$(gen_ret_stmt "${meta}")"
+
+cat <<EOF
+__rust_helper ${ret}
+rust_helper_${atomicname}(${params})
+{
+	${retstmt}${atomicname}(${args});
+}
+
+EOF
+}
+
+cat << EOF
+// SPDX-License-Identifier: GPL-2.0
+
+// Generated by $0
+// DO NOT MODIFY THIS FILE DIRECTLY
+
+/*
+ * This file provides helpers for the various atomic functions for Rust.
+ */
+#ifndef _RUST_ATOMIC_API_H
+#define _RUST_ATOMIC_API_H
+
+#include <linux/atomic.h>
+
+// TODO: Remove this after LTO helper support is added.
+#define __rust_helper
+
+EOF
+
+grep '^[a-z]' "$1" | while read name meta args; do
+	gen_proto "${meta}" "${name}" "atomic" "int" ${args}
+done
+
+grep '^[a-z]' "$1" | while read name meta args; do
+	gen_proto "${meta}" "${name}" "atomic64" "s64" ${args}
+done
+
+cat <<EOF
+#endif /* _RUST_ATOMIC_API_H */
+EOF

From: Boqun Feng <boqun.feng@gmail.com>
To: linux-kernel@vger.kernel.org, rust-for-linux@vger.kernel.org, lkmm@lists.linux.dev, linux-arch@vger.kernel.org
Cc: Miguel Ojeda, Alex Gaynor, Boqun Feng, Gary Guo, Björn Roy Baron, Benno Lossin, Andreas Hindborg, Alice
Ryhl , Trevor Gross , Danilo Krummrich , Will Deacon , Peter Zijlstra , Mark Rutland , Wedson Almeida Filho , Viresh Kumar , Lyude Paul , Ingo Molnar , Mitchell Levy , "Paul E. McKenney" , "Greg Kroah-Hartman" , Linus Torvalds , "Thomas Gleixner"
Subject: [PATCH v5 02/10] rust: sync: Add basic atomic operation mapping framework
Date: Wed, 18 Jun 2025 09:49:26 -0700
Message-Id: <20250618164934.19817-3-boqun.feng@gmail.com>
X-Mailer: git-send-email 2.39.5 (Apple Git-154)
In-Reply-To: <20250618164934.19817-1-boqun.feng@gmail.com>
References: <20250618164934.19817-1-boqun.feng@gmail.com>
Precedence: bulk
X-Mailing-List: linux-kernel@vger.kernel.org
List-Id: List-Subscribe: List-Unsubscribe:
MIME-Version: 1.0
Content-Transfer-Encoding: quoted-printable
Content-Type: text/plain; charset="utf-8"

Preparation for generic atomic implementation. To unify the implementation of a generic method over `i32` and `i64`, the C side atomic methods need to be grouped so that in a generic method, they can be referred to as `<type>::<method>()`; otherwise their parameters and return values differ between `i32` and `i64`, which would require `transmute()` to unify the type into a `T`.

Introduce `AtomicImpl` to represent a basic type in Rust that has a direct mapping to an atomic implementation from C. This trait is sealed, and currently only `i32` and `i64` impl this.

Further, different methods are put into different `*Ops` trait groups; this is for the future when smaller types like `i8`/`i16` are supported but only with a limited set of APIs (e.g. only set(), load(), xchg() and cmpxchg(), no add() or sub() etc).

While the atomic mod is introduced, documentation is also added for memory models and data races.

Also bump my role to the maintainer of ATOMIC INFRASTRUCTURE to reflect my responsibility for the Rust atomic mod.
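The grouping this commit message describes can be sketched in standalone Rust. This is an illustrative stand-in only: the trait shape mirrors the patch (sealed `AtomicImpl` implemented for `i32` and `i64`), but the method bodies are plain pointer reads/writes rather than the real C `atomic_t`/`atomic64_t` helpers, and `roundtrip()` is a hypothetical helper that is not part of the series.

```rust
// Sketch of grouping per-width atomic ops behind one trait, so generic
// code can call `T::atomic_read(...)` with no `transmute()`.
mod private {
    pub trait Sealed {}
}
impl private::Sealed for i32 {}
impl private::Sealed for i64 {}

pub trait AtomicImpl: Sized + Send + Copy + private::Sealed {
    // Grouped "implementation" methods, one set per backing type.
    unsafe fn atomic_read(ptr: *mut Self) -> Self;
    unsafe fn atomic_set(ptr: *mut Self, v: Self);
}

impl AtomicImpl for i32 {
    unsafe fn atomic_read(ptr: *mut Self) -> Self { unsafe { ptr.read() } }
    unsafe fn atomic_set(ptr: *mut Self, v: Self) { unsafe { ptr.write(v) } }
}

impl AtomicImpl for i64 {
    unsafe fn atomic_read(ptr: *mut Self) -> Self { unsafe { ptr.read() } }
    unsafe fn atomic_set(ptr: *mut Self, v: Self) { unsafe { ptr.write(v) } }
}

// One generic body serves both widths: this is the `<type>::<method>()` shape.
fn roundtrip<T: AtomicImpl>(mut slot: T, v: T) -> T {
    let p: *mut T = &mut slot;
    // SAFETY: `p` points to a live local and there is no concurrent access.
    unsafe {
        T::atomic_set(p, v);
        T::atomic_read(p)
    }
}

fn main() {
    assert_eq!(roundtrip(0i32, 42), 42);
    assert_eq!(roundtrip(0i64, 42), 42);
}
```

Without the trait, a generic method would see different C function names and parameter types for each width; with it, the width-specific choice is made once, in the impl.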
Signed-off-by: Boqun Feng
Reviewed-by: Alice Ryhl
---
 MAINTAINERS                    |   4 +-
 rust/kernel/sync.rs            |   1 +
 rust/kernel/sync/atomic.rs     |  19 ++++
 rust/kernel/sync/atomic/ops.rs | 199 +++++++++++++++++++++++++++++++++
 4 files changed, 222 insertions(+), 1 deletion(-)
 create mode 100644 rust/kernel/sync/atomic.rs
 create mode 100644 rust/kernel/sync/atomic/ops.rs

diff --git a/MAINTAINERS b/MAINTAINERS
index 0c1d245bf7b8..5eef524975ca 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -3894,7 +3894,7 @@ F: drivers/input/touchscreen/atmel_mxt_ts.c
 ATOMIC INFRASTRUCTURE
 M: Will Deacon
 M: Peter Zijlstra
-R: Boqun Feng
+M: Boqun Feng
 R: Mark Rutland
 L: linux-kernel@vger.kernel.org
 S: Maintained
@@ -3903,6 +3903,8 @@ F: arch/*/include/asm/atomic*.h
 F: include/*/atomic*.h
 F: include/linux/refcount.h
 F: scripts/atomic/
+F: rust/kernel/sync/atomic.rs
+F: rust/kernel/sync/atomic/

 ATTO EXPRESSSAS SAS/SATA RAID SCSI DRIVER
 M: Bradley Grove
diff --git a/rust/kernel/sync.rs b/rust/kernel/sync.rs
index 36a719015583..b620027e0641 100644
--- a/rust/kernel/sync.rs
+++ b/rust/kernel/sync.rs
@@ -10,6 +10,7 @@ use pin_init;

 mod arc;
+pub mod atomic;
 mod condvar;
 pub mod lock;
 mod locked_by;
diff --git a/rust/kernel/sync/atomic.rs b/rust/kernel/sync/atomic.rs
new file mode 100644
index 000000000000..65e41dba97b7
--- /dev/null
+++ b/rust/kernel/sync/atomic.rs
@@ -0,0 +1,19 @@
+// SPDX-License-Identifier: GPL-2.0
+
+//! Atomic primitives.
+//!
+//! These primitives have the same semantics as their C counterparts, and the precise definitions of
+//! the semantics can be found at [`LKMM`]. Note that the Linux Kernel Memory (Consistency) Model is the
+//! only model for Rust code in the kernel, and Rust's own atomics should be avoided.
+//!
+//! # Data races
+//!
+//! [`LKMM`] atomics have different rules regarding data races:
+//!
+//! - A normal write from the C side is treated as an atomic write if
+//!   CONFIG_KCSAN_ASSUME_PLAIN_WRITES_ATOMIC=y.
+//! - Mixed-size atomic accesses don't cause data races.
+//!
+//! [`LKMM`]: srctree/tools/memory-model/
+
+pub mod ops;
diff --git a/rust/kernel/sync/atomic/ops.rs b/rust/kernel/sync/atomic/ops.rs
new file mode 100644
index 000000000000..f8825f7c84f0
--- /dev/null
+++ b/rust/kernel/sync/atomic/ops.rs
@@ -0,0 +1,199 @@
+// SPDX-License-Identifier: GPL-2.0
+
+//! Atomic implementations.
+//!
+//! Provides a 1:1 mapping of atomic implementations.
+
+use crate::bindings::*;
+use crate::macros::paste;
+
+mod private {
+    /// Sealed trait marker to disable customized impls on atomic implementation traits.
+    pub trait Sealed {}
+}
+
+// `i32` and `i64` are the only supported atomic implementations.
+impl private::Sealed for i32 {}
+impl private::Sealed for i64 {}
+
+/// A marker trait for types that implement atomic operations with C side primitives.
+///
+/// This trait is sealed, and only types that have a direct mapping to the C side atomics should
+/// impl this:
+///
+/// - `i32` maps to `atomic_t`.
+/// - `i64` maps to `atomic64_t`.
+pub trait AtomicImpl: Sized + Send + Copy + private::Sealed {}
+
+// `atomic_t` implements atomic operations on `i32`.
+impl AtomicImpl for i32 {}
+
+// `atomic64_t` implements atomic operations on `i64`.
+impl AtomicImpl for i64 {}
+
+// This macro generates the function signature with the given argument list and return type.
+macro_rules! declare_atomic_method {
+    (
+        $func:ident($($arg:ident : $arg_type:ty),*) $(-> $ret:ty)?
+    ) => {
+        paste!(
+            #[doc = concat!("Atomic ", stringify!($func))]
+            #[doc = "# Safety"]
+            #[doc = "- Any pointer passed to the function has to be a valid pointer"]
+            #[doc = "- Accesses must not cause data races per LKMM:"]
+            #[doc = "  - An atomic read racing with a normal read, normal write or atomic write is not a data race."]
+            #[doc = "  - An atomic write racing with a normal read or normal write is a data race, unless the"]
+            #[doc = "    normal accesses are done on the C side and considered as immune to data"]
+            #[doc = "    races, e.g.
CONFIG_KCSAN_ASSUME_PLAIN_WRITES_ATOMIC."]
+            unsafe fn [< atomic_ $func >]($($arg: $arg_type,)*) $(-> $ret)?;
+        );
+    };
+    (
+        $func:ident [$variant:ident $($rest:ident)*]($($arg_sig:tt)*) $(-> $ret:ty)?
+    ) => {
+        paste!(
+            declare_atomic_method!(
+                [< $func _ $variant >]($($arg_sig)*) $(-> $ret)?
+            );
+        );
+
+        declare_atomic_method!(
+            $func [$($rest)*]($($arg_sig)*) $(-> $ret)?
+        );
+    };
+    (
+        $func:ident []($($arg_sig:tt)*) $(-> $ret:ty)?
+    ) => {
+        declare_atomic_method!(
+            $func($($arg_sig)*) $(-> $ret)?
+        );
+    }
+}
+
+// This macro generates the function implementation with the given argument list and return type, and it
+// will replace the "call(...)" expression with "$ctype _ $func" to call the real C function.
+macro_rules! impl_atomic_method {
+    (
+        ($ctype:ident) $func:ident($($arg:ident: $arg_type:ty),*) $(-> $ret:ty)? {
+            call($($c_arg:expr),*)
+        }
+    ) => {
+        paste!(
+            #[inline(always)]
+            unsafe fn [< atomic_ $func >]($($arg: $arg_type,)*) $(-> $ret)? {
+                // SAFETY: Per the function safety requirement, all pointers are valid, and accesses
+                // won't cause data races per LKMM.
+                unsafe { [< $ctype _ $func >]($($c_arg,)*) }
+            }
+        );
+    };
+    (
+        ($ctype:ident) $func:ident[$variant:ident $($rest:ident)*]($($arg_sig:tt)*) $(-> $ret:ty)? {
+            call($($arg:tt)*)
+        }
+    ) => {
+        paste!(
+            impl_atomic_method!(
+                ($ctype) [< $func _ $variant >]($($arg_sig)*) $( -> $ret)? {
+                    call($($arg)*)
+                }
+            );
+        );
+        impl_atomic_method!(
+            ($ctype) $func [$($rest)*]($($arg_sig)*) $( -> $ret)? {
+                call($($arg)*)
+            }
+        );
+    };
+    (
+        ($ctype:ident) $func:ident[]($($arg_sig:tt)*) $( -> $ret:ty)? {
+            call($($arg:tt)*)
+        }
+    ) => {
+        impl_atomic_method!(
+            ($ctype) $func($($arg_sig)*) $(-> $ret)? {
+                call($($arg)*)
+            }
+        );
+    }
+}
+
+// Declares the $ops trait with methods and implements the trait for `i32` and `i64`.
+macro_rules! declare_and_impl_atomic_methods {
+    ($ops:ident ($doc:literal) {
+        $(
+            $func:ident [$($variant:ident),*]($($arg_sig:tt)*) $( -> $ret:ty)?
{
+                call($($arg:tt)*)
+            }
+        )*
+    }) => {
+        #[doc = $doc]
+        pub trait $ops: AtomicImpl {
+            $(
+                declare_atomic_method!(
+                    $func[$($variant)*]($($arg_sig)*) $(-> $ret)?
+                );
+            )*
+        }
+
+        impl $ops for i32 {
+            $(
+                impl_atomic_method!(
+                    (atomic) $func[$($variant)*]($($arg_sig)*) $(-> $ret)? {
+                        call($($arg)*)
+                    }
+                );
+            )*
+        }
+
+        impl $ops for i64 {
+            $(
+                impl_atomic_method!(
+                    (atomic64) $func[$($variant)*]($($arg_sig)*) $(-> $ret)? {
+                        call($($arg)*)
+                    }
+                );
+            )*
+        }
+    }
+}
+
+declare_and_impl_atomic_methods!(
+    AtomicHasBasicOps ("Basic atomic operations") {
+        read[acquire](ptr: *mut Self) -> Self {
+            call(ptr as *mut _)
+        }
+
+        set[release](ptr: *mut Self, v: Self) {
+            call(ptr as *mut _, v)
+        }
+    }
+);
+
+declare_and_impl_atomic_methods!(
+    AtomicHasXchgOps ("Exchange and compare-and-exchange atomic operations") {
+        xchg[acquire, release, relaxed](ptr: *mut Self, v: Self) -> Self {
+            call(ptr as *mut _, v)
+        }
+
+        cmpxchg[acquire, release, relaxed](ptr: *mut Self, old: Self, new: Self) -> Self {
+            call(ptr as *mut _, old, new)
+        }
+
+        try_cmpxchg[acquire, release, relaxed](ptr: *mut Self, old: *mut Self, new: Self) -> bool {
+            call(ptr as *mut _, old, new)
+        }
+    }
+);
+
+declare_and_impl_atomic_methods!(
+    AtomicHasArithmeticOps ("Atomic arithmetic operations") {
+        add[](ptr: *mut Self, v: Self) {
+            call(v, ptr as *mut _)
+        }
+
+        fetch_add[acquire, release, relaxed](ptr: *mut Self, v: Self) -> Self {
+            call(v, ptr as *mut _)
+        }
+    }
+);
--
2.39.5 (Apple Git-154)

From nobody Thu Oct 9 08:47:44 2025
From:
Boqun Feng
To: linux-kernel@vger.kernel.org, rust-for-linux@vger.kernel.org, lkmm@lists.linux.dev, linux-arch@vger.kernel.org
Cc: Miguel Ojeda , Alex Gaynor , Boqun Feng , Gary Guo , Björn Roy Baron , Benno Lossin , Andreas Hindborg , Alice Ryhl , Trevor Gross , Danilo Krummrich , Will Deacon , Peter Zijlstra , Mark Rutland , Wedson Almeida Filho , Viresh Kumar , Lyude Paul , Ingo Molnar , Mitchell Levy , "Paul E. McKenney" , "Greg Kroah-Hartman" , Linus Torvalds , "Thomas Gleixner"
Subject: [PATCH v5 03/10] rust: sync: atomic: Add ordering annotation types
Date: Wed, 18 Jun 2025 09:49:27 -0700
Message-Id: <20250618164934.19817-4-boqun.feng@gmail.com>
X-Mailer: git-send-email 2.39.5 (Apple Git-154)
In-Reply-To: <20250618164934.19817-1-boqun.feng@gmail.com>
References: <20250618164934.19817-1-boqun.feng@gmail.com>
Precedence: bulk
X-Mailing-List: linux-kernel@vger.kernel.org
List-Id: List-Subscribe: List-Unsubscribe:
MIME-Version: 1.0
Content-Transfer-Encoding: quoted-printable
Content-Type: text/plain; charset="utf-8"

Preparation for atomic primitives. Instead of a suffix like _acquire, a method parameter along with the corresponding generic parameter will be used to specify the ordering of an atomic operation. For example, an atomic load() can be defined as:

	impl<T> Atomic<T> {
	    pub fn load<O: AcquireOrRelaxed>(&self, _o: O) -> T { ... }
	}

and acquire users would do:

	let r = x.load(Acquire);

relaxed users:

	let r = x.load(Relaxed);

doing the following:

	let r = x.load(Release);

will cause a compiler error.

Compared to suffixes, it's easier to tell what ordering variants an operation has, and it also makes it easier to unify the implementation of all ordering variants in one method via generics. The `IS_RELAXED` and `TYPE` associated consts are for generic functions to pick up the particular implementation specified by an ordering annotation.
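The type-level restriction described above can be demonstrated in standalone Rust. This is a sketch only: the names (`Relaxed`, `Acquire`, `AcquireOrRelaxed`, `IS_RELAXED`) mirror the patch, but the "atomic" is a plain `Cell`-backed stand-in, not the real LKMM mapping.

```rust
// Sketch of ordering annotations as unit types: the generic bound on
// `load()` rejects unsupported orderings at compile time.
use std::cell::Cell;

pub struct Relaxed;
pub struct Acquire;
pub struct Release;

mod internal {
    // Sealed: only the types above can be ordering annotations.
    pub trait OrderingUnit {}
}
impl internal::OrderingUnit for Relaxed {}
impl internal::OrderingUnit for Acquire {}
impl internal::OrderingUnit for Release {}

// Only these two orderings are legal for a load.
pub trait AcquireOrRelaxed: internal::OrderingUnit {
    const IS_RELAXED: bool = false;
}
impl AcquireOrRelaxed for Acquire {}
impl AcquireOrRelaxed for Relaxed {
    const IS_RELAXED: bool = true;
}

pub struct Atomic(Cell<i32>);

impl Atomic {
    pub fn new(v: i32) -> Self { Atomic(Cell::new(v)) }

    // The ordering is passed as a value; the bound does the checking.
    pub fn load<O: AcquireOrRelaxed>(&self, _o: O) -> i32 {
        // A real implementation would dispatch on `O::IS_RELAXED` to pick
        // atomic_read() vs atomic_read_acquire().
        self.0.get()
    }
}

fn main() {
    let x = Atomic::new(42);
    assert_eq!(x.load(Relaxed), 42);
    assert_eq!(x.load(Acquire), 42);
    // let r = x.load(Release); // compile error: `Release: AcquireOrRelaxed` unsatisfied
}
```

The compile-time failure for `load(Release)` is the whole point: an illegal ordering is not a runtime branch but a missing trait impl.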
Signed-off-by: Boqun Feng
Reviewed-by: Alice Ryhl
---
 rust/kernel/sync/atomic.rs          |   3 +
 rust/kernel/sync/atomic/ordering.rs | 106 ++++++++++++++++++++++++++++
 2 files changed, 109 insertions(+)
 create mode 100644 rust/kernel/sync/atomic/ordering.rs

diff --git a/rust/kernel/sync/atomic.rs b/rust/kernel/sync/atomic.rs
index 65e41dba97b7..9fe5d81fc2a9 100644
--- a/rust/kernel/sync/atomic.rs
+++ b/rust/kernel/sync/atomic.rs
@@ -17,3 +17,6 @@ //! [`LKMM`]: srctree/tools/memory-model/

 pub mod ops;
+pub mod ordering;
+
+pub use ordering::{Acquire, Full, Relaxed, Release};
diff --git a/rust/kernel/sync/atomic/ordering.rs b/rust/kernel/sync/atomic/ordering.rs
new file mode 100644
index 000000000000..96757574ed7d
--- /dev/null
+++ b/rust/kernel/sync/atomic/ordering.rs
@@ -0,0 +1,106 @@
+// SPDX-License-Identifier: GPL-2.0
+
+//! Memory orderings.
+//!
+//! The semantics of these orderings follow the [`LKMM`] definitions and rules.
+//!
+//! - [`Acquire`] and [`Release`] are similar to their counterparts in the Rust memory model.
+//! - [`Full`] means "fully-ordered", that is:
+//!   - It provides ordering between all the preceding memory accesses and the annotated operation.
+//!   - It provides ordering between the annotated operation and all the following memory accesses.
+//!   - It provides ordering between all the preceding memory accesses and all the following memory
+//!     accesses.
+//!   - All the orderings are as strong as a full memory barrier (i.e. `smp_mb()`).
+//! - [`Relaxed`] is similar to the counterpart in the Rust memory model, except that dependency
+//!   orderings are also honored in [`LKMM`]. Dependency orderings are described in "DEPENDENCY
+//!   RELATIONS" in [`LKMM`]'s [`explanation`].
+//!
+//! [`LKMM`]: srctree/tools/memory-model/
+//! [`explanation`]: srctree/tools/memory-model/Documentation/explanation.txt
+
+/// The annotation type for relaxed memory ordering.
+pub struct Relaxed;
+
+/// The annotation type for acquire memory ordering.
+pub struct Acquire;
+
+/// The annotation type for release memory ordering.
+pub struct Release;
+
+/// The annotation type for fully-ordered memory ordering.
+pub struct Full;
+
+/// Describes the exact memory ordering.
+pub enum OrderingType {
+    /// Relaxed ordering.
+    Relaxed,
+    /// Acquire ordering.
+    Acquire,
+    /// Release ordering.
+    Release,
+    /// Fully-ordered.
+    Full,
+}
+
+mod internal {
+    /// Unit types for ordering annotation.
+    ///
+    /// Sealed trait, can only be implemented inside the atomic mod.
+    pub trait OrderingUnit {
+        /// Describes the exact memory ordering.
+        const TYPE: super::OrderingType;
+    }
+}
+
+impl internal::OrderingUnit for Relaxed {
+    const TYPE: OrderingType = OrderingType::Relaxed;
+}
+
+impl internal::OrderingUnit for Acquire {
+    const TYPE: OrderingType = OrderingType::Acquire;
+}
+
+impl internal::OrderingUnit for Release {
+    const TYPE: OrderingType = OrderingType::Release;
+}
+
+impl internal::OrderingUnit for Full {
+    const TYPE: OrderingType = OrderingType::Full;
+}
+
+/// The trait bound for annotating operations that should support all orderings.
+pub trait All: internal::OrderingUnit {}
+
+impl All for Relaxed {}
+impl All for Acquire {}
+impl All for Release {}
+impl All for Full {}
+
+/// The trait bound for operations that only support acquire or relaxed ordering.
+pub trait AcquireOrRelaxed: All {
+    /// Describes whether an ordering is relaxed or not.
+    const IS_RELAXED: bool = false;
+}
+
+impl AcquireOrRelaxed for Acquire {}
+
+impl AcquireOrRelaxed for Relaxed {
+    const IS_RELAXED: bool = true;
+}
+
+/// The trait bound for operations that only support release or relaxed ordering.
+pub trait ReleaseOrRelaxed: All {
+    /// Describes whether an ordering is relaxed or not.
+    const IS_RELAXED: bool = false;
+}
+
+impl ReleaseOrRelaxed for Release {}
+
+impl ReleaseOrRelaxed for Relaxed {
+    const IS_RELAXED: bool = true;
+}
+
+/// The trait bound for operations that only support relaxed ordering.
+pub trait RelaxedOnly: AcquireOrRelaxed + ReleaseOrRelaxed + All {}
+
+impl RelaxedOnly for Relaxed {}
--
2.39.5 (Apple Git-154)

From nobody Thu Oct 9 08:47:44 2025
From: Boqun Feng
To: linux-kernel@vger.kernel.org, rust-for-linux@vger.kernel.org, lkmm@lists.linux.dev, linux-arch@vger.kernel.org
Cc: Miguel Ojeda , Alex Gaynor , Boqun Feng , Gary Guo , Björn Roy Baron , Benno Lossin , Andreas Hindborg , Alice Ryhl , Trevor Gross , Danilo Krummrich , Will Deacon , Peter Zijlstra , Mark Rutland , Wedson Almeida Filho , Viresh Kumar , Lyude Paul , Ingo Molnar , Mitchell Levy , "Paul E. McKenney" , "Greg Kroah-Hartman" , Linus Torvalds , "Thomas Gleixner"
Subject: [PATCH v5 04/10] rust: sync: atomic: Add generic atomics
Date: Wed, 18 Jun 2025 09:49:28 -0700
Message-Id: <20250618164934.19817-5-boqun.feng@gmail.com>
X-Mailer: git-send-email 2.39.5 (Apple Git-154)
In-Reply-To: <20250618164934.19817-1-boqun.feng@gmail.com>
References: <20250618164934.19817-1-boqun.feng@gmail.com>
Precedence: bulk
X-Mailing-List: linux-kernel@vger.kernel.org
List-Id: List-Subscribe: List-Unsubscribe:
MIME-Version: 1.0
Content-Transfer-Encoding: quoted-printable
Content-Type: text/plain; charset="utf-8"

To allow Rust code to use LKMM atomics, a generic `Atomic<T>` is added. Currently `T` needs to be `Send + Copy` because these are the straightforward usages and all basic types support this.

The trait `AllowAtomic` should only be implemented inside the atomic mod until the generic atomic framework is mature enough (unless the implementer is a `#[repr(transparent)]` new type).
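The `Atomic<T>`/`AllowAtomic` shape can be sketched in standalone Rust. This is an illustrative stand-in only: the kernel version stores the value in an `Opaque<T>` and forwards to the C atomic helpers, whereas this sketch uses a plain `Cell` so it can run anywhere, and it omits the ordering parameters.

```rust
// Sketch of a generic atomic that funnels every type through a repr type,
// so one generic body serves both widths.
use std::cell::Cell;

// Stand-in for `AtomicImpl` (really `i32`/`i64`, backed by atomic_t/atomic64_t).
pub trait AtomicImpl: Sized + Send + Copy {}
impl AtomicImpl for i32 {}
impl AtomicImpl for i64 {}

/// Types storable in an atomic: must convert losslessly to/from their repr.
pub unsafe trait AllowAtomic: Sized + Send + Copy {
    type Repr: AtomicImpl;
    fn into_repr(self) -> Self::Repr;
    fn from_repr(repr: Self::Repr) -> Self;
}

// Every `AtomicImpl` is trivially its own repr.
unsafe impl<T: AtomicImpl> AllowAtomic for T {
    type Repr = T;
    fn into_repr(self) -> T { self }
    fn from_repr(repr: T) -> T { repr }
}

pub struct Atomic<T: AllowAtomic>(Cell<T::Repr>);

impl<T: AllowAtomic> Atomic<T> {
    pub fn new(v: T) -> Self { Atomic(Cell::new(v.into_repr())) }

    // load()/store() go through the repr, so the same body works for any
    // `T` whose repr is `i32` or `i64`.
    pub fn load(&self) -> T { T::from_repr(self.0.get()) }
    pub fn store(&self, v: T) { self.0.set(v.into_repr()) }
}

fn main() {
    let x = Atomic::new(42i32);
    x.store(7);
    assert_eq!(x.load(), 7);

    let y = Atomic::new(42i64);
    assert_eq!(y.load(), 42);
}
```

The repr indirection is what lets a `#[repr(transparent)]` new type (e.g. over `u32`) reuse the `i32` implementation without `transmute()`.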
`AtomicImpl` types are automatically `AllowAtomic`, and so far only the basic operations load() and store() are introduced.

Signed-off-by: Boqun Feng
Reviewed-by: Alice Ryhl
---
 rust/kernel/sync/atomic.rs         |   2 +
 rust/kernel/sync/atomic/generic.rs | 258 +++++++++++++++++++++++++++++
 2 files changed, 260 insertions(+)
 create mode 100644 rust/kernel/sync/atomic/generic.rs

diff --git a/rust/kernel/sync/atomic.rs b/rust/kernel/sync/atomic.rs
index 9fe5d81fc2a9..a01e44eec380 100644
--- a/rust/kernel/sync/atomic.rs
+++ b/rust/kernel/sync/atomic.rs
@@ -16,7 +16,9 @@ //!
 //! [`LKMM`]: srctree/tools/memory-model/

+pub mod generic;
 pub mod ops;
 pub mod ordering;

+pub use generic::Atomic;
 pub use ordering::{Acquire, Full, Relaxed, Release};
diff --git a/rust/kernel/sync/atomic/generic.rs b/rust/kernel/sync/atomic/generic.rs
new file mode 100644
index 000000000000..73c26f9cf6b8
--- /dev/null
+++ b/rust/kernel/sync/atomic/generic.rs
@@ -0,0 +1,258 @@
+// SPDX-License-Identifier: GPL-2.0
+
+//! Generic atomic primitives.
+
+use super::ops::*;
+use super::ordering::*;
+use crate::types::Opaque;
+
+/// A generic atomic variable.
+///
+/// `T` must impl [`AllowAtomic`], that is, an [`AtomicImpl`] has to be chosen.
+///
+/// # Invariants
+///
+/// Doing an atomic operation while holding a reference of [`Self`] won't cause a data race; this
+/// is guaranteed by the safety requirement of [`Self::from_ptr`] and the extra safety requirement
+/// of the usage on pointers returned by [`Self::as_ptr`].
+#[repr(transparent)]
+pub struct Atomic<T: AllowAtomic>(Opaque<T>);
+
+// SAFETY: `Atomic<T>` is safe to share among execution contexts because all accesses are atomic.
+unsafe impl<T: AllowAtomic> Sync for Atomic<T> {}
+
+/// Atomics that support basic atomic operations.
+///
+/// TODO: Currently the [`AllowAtomic`] types are restricted to basic integer types (and their
+/// transparent new types).
In the future, we could extend the scope to more data types when there
+/// is a clear and meaningful usage, but for now, [`AllowAtomic`] should only be implemented inside
+/// the atomic mod for the restricted types mentioned above.
+///
+/// # Safety
+///
+/// [`Self`] must have the same size and alignment as [`Self::Repr`].
+pub unsafe trait AllowAtomic: Sized + Send + Copy {
+    /// The backing atomic implementation type.
+    type Repr: AtomicImpl;
+
+    /// Converts into a [`Self::Repr`].
+    fn into_repr(self) -> Self::Repr;
+
+    /// Converts from a [`Self::Repr`].
+    fn from_repr(repr: Self::Repr) -> Self;
+}
+
+// An `AtomicImpl` is automatically an `AllowAtomic`.
+//
+// SAFETY: `T::Repr` is `Self` (i.e. `T`), so they have the same size and alignment.
+unsafe impl<T: AtomicImpl> AllowAtomic for T {
+    type Repr = Self;
+
+    fn into_repr(self) -> Self::Repr {
+        self
+    }
+
+    fn from_repr(repr: Self::Repr) -> Self {
+        repr
+    }
+}
+
+impl<T: AllowAtomic> Atomic<T> {
+    /// Creates a new atomic.
+    pub const fn new(v: T) -> Self {
+        Self(Opaque::new(v))
+    }
+
+    /// Creates a reference to [`Self`] from a pointer.
+    ///
+    /// # Safety
+    ///
+    /// - `ptr` has to be a valid pointer.
+    /// - `ptr` has to be valid for both reads and writes for the whole lifetime `'a`.
+    /// - For the whole lifetime of `'a`, other accesses to the object cannot cause data races
+    ///   (defined by [`LKMM`]) against atomic operations on the returned reference.
+    ///
+    /// [`LKMM`]: srctree/tools/memory-model
+    ///
+    /// # Examples
+    ///
+    /// Using [`Atomic::from_ptr()`] combined with [`Atomic::load()`] or [`Atomic::store()`] can
+    /// achieve the same functionality as `READ_ONCE()`/`smp_load_acquire()` or
+    /// `WRITE_ONCE()`/`smp_store_release()` on the C side:
+    ///
+    /// ```rust
+    /// # use kernel::types::Opaque;
+    /// use kernel::sync::atomic::{Atomic, Relaxed, Release};
+    ///
+    /// // Assume there is a C struct `Foo`.
+    /// mod cbindings {
+    ///     #[repr(C)]
+    ///     pub(crate) struct foo { pub(crate) a: i32, pub(crate) b: i32 }
+    /// }
+    ///
+    /// let tmp = Opaque::new(cbindings::foo { a: 1, b: 2 });
+    ///
+    /// // struct foo *foo_ptr = ..;
+    /// let foo_ptr = tmp.get();
+    ///
+    /// // SAFETY: `foo_ptr` is a valid pointer, and `.a` is in bounds.
+    /// let foo_a_ptr = unsafe { core::ptr::addr_of_mut!((*foo_ptr).a) };
+    ///
+    /// // a = READ_ONCE(foo_ptr->a);
+    /// //
+    /// // SAFETY: `foo_a_ptr` is a valid pointer for reads, and all accesses to it are atomic,
+    /// // so no data race.
+    /// let a = unsafe { Atomic::from_ptr(foo_a_ptr) }.load(Relaxed);
+    /// # assert_eq!(a, 1);
+    ///
+    /// // smp_store_release(&foo_ptr->a, 2);
+    /// //
+    /// // SAFETY: `foo_a_ptr` is a valid pointer for writes, and all accesses to it are atomic,
+    /// // so no data race.
+    /// unsafe { Atomic::from_ptr(foo_a_ptr) }.store(2, Release);
+    /// ```
+    ///
+    /// However, this should only be used when communicating with the C side or manipulating a C
+    /// struct.
+    pub unsafe fn from_ptr<'a>(ptr: *mut T) -> &'a Self
+    where
+        T: Sync,
+    {
+        // CAST: `T` is transparent to `Atomic<T>`.
+        // SAFETY: Per the function's safety requirement, `ptr` is a valid pointer and the object
+        // will live long enough. It's safe to return a `&Atomic<T>` because the function's
+        // safety requirement guarantees other accesses won't cause data races.
+        unsafe { &*ptr.cast::<Self>() }
+    }
+
+    /// Returns a pointer to the underlying atomic variable.
+    ///
+    /// Extra safety requirement on using the returned pointer: the operations done via the
+    /// pointer cannot cause data races defined by [`LKMM`].
+    ///
+    /// [`LKMM`]: srctree/tools/memory-model
+    pub const fn as_ptr(&self) -> *mut T {
+        self.0.get()
+    }
+
+    /// Returns a mutable reference to the underlying atomic variable.
+    ///
+    /// This is safe because the mutable reference of the atomic variable guarantees exclusive
+    /// access.
+    pub fn get_mut(&mut self) -> &mut T {
+        // SAFETY: `self.as_ptr()` is a valid pointer to `T`, and the object has already been
+        // initialized. `&mut self` guarantees exclusive access, so it's safe to reborrow
+        // mutably.
+        unsafe { &mut *self.as_ptr() }
+    }
+}
+
+impl<T: AllowAtomic> Atomic<T>
+where
+    T::Repr: AtomicHasBasicOps,
+{
+    /// Loads the value from the atomic variable.
+    ///
+    /// # Examples
+    ///
+    /// Simple usages:
+    ///
+    /// ```rust
+    /// use kernel::sync::atomic::{Atomic, Relaxed};
+    ///
+    /// let x = Atomic::new(42i32);
+    ///
+    /// assert_eq!(42, x.load(Relaxed));
+    ///
+    /// let x = Atomic::new(42i64);
+    ///
+    /// assert_eq!(42, x.load(Relaxed));
+    /// ```
+    ///
+    /// Customized new types in [`Atomic`]:
+    ///
+    /// ```rust
+    /// use kernel::sync::atomic::{generic::AllowAtomic, Atomic, Relaxed};
+    ///
+    /// #[derive(Clone, Copy)]
+    /// #[repr(transparent)]
+    /// struct NewType(u32);
+    ///
+    /// // SAFETY: `NewType` is transparent to `u32`, which has the same size and alignment as
+    /// // `i32`.
+    /// unsafe impl AllowAtomic for NewType {
+    ///     type Repr = i32;
+    ///
+    ///     fn into_repr(self) -> Self::Repr {
+    ///         self.0 as i32
+    ///     }
+    ///
+    ///     fn from_repr(repr: Self::Repr) -> Self {
+    ///         NewType(repr as u32)
+    ///     }
+    /// }
+    ///
+    /// let n = Atomic::new(NewType(0));
+    ///
+    /// assert_eq!(0, n.load(Relaxed).0);
+    /// ```
+    #[doc(alias("atomic_read", "atomic64_read"))]
+    #[inline(always)]
+    pub fn load<Ordering: AcquireOrRelaxed>(&self, _: Ordering) -> T {
+        let a = self.as_ptr().cast::<T::Repr>();
+
+        // SAFETY:
+        // - For calling the atomic_read*() function:
+        //   - `self.as_ptr()` is a valid pointer, and per the safety requirement of
+        //     `AllowAtomic`, a valid `*mut T` is a valid `*mut T::Repr`. Therefore `a` is a
+        //     valid pointer,
+        //   - per the type invariants, the following atomic operation won't cause data races.
+        // - For the extra safety requirement on the usage of pointers returned by
+        //   `self.as_ptr()`:
+        //   - atomic operations are used here.
+        let v = unsafe {
+            if Ordering::IS_RELAXED {
+                T::Repr::atomic_read(a)
+            } else {
+                T::Repr::atomic_read_acquire(a)
+            }
+        };
+
+        T::from_repr(v)
+    }
+
+    /// Stores a value to the atomic variable.
+    ///
+    /// # Examples
+    ///
+    /// ```rust
+    /// use kernel::sync::atomic::{Atomic, Relaxed};
+    ///
+    /// let x = Atomic::new(42i32);
+    ///
+    /// assert_eq!(42, x.load(Relaxed));
+    ///
+    /// x.store(43, Relaxed);
+    ///
+    /// assert_eq!(43, x.load(Relaxed));
+    /// ```
+    #[doc(alias("atomic_set", "atomic64_set"))]
+    #[inline(always)]
+    pub fn store<Ordering: ReleaseOrRelaxed>(&self, v: T, _: Ordering) {
+        let v = T::into_repr(v);
+        let a = self.as_ptr().cast::<T::Repr>();
+
+        // SAFETY:
+        // - For calling the atomic_set*() function:
+        //   - `self.as_ptr()` is a valid pointer, and per the safety requirement of
+        //     `AllowAtomic`, a valid `*mut T` is a valid `*mut T::Repr`. Therefore `a` is a
+        //     valid pointer,
+        //   - per the type invariants, the following atomic operation won't cause data races.
+        // - For the extra safety requirement on the usage of pointers returned by
+        //   `self.as_ptr()`:
+        //   - atomic operations are used here.
+        unsafe {
+            if Ordering::IS_RELAXED {
+                T::Repr::atomic_set(a, v)
+            } else {
+                T::Repr::atomic_set_release(a, v)
+            }
+        };
+    }
+}
-- 
2.39.5 (Apple Git-154)
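[Editor's aside, not part of the patch: the `kernel` crate above only builds inside the kernel tree, but the load()/store() ordering split it introduces can be sketched in userspace with `std::sync::atomic`. The function names `read_once` and `store_release` below are illustrative stand-ins, assuming the kernel semantics map onto relaxed loads and release stores as the patch's doc comments describe.]

```rust
use std::sync::atomic::{AtomicI32, Ordering};

// Userspace stand-in for `Atomic<i32>::load(Relaxed)`: a plain relaxed
// load, analogous to READ_ONCE() on the C side.
fn read_once(a: &AtomicI32) -> i32 {
    a.load(Ordering::Relaxed)
}

// Userspace stand-in for `Atomic<i32>::store(v, Release)`: a release
// store, analogous to smp_store_release() on the C side.
fn store_release(a: &AtomicI32, v: i32) {
    a.store(v, Ordering::Release)
}

fn main() {
    let x = AtomicI32::new(1);
    assert_eq!(read_once(&x), 1);
    store_release(&x, 2);
    assert_eq!(read_once(&x), 2);
}
```

The design choice mirrored here is that the ordering is a type parameter selected at the call site, so the relaxed/acquire (or relaxed/release) branch is resolved at compile time rather than at runtime.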
From: Boqun Feng
To: linux-kernel@vger.kernel.org, rust-for-linux@vger.kernel.org, lkmm@lists.linux.dev, linux-arch@vger.kernel.org
Cc: Miguel Ojeda, Alex Gaynor, Boqun Feng, Gary Guo, Björn Roy Baron, Benno Lossin, Andreas Hindborg, Alice Ryhl, Trevor Gross, Danilo Krummrich, Will Deacon, Peter Zijlstra, Mark Rutland, Wedson Almeida Filho, Viresh Kumar, Lyude Paul, Ingo Molnar, Mitchell Levy, "Paul E. McKenney", Greg Kroah-Hartman, Linus Torvalds, Thomas Gleixner
Subject: [PATCH v5 05/10] rust: sync: atomic: Add atomic {cmp,}xchg operations
Date: Wed, 18 Jun 2025 09:49:29 -0700
Message-Id: <20250618164934.19817-6-boqun.feng@gmail.com>
In-Reply-To: <20250618164934.19817-1-boqun.feng@gmail.com>
References: <20250618164934.19817-1-boqun.feng@gmail.com>

xchg() and cmpxchg() are basic operations on atomics. Provide these based on the C APIs. Note that cmpxchg() uses a function signature similar to compare_exchange() in Rust std: it returns a `Result`, where `Ok(old)` means the operation succeeded and `Err(old)` means the operation failed.
Signed-off-by: Boqun Feng
Reviewed-by: Alice Ryhl
---
 rust/kernel/sync/atomic/generic.rs | 154 +++++++++++++++++++++++++++++
 1 file changed, 154 insertions(+)

diff --git a/rust/kernel/sync/atomic/generic.rs b/rust/kernel/sync/atomic/generic.rs
index 73c26f9cf6b8..bcdbeea45dd8 100644
--- a/rust/kernel/sync/atomic/generic.rs
+++ b/rust/kernel/sync/atomic/generic.rs
@@ -256,3 +256,157 @@ pub fn store<Ordering: ReleaseOrRelaxed>(&self, v: T, _: Ordering) {
         };
     }
 }
+
+impl<T: AllowAtomic> Atomic<T>
+where
+    T::Repr: AtomicHasXchgOps,
+{
+    /// Atomic exchange.
+    ///
+    /// # Examples
+    ///
+    /// ```rust
+    /// use kernel::sync::atomic::{Atomic, Acquire, Relaxed};
+    ///
+    /// let x = Atomic::new(42);
+    ///
+    /// assert_eq!(42, x.xchg(52, Acquire));
+    /// assert_eq!(52, x.load(Relaxed));
+    /// ```
+    #[doc(alias("atomic_xchg", "atomic64_xchg"))]
+    #[inline(always)]
+    pub fn xchg<Ordering: All>(&self, v: T, _: Ordering) -> T {
+        let v = T::into_repr(v);
+        let a = self.as_ptr().cast::<T::Repr>();
+
+        // SAFETY:
+        // - For calling the atomic_xchg*() function:
+        //   - `self.as_ptr()` is a valid pointer, and per the safety requirement of
+        //     `AllowAtomic`, a valid `*mut T` is a valid `*mut T::Repr`. Therefore `a` is a
+        //     valid pointer,
+        //   - per the type invariants, the following atomic operation won't cause data races.
+        // - For the extra safety requirement on the usage of pointers returned by
+        //   `self.as_ptr()`:
+        //   - atomic operations are used here.
+        let ret = unsafe {
+            match Ordering::TYPE {
+                OrderingType::Full => T::Repr::atomic_xchg(a, v),
+                OrderingType::Acquire => T::Repr::atomic_xchg_acquire(a, v),
+                OrderingType::Release => T::Repr::atomic_xchg_release(a, v),
+                OrderingType::Relaxed => T::Repr::atomic_xchg_relaxed(a, v),
+            }
+        };
+
+        T::from_repr(ret)
+    }
+
+    /// Atomic compare and exchange.
+    ///
+    /// Compare: The comparison is done via a byte-level comparison between the atomic variable
+    /// and the `old` value.
+    ///
+    /// Ordering: When it succeeds, the operation provides the ordering indicated by the
+    /// `Ordering` type parameter; a failed one doesn't provide any ordering, and the read part
+    /// of a failed cmpxchg should be treated as a relaxed read.
+    ///
+    /// Returns `Ok(value)` if cmpxchg succeeds, in which case `value` is guaranteed to be equal
+    /// to `old`; otherwise returns `Err(value)`, where `value` is the value of the atomic
+    /// variable when cmpxchg was happening.
+    ///
+    /// # Examples
+    ///
+    /// ```rust
+    /// use kernel::sync::atomic::{Atomic, Full, Relaxed};
+    ///
+    /// let x = Atomic::new(42);
+    ///
+    /// // Checks whether cmpxchg succeeded.
+    /// let success = x.cmpxchg(52, 64, Relaxed).is_ok();
+    /// # assert!(!success);
+    ///
+    /// // Checks whether cmpxchg failed.
+    /// let failure = x.cmpxchg(52, 64, Relaxed).is_err();
+    /// # assert!(failure);
+    ///
+    /// // Uses the old value if it failed, probably to re-try cmpxchg.
+    /// match x.cmpxchg(52, 64, Relaxed) {
+    ///     Ok(_) => { },
+    ///     Err(old) => {
+    ///         // do something with `old`.
+    ///         # assert_eq!(old, 42);
+    ///     }
+    /// }
+    ///
+    /// // Uses the latest value regardless, same as atomic_cmpxchg() in C.
+    /// let latest = x.cmpxchg(42, 64, Full).unwrap_or_else(|old| old);
+    /// # assert_eq!(42, latest);
+    /// assert_eq!(64, x.load(Relaxed));
+    /// ```
+    #[doc(alias(
+        "atomic_cmpxchg",
+        "atomic64_cmpxchg",
+        "atomic_try_cmpxchg",
+        "atomic64_try_cmpxchg"
+    ))]
+    #[inline(always)]
+    pub fn cmpxchg<Ordering: All>(&self, mut old: T, new: T, o: Ordering) -> Result<T, T> {
+        // Note on code generation:
+        //
+        // try_cmpxchg() is used to implement cmpxchg(), and if the helper functions are inlined,
+        // the compiler is able to figure out that the branch is not needed if the users don't
+        // care about whether the operation succeeds or not.
+        // One exception is on x86: due to commit 44fe84459faf ("locking/atomic: Fix
+        // atomic_try_cmpxchg() semantics"), the atomic_try_cmpxchg() on x86 has a branch even if
+        // the caller doesn't care about the success of cmpxchg and only wants to use the old
+        // value. For example, for code like:
+        //
+        //     let latest = x.cmpxchg(42, 64, Full).unwrap_or_else(|old| old);
+        //
+        // it will still generate code:
+        //
+        //     movl     $0x40, %ecx
+        //     movl     $0x34, %eax
+        //     lock
+        //     cmpxchgl %ecx, 0x4(%rsp)
+        //     jne      1f
+        //     2:
+        //     ...
+        //     1:  movl %eax, %ecx
+        //         jmp  2b
+        //
+        // This might be "fixed" by introducing a try_cmpxchg_exclusive() that knows the "*old"
+        // location in the C function is always safe to write to.
+        if self.try_cmpxchg(&mut old, new, o) {
+            Ok(old)
+        } else {
+            Err(old)
+        }
+    }
+
+    /// Atomic compare and exchange, returning whether the operation succeeds.
+    ///
+    /// The "Compare" and "Ordering" parts are the same as in [`Atomic::cmpxchg()`].
+    ///
+    /// Returns `true` if the cmpxchg succeeds; otherwise returns `false` with `old` updated to
+    /// the value of the atomic variable when cmpxchg was happening.
+    #[inline(always)]
+    fn try_cmpxchg<Ordering: All>(&self, old: &mut T, new: T, _: Ordering) -> bool {
+        let old = (old as *mut T).cast::<T::Repr>();
+        let new = T::into_repr(new);
+        let a = self.0.get().cast::<T::Repr>();
+
+        // SAFETY:
+        // - For calling the atomic_try_cmpxchg*() function:
+        //   - `self.as_ptr()` is a valid pointer, and per the safety requirement of
+        //     `AllowAtomic`, a valid `*mut T` is a valid `*mut T::Repr`. Therefore `a` is a
+        //     valid pointer,
+        //   - per the type invariants, the following atomic operation won't cause data races.
+        //   - `old` is a valid pointer to write to because it comes from a mutable reference.
+        // - For the extra safety requirement on the usage of pointers returned by
+        //   `self.as_ptr()`:
+        //   - atomic operations are used here.
+        unsafe {
+            match Ordering::TYPE {
+                OrderingType::Full => T::Repr::atomic_try_cmpxchg(a, old, new),
+                OrderingType::Acquire => T::Repr::atomic_try_cmpxchg_acquire(a, old, new),
+                OrderingType::Release => T::Repr::atomic_try_cmpxchg_release(a, old, new),
+                OrderingType::Relaxed => T::Repr::atomic_try_cmpxchg_relaxed(a, old, new),
+            }
+        }
+    }
+}
-- 
2.39.5 (Apple Git-154)
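[Editor's aside, not part of the patch: the `Ok(old)`/`Err(old)` contract of the patch's cmpxchg() matches std's `compare_exchange`, which can be used as a userspace stand-in. The free function `cmpxchg` below is a hypothetical sketch, assuming the kernel's success ordering is approximated by `AcqRel` and that a failed CAS is a relaxed read, as the patch's doc comment states.]

```rust
use std::sync::atomic::{AtomicI32, Ordering};

// Sketch of the kernel cmpxchg() contract on top of std:
// Ok(old) on success (old == expected), Err(actual) on failure.
fn cmpxchg(a: &AtomicI32, old: i32, new: i32) -> Result<i32, i32> {
    // A failed cmpxchg provides no ordering: the failure ordering is
    // Relaxed, matching "the read part of a failed cmpxchg should be
    // treated as a relaxed read" in the doc above.
    a.compare_exchange(old, new, Ordering::AcqRel, Ordering::Relaxed)
}

fn main() {
    let x = AtomicI32::new(42);
    // Mismatch (42 != 52): the value is unchanged, Err carries 42.
    assert_eq!(cmpxchg(&x, 52, 64), Err(42));
    // Match: 64 is swapped in, Ok carries the old value 42.
    assert_eq!(cmpxchg(&x, 42, 64), Ok(42));
    assert_eq!(x.load(Ordering::Relaxed), 64);
}
```

Note how `unwrap_or_else(|old| old)` on this `Result` recovers the C `atomic_cmpxchg()` behavior of always yielding the pre-operation value.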
From: Boqun Feng
To: linux-kernel@vger.kernel.org, rust-for-linux@vger.kernel.org, lkmm@lists.linux.dev, linux-arch@vger.kernel.org
Cc: Miguel Ojeda, Alex Gaynor, Boqun Feng, Gary Guo, Björn Roy Baron, Benno Lossin, Andreas Hindborg, Alice Ryhl, Trevor Gross, Danilo Krummrich, Will Deacon, Peter Zijlstra, Mark Rutland, Wedson Almeida Filho, Viresh Kumar, Lyude Paul, Ingo Molnar, Mitchell Levy, "Paul E. McKenney", Greg Kroah-Hartman, Linus Torvalds, Thomas Gleixner
Subject: [PATCH v5 06/10] rust: sync: atomic: Add the framework of arithmetic operations
Date: Wed, 18 Jun 2025 09:49:30 -0700
Message-Id: <20250618164934.19817-7-boqun.feng@gmail.com>
In-Reply-To: <20250618164934.19817-1-boqun.feng@gmail.com>
References: <20250618164934.19817-1-boqun.feng@gmail.com>

One important set of atomic operations is the arithmetic operations, i.e. add(), sub(), fetch_add(), add_return(), etc.
However, it may not make sense for all types that implement `AllowAtomic` to have arithmetic operations; for example, a `Foo(u32)` may not have a reasonable add() or sub(). Besides, subword types (`u8` and `u16`) currently don't have atomic arithmetic operations even on the C side and might not have them in the future in Rust (because they are usually suboptimal on a few architectures). Therefore add a subtrait of `AllowAtomic` describing which types have and can do atomic arithmetic operations.

A few things about this `AllowAtomicArithmetic` trait:

* It has an associated type `Delta` instead of using `AllowAtomic::Repr` because a `Bar(u32)` (whose `Repr` is `i32`) may not want an `add(&self, i32)`, but an `add(&self, u32)`.

* `AtomicImpl` types already implement an `AtomicHasArithmeticOps` trait, so add a blanket implementation for them. In the future, `i8` and `i16` may impl `AtomicImpl` but not `AtomicHasArithmeticOps` if arithmetic operations are not available.

Only add() and fetch_add() are added. The rest will be added in the future.

Signed-off-by: Boqun Feng
Reviewed-by: Alice Ryhl
---
 rust/kernel/sync/atomic/generic.rs | 101 +++++++++++++++++++++++++++++
 1 file changed, 101 insertions(+)

diff --git a/rust/kernel/sync/atomic/generic.rs b/rust/kernel/sync/atomic/generic.rs
index bcdbeea45dd8..8c5bd90b2619 100644
--- a/rust/kernel/sync/atomic/generic.rs
+++ b/rust/kernel/sync/atomic/generic.rs
@@ -57,6 +57,23 @@ fn from_repr(repr: Self::Repr) -> Self {
     }
 }

+/// Atomics that allow arithmetic operations with an integer type.
+pub trait AllowAtomicArithmetic: AllowAtomic {
+    /// The delta type for arithmetic operations.
+    type Delta;
+
+    /// Converts [`Self::Delta`] into the representation of the atomic type.
+    fn delta_into_repr(d: Self::Delta) -> Self::Repr;
+}
+
+impl<T: AtomicImpl> AllowAtomicArithmetic for T {
+    type Delta = Self;
+
+    fn delta_into_repr(d: Self::Delta) -> Self::Repr {
+        d
+    }
+}
+
 impl<T: AllowAtomic> Atomic<T> {
     /// Creates a new atomic.
     pub const fn new(v: T) -> Self {
@@ -410,3 +427,87 @@ fn try_cmpxchg<Ordering: All>(&self, old: &mut T, new: T, _: Ordering) -> bool {
         }
     }
 }
+
+impl<T: AllowAtomicArithmetic> Atomic<T>
+where
+    T::Repr: AtomicHasArithmeticOps,
+{
+    /// Atomic add.
+    ///
+    /// The addition is a wrapping addition.
+    ///
+    /// # Examples
+    ///
+    /// ```rust
+    /// use kernel::sync::atomic::{Atomic, Relaxed};
+    ///
+    /// let x = Atomic::new(42);
+    ///
+    /// assert_eq!(42, x.load(Relaxed));
+    ///
+    /// x.add(12, Relaxed);
+    ///
+    /// assert_eq!(54, x.load(Relaxed));
+    /// ```
+    #[inline(always)]
+    pub fn add<Ordering: RelaxedOnly>(&self, v: T::Delta, _: Ordering) {
+        let v = T::delta_into_repr(v);
+        let a = self.as_ptr().cast::<T::Repr>();
+
+        // SAFETY:
+        // - For calling the atomic_add() function:
+        //   - `self.as_ptr()` is a valid pointer, and per the safety requirement of
+        //     `AllowAtomic`, a valid `*mut T` is a valid `*mut T::Repr`. Therefore `a` is a
+        //     valid pointer,
+        //   - per the type invariants, the following atomic operation won't cause data races.
+        // - For the extra safety requirement on the usage of pointers returned by
+        //   `self.as_ptr()`:
+        //   - atomic operations are used here.
+        unsafe {
+            T::Repr::atomic_add(a, v);
+        }
+    }
+
+    /// Atomic fetch and add.
+    ///
+    /// The addition is a wrapping addition.
+    ///
+    /// # Examples
+    ///
+    /// ```rust
+    /// use kernel::sync::atomic::{Atomic, Acquire, Full, Relaxed};
+    ///
+    /// let x = Atomic::new(42);
+    ///
+    /// assert_eq!(42, x.load(Relaxed));
+    ///
+    /// assert_eq!(54, { x.fetch_add(12, Acquire); x.load(Relaxed) });
+    ///
+    /// let x = Atomic::new(42);
+    ///
+    /// assert_eq!(42, x.load(Relaxed));
+    ///
+    /// assert_eq!(54, { x.fetch_add(12, Full); x.load(Relaxed) });
+    /// ```
+    #[inline(always)]
+    pub fn fetch_add<Ordering: All>(&self, v: T::Delta, _: Ordering) -> T {
+        let v = T::delta_into_repr(v);
+        let a = self.as_ptr().cast::<T::Repr>();
+
+        // SAFETY:
+        // - For calling the atomic_fetch_add*() function:
+        //   - `self.as_ptr()` is a valid pointer, and per the safety requirement of
+        //     `AllowAtomic`, a valid `*mut T` is a valid `*mut T::Repr`. Therefore `a` is a
+        //     valid pointer,
+        //   - per the type invariants, the following atomic operation won't cause data races.
+        // - For the extra safety requirement on the usage of pointers returned by
+        //   `self.as_ptr()`:
+        //   - atomic operations are used here.
+        let ret = unsafe {
+            match Ordering::TYPE {
+                OrderingType::Full => T::Repr::atomic_fetch_add(a, v),
+                OrderingType::Acquire => T::Repr::atomic_fetch_add_acquire(a, v),
+                OrderingType::Release => T::Repr::atomic_fetch_add_release(a, v),
+                OrderingType::Relaxed => T::Repr::atomic_fetch_add_relaxed(a, v),
+            }
+        };
+
+        T::from_repr(ret)
+    }
+}
-- 
2.39.5 (Apple Git-154)
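[Editor's aside, not part of the patch: the `Delta` associated type above lets a newtype whose `Repr` is `i32` accept a `u32` delta. The toy `Counter` type and `fetch_add_sim` helper below are illustrative only; they simulate (non-atomically, on a plain `i32` representation) how a delta is converted into the representation and how the addition wraps.]

```rust
// Toy model of the `AllowAtomicArithmetic::Delta` idea: `Counter` wraps a
// `u32`, is represented as `i32`, but takes a `u32` delta.
#[derive(Clone, Copy, PartialEq, Debug)]
struct Counter(u32);

impl Counter {
    fn into_repr(self) -> i32 { self.0 as i32 }
    fn from_repr(r: i32) -> Self { Counter(r as u32) }
    // `Delta = u32`: the delta is converted into the i32 representation.
    fn delta_into_repr(d: u32) -> i32 { d as i32 }
}

// Non-atomic simulation of fetch_add(): returns the old value and applies
// a wrapping addition on the representation, as the doc comment requires.
fn fetch_add_sim(repr: &mut i32, d: u32) -> Counter {
    let old = *repr;
    *repr = repr.wrapping_add(Counter::delta_into_repr(d));
    Counter::from_repr(old)
}

fn main() {
    let mut repr = Counter(40).into_repr();
    let old = fetch_add_sim(&mut repr, 2);
    assert_eq!(old, Counter(40));                 // fetch returns the old value
    assert_eq!(Counter::from_repr(repr), Counter(42)); // delta applied, wrapping
}
```

This is why the trait carries `delta_into_repr()` rather than reusing `into_repr()`: the delta type and the atomic's value type need not coincide.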
From: Boqun Feng
To: linux-kernel@vger.kernel.org, rust-for-linux@vger.kernel.org, lkmm@lists.linux.dev, linux-arch@vger.kernel.org
Cc: Miguel Ojeda, Alex Gaynor, Boqun Feng, Gary Guo, Björn Roy Baron, Benno Lossin, Andreas Hindborg, Alice Ryhl, Trevor Gross, Danilo Krummrich, Will Deacon, Peter Zijlstra, Mark Rutland, Wedson Almeida Filho, Viresh Kumar, Lyude Paul, Ingo Molnar, Mitchell Levy, "Paul E. McKenney", Greg Kroah-Hartman, Linus Torvalds, Thomas Gleixner
Subject: [PATCH v5 07/10] rust: sync: atomic: Add Atomic<u{32,64}>
Date: Wed, 18 Jun 2025 09:49:31 -0700
Message-Id: <20250618164934.19817-8-boqun.feng@gmail.com>
In-Reply-To: <20250618164934.19817-1-boqun.feng@gmail.com>
References: <20250618164934.19817-1-boqun.feng@gmail.com>

Add generic atomic support for basic unsigned types that have an
`AtomicImpl` with the same size and alignment. Unit tests are added,
including `Atomic<u32>` and `Atomic<u64>`.
Signed-off-by: Boqun Feng
Reviewed-by: Alice Ryhl
Reviewed-by: Andreas Hindborg
---
 rust/kernel/sync/atomic.rs | 111 +++++++++++++++++++++++++++++++++++++
 1 file changed, 111 insertions(+)

diff --git a/rust/kernel/sync/atomic.rs b/rust/kernel/sync/atomic.rs
index a01e44eec380..965a3db554d9 100644
--- a/rust/kernel/sync/atomic.rs
+++ b/rust/kernel/sync/atomic.rs
@@ -22,3 +22,114 @@
 
 pub use generic::Atomic;
 pub use ordering::{Acquire, Full, Relaxed, Release};
+
+// SAFETY: `u64` and `i64` have the same size and alignment.
+unsafe impl generic::AllowAtomic for u64 {
+    type Repr = i64;
+
+    fn into_repr(self) -> Self::Repr {
+        self as Self::Repr
+    }
+
+    fn from_repr(repr: Self::Repr) -> Self {
+        repr as Self
+    }
+}
+
+impl generic::AllowAtomicArithmetic for u64 {
+    type Delta = u64;
+
+    fn delta_into_repr(d: Self::Delta) -> Self::Repr {
+        d as Self::Repr
+    }
+}
+
+// SAFETY: `u32` and `i32` have the same size and alignment.
+unsafe impl generic::AllowAtomic for u32 {
+    type Repr = i32;
+
+    fn into_repr(self) -> Self::Repr {
+        self as Self::Repr
+    }
+
+    fn from_repr(repr: Self::Repr) -> Self {
+        repr as Self
+    }
+}
+
+impl generic::AllowAtomicArithmetic for u32 {
+    type Delta = u32;
+
+    fn delta_into_repr(d: Self::Delta) -> Self::Repr {
+        d as Self::Repr
+    }
+}
+
+use crate::macros::kunit_tests;
+
+#[kunit_tests(rust_atomics)]
+mod tests {
+    use super::*;
+
+    // Call $fn($val) with each $type of $val.
+    macro_rules!
for_each_type {
+        ($val:literal in [$($type:ty),*] $fn:expr) => {
+            $({
+                let v: $type = $val;
+
+                $fn(v);
+            })*
+        }
+    }
+
+    #[test]
+    fn atomic_basic_tests() {
+        for_each_type!(42 in [i32, i64, u32, u64] |v| {
+            let x = Atomic::new(v);
+
+            assert_eq!(v, x.load(Relaxed));
+        });
+    }
+
+    #[test]
+    fn atomic_xchg_tests() {
+        for_each_type!(42 in [i32, i64, u32, u64] |v| {
+            let x = Atomic::new(v);
+
+            let old = v;
+            let new = v + 1;
+
+            assert_eq!(old, x.xchg(new, Full));
+            assert_eq!(new, x.load(Relaxed));
+        });
+    }
+
+    #[test]
+    fn atomic_cmpxchg_tests() {
+        for_each_type!(42 in [i32, i64, u32, u64] |v| {
+            let x = Atomic::new(v);
+
+            let old = v;
+            let new = v + 1;
+
+            assert_eq!(Err(old), x.cmpxchg(new, new, Full));
+            assert_eq!(old, x.load(Relaxed));
+            assert_eq!(Ok(old), x.cmpxchg(old, new, Relaxed));
+            assert_eq!(new, x.load(Relaxed));
+        });
+    }
+
+    #[test]
+    fn atomic_arithmetic_tests() {
+        for_each_type!(42 in [i32, i64, u32, u64] |v| {
+            let x = Atomic::new(v);
+
+            assert_eq!(v, x.fetch_add(12, Full));
+            assert_eq!(v + 12, x.load(Relaxed));
+
+            x.add(13, Relaxed);
+
+            assert_eq!(v + 25, x.load(Relaxed));
+        });
+    }
+}
-- 
2.39.5 (Apple Git-154)
From: Boqun Feng
To: linux-kernel@vger.kernel.org, rust-for-linux@vger.kernel.org, lkmm@lists.linux.dev, linux-arch@vger.kernel.org
Cc: Miguel Ojeda, Alex Gaynor, Boqun Feng, Gary Guo, Björn Roy Baron, Benno Lossin, Andreas Hindborg, Alice Ryhl, Trevor Gross, Danilo Krummrich, Will Deacon, Peter Zijlstra, Mark Rutland, Wedson Almeida Filho, Viresh Kumar, Lyude Paul, Ingo Molnar, Mitchell Levy, "Paul E. McKenney", Greg Kroah-Hartman, Linus Torvalds, Thomas Gleixner
Subject: [PATCH v5 08/10] rust: sync: atomic: Add Atomic<{usize,isize}>
Date: Wed, 18 Jun 2025 09:49:32 -0700
Message-Id: <20250618164934.19817-9-boqun.feng@gmail.com>
In-Reply-To: <20250618164934.19817-1-boqun.feng@gmail.com>
References: <20250618164934.19817-1-boqun.feng@gmail.com>

Add generic atomic support for `usize` and `isize`. Note that instead of
mapping directly to `atomic_long_t`, the representation type
(`AllowAtomic::Repr`) is selected based on CONFIG_64BIT. This reduces the
need to create `atomic_long_*` helpers, which could save kernel binary
size if inline helpers are not available.

Signed-off-by: Boqun Feng
Reviewed-by: Alice Ryhl
Reviewed-by: Andreas Hindborg
---
 rust/kernel/sync/atomic.rs | 58 +++++++++++++++++++++++++++++++++++---
 1 file changed, 54 insertions(+), 4 deletions(-)

diff --git a/rust/kernel/sync/atomic.rs b/rust/kernel/sync/atomic.rs
index 965a3db554d9..829511f4d582 100644
--- a/rust/kernel/sync/atomic.rs
+++ b/rust/kernel/sync/atomic.rs
@@ -65,6 +65,56 @@ fn delta_into_repr(d: Self::Delta) -> Self::Repr {
     }
 }
 
+// SAFETY: `usize` has the same size and alignment as `i64` for 64bit and the same as
+// `i32` for 32bit.
+unsafe impl generic::AllowAtomic for usize {
+    #[cfg(CONFIG_64BIT)]
+    type Repr = i64;
+    #[cfg(not(CONFIG_64BIT))]
+    type Repr = i32;
+
+    fn into_repr(self) -> Self::Repr {
+        self as Self::Repr
+    }
+
+    fn from_repr(repr: Self::Repr) -> Self {
+        repr as Self
+    }
+}
+
+impl generic::AllowAtomicArithmetic for usize {
+    type Delta = usize;
+
+    fn delta_into_repr(d: Self::Delta) -> Self::Repr {
+        d as Self::Repr
+    }
+}
+
+// SAFETY: `isize` has the same size and alignment as `i64` for 64bit and the same as
+// `i32` for 32bit.
+unsafe impl generic::AllowAtomic for isize {
+    #[cfg(CONFIG_64BIT)]
+    type Repr = i64;
+    #[cfg(not(CONFIG_64BIT))]
+    type Repr = i32;
+
+    fn into_repr(self) -> Self::Repr {
+        self as Self::Repr
+    }
+
+    fn from_repr(repr: Self::Repr) -> Self {
+        repr as Self
+    }
+}
+
+impl generic::AllowAtomicArithmetic for isize {
+    type Delta = isize;
+
+    fn delta_into_repr(d: Self::Delta) -> Self::Repr {
+        d as Self::Repr
+    }
+}
+
 use crate::macros::kunit_tests;
 
 #[kunit_tests(rust_atomics)]
@@ -84,7 +134,7 @@ macro_rules!
for_each_type {
 
     #[test]
     fn atomic_basic_tests() {
-        for_each_type!(42 in [i32, i64, u32, u64] |v| {
+        for_each_type!(42 in [i32, i64, u32, u64, isize, usize] |v| {
             let x = Atomic::new(v);
 
             assert_eq!(v, x.load(Relaxed));
@@ -93,7 +143,7 @@ fn atomic_basic_tests() {
 
     #[test]
     fn atomic_xchg_tests() {
-        for_each_type!(42 in [i32, i64, u32, u64] |v| {
+        for_each_type!(42 in [i32, i64, u32, u64, isize, usize] |v| {
             let x = Atomic::new(v);
 
             let old = v;
@@ -106,7 +156,7 @@ fn atomic_xchg_tests() {
 
     #[test]
     fn atomic_cmpxchg_tests() {
-        for_each_type!(42 in [i32, i64, u32, u64] |v| {
+        for_each_type!(42 in [i32, i64, u32, u64, isize, usize] |v| {
             let x = Atomic::new(v);
 
             let old = v;
@@ -121,7 +171,7 @@ fn atomic_cmpxchg_tests() {
 
     #[test]
     fn atomic_arithmetic_tests() {
-        for_each_type!(42 in [i32, i64, u32, u64] |v| {
+        for_each_type!(42 in [i32, i64, u32, u64, isize, usize] |v| {
             let x = Atomic::new(v);
 
             assert_eq!(v, x.fetch_add(12, Full));
-- 
2.39.5 (Apple Git-154)
From: Boqun Feng
To: linux-kernel@vger.kernel.org, rust-for-linux@vger.kernel.org, lkmm@lists.linux.dev, linux-arch@vger.kernel.org
Cc: Miguel Ojeda, Alex Gaynor, Boqun Feng, Gary Guo, Björn Roy Baron, Benno Lossin, Andreas Hindborg, Alice Ryhl, Trevor Gross, Danilo Krummrich, Will Deacon, Peter Zijlstra, Mark Rutland, Wedson Almeida Filho, Viresh Kumar, Lyude Paul, Ingo Molnar, Mitchell Levy, "Paul E. McKenney", Greg Kroah-Hartman, Linus Torvalds, Thomas Gleixner
Subject: [PATCH v5 09/10] rust: sync: atomic: Add Atomic<*mut T>
Date: Wed, 18 Jun 2025 09:49:33 -0700
Message-Id: <20250618164934.19817-10-boqun.feng@gmail.com>
In-Reply-To: <20250618164934.19817-1-boqun.feng@gmail.com>
References: <20250618164934.19817-1-boqun.feng@gmail.com>

Add atomic support for raw pointer values. As with `isize` and `usize`,
the representation type is selected based on CONFIG_64BIT.

`*mut T` is not `Send`, however `Atomic<*mut T>` definitely needs to be
`Sync`, and that's the whole point of atomics: being able to have multiple
shared references in different threads so that they can sync with each
other. As a result, a pointer value will be transferred from one thread to
another via `Atomic<*mut T>`:

	x.store(p1, Relaxed);
	let p = x.load(Relaxed);

This means a raw pointer value (`*mut T`) needs to be able to transfer
across thread boundaries, which is essentially `Send`.
To reflect this in the type system, and based on the fact that pointer
values can be transferred safely (only dereferencing them is unsafe), as
suggested by Alice, extend the `AllowAtomic` trait to include a custom
`Send` semantic, that is: an `impl AllowAtomic` has to be safe to
transfer across thread boundaries.

Suggested-by: Alice Ryhl
Signed-off-by: Boqun Feng
Reviewed-by: Alice Ryhl
---
 rust/kernel/sync/atomic.rs         | 48 ++++++++++++++++++++++++++++++
 rust/kernel/sync/atomic/generic.rs | 16 ++++++++--
 2 files changed, 61 insertions(+), 3 deletions(-)

diff --git a/rust/kernel/sync/atomic.rs b/rust/kernel/sync/atomic.rs
index 829511f4d582..70920146935f 100644
--- a/rust/kernel/sync/atomic.rs
+++ b/rust/kernel/sync/atomic.rs
@@ -114,6 +114,22 @@ fn delta_into_repr(d: Self::Delta) -> Self::Repr {
         d as Self::Repr
     }
 }
+
+// SAFETY: A `*mut T` has the same size and alignment as `i64` for 64bit and the same
+// as `i32` for 32bit. And it's safe to transfer the ownership of a pointer value to
+// another thread.
+unsafe impl<T> generic::AllowAtomic for *mut T {
+    #[cfg(CONFIG_64BIT)]
+    type Repr = i64;
+    #[cfg(not(CONFIG_64BIT))]
+    type Repr = i32;
+
+    fn into_repr(self) -> Self::Repr {
+        self as Self::Repr
+    }
+
+    fn from_repr(repr: Self::Repr) -> Self {
+        repr as Self
+    }
+}
 
 use crate::macros::kunit_tests;
 
@@ -139,6 +155,9 @@ fn atomic_basic_tests() {
 
             assert_eq!(v, x.load(Relaxed));
         });
+
+        let x = Atomic::new(core::ptr::null_mut::<i32>());
+        assert!(x.load(Relaxed).is_null());
     }
 
     #[test]
@@ -182,4 +201,33 @@ fn atomic_arithmetic_tests() {
             assert_eq!(v + 25, x.load(Relaxed));
         });
     }
+
+    #[test]
+    fn atomic_ptr_tests() -> crate::error::Result {
+        use crate::alloc::{flags::GFP_KERNEL, KBox};
+        use core::ptr;
+
+        let x = Atomic::new(ptr::null_mut::<i32>());
+
+        assert!(x.load(Relaxed).is_null());
+
+        let new = KBox::new(42, GFP_KERNEL)?;
+        x.store(ptr::from_mut(KBox::leak(new)), Release);
+
+        let ptr = x.load(Relaxed);
+        assert!(!ptr.is_null());
+
+        // SAFETY: `ptr` is a valid pointer from `KBox::leak()` and the address dependency
+        // guarantees observation of the initialization of `KBox`.
+        assert_eq!(42, unsafe { ptr.read_volatile() });
+
+        x.xchg(ptr::null_mut(), Relaxed);
+        assert!(x.load(Relaxed).is_null());
+
+        // SAFETY: `ptr` is a valid pointer from `KBox::leak()` and no one is currently
+        // referencing the pointer, so it's safe to convert the ownership back to a `KBox`.
+        drop(unsafe { KBox::from_raw(ptr) });
+
+        Ok(())
+    }
+}
diff --git a/rust/kernel/sync/atomic/generic.rs b/rust/kernel/sync/atomic/generic.rs
index 8c5bd90b2619..f496774c1686 100644
--- a/rust/kernel/sync/atomic/generic.rs
+++ b/rust/kernel/sync/atomic/generic.rs
@@ -18,6 +18,10 @@
 #[repr(transparent)]
 pub struct Atomic<T: AllowAtomic>(Opaque<T>);
 
+// SAFETY: `Atomic<T>` is safe to send between execution contexts, because `T` is
+// `AllowAtomic` and `AllowAtomic`'s safety requirement guarantees that.
+unsafe impl<T: AllowAtomic> Send for Atomic<T> {}
+
 // SAFETY: `Atomic<T>` is safe to share among execution contexts because all accesses are atomic.
 unsafe impl<T: AllowAtomic> Sync for Atomic<T> {}
 
@@ -30,8 +34,13 @@ unsafe impl<T: AllowAtomic> Sync for Atomic<T> {}
 ///
 /// # Safety
 ///
-/// [`Self`] must have the same size and alignment as [`Self::Repr`].
+/// - [`Self`] must have the same size and alignment as [`Self::Repr`].
+/// - The implementer must guarantee it is safe to transfer ownership from one execution
+///   context to another; this would normally mean the type has to be [`Send`]. However,
+///   `*mut T` is not [`Send`] yet is a basic type that needs to support atomic operations,
+///   so this safety requirement is added to the [`AllowAtomic`] trait instead. It is
+///   automatically satisfied if the type is [`Send`].
+pub unsafe trait AllowAtomic: Sized + Copy {
     /// The backing atomic implementation type.
     type Repr: AtomicImpl;
 
@@ -44,7 +53,8 @@ pub unsafe trait AllowAtomic: Sized + Send + Copy {
 
 // An `AtomicImpl` is automatically an `AllowAtomic`.
 //
-// SAFETY: `T::Repr` is `Self` (i.e. `T`), so they have the same size and alignment.
+// SAFETY: `T::Repr` is `Self` (i.e. `T`), so they have the same size and alignment. And all
+// `AtomicImpl` types are `Send`.
 unsafe impl<T: AtomicImpl> AllowAtomic for T {
     type Repr = Self;
 
-- 
2.39.5 (Apple Git-154)
From: Boqun Feng
To: linux-kernel@vger.kernel.org, rust-for-linux@vger.kernel.org, lkmm@lists.linux.dev, linux-arch@vger.kernel.org
Cc: Miguel Ojeda, Alex Gaynor, Boqun Feng, Gary Guo, Björn Roy Baron, Benno Lossin, Andreas Hindborg, Alice Ryhl, Trevor Gross, Danilo Krummrich, Will Deacon, Peter Zijlstra, Mark Rutland, Wedson Almeida Filho, Viresh Kumar, Lyude Paul, Ingo Molnar, Mitchell Levy, "Paul E. McKenney", Greg Kroah-Hartman, Linus Torvalds, Thomas Gleixner
Subject: [PATCH v5 10/10] rust: sync: Add memory barriers
Date: Wed, 18 Jun 2025 09:49:34 -0700
Message-Id: <20250618164934.19817-11-boqun.feng@gmail.com>
In-Reply-To: <20250618164934.19817-1-boqun.feng@gmail.com>
References: <20250618164934.19817-1-boqun.feng@gmail.com>

Memory barriers are building blocks for concurrent code, hence provide a
minimal set of them.

The compiler barrier, barrier(), is implemented in inline asm instead of
using core::sync::atomic::compiler_fence() because the memory models are
different: the kernel's atomics are implemented in inline asm, therefore
the compiler barrier should be implemented in inline asm as well.
Also, these barriers are currently only public to the kernel crate until
there is reasonable driver usage.

Signed-off-by: Boqun Feng
Reviewed-by: Alice Ryhl
---
 rust/helpers/barrier.c      | 18 ++++++++++
 rust/helpers/helpers.c      |  1 +
 rust/kernel/sync.rs         |  1 +
 rust/kernel/sync/barrier.rs | 67 +++++++++++++++++++++++++++++++++++++
 4 files changed, 87 insertions(+)
 create mode 100644 rust/helpers/barrier.c
 create mode 100644 rust/kernel/sync/barrier.rs

diff --git a/rust/helpers/barrier.c b/rust/helpers/barrier.c
new file mode 100644
index 000000000000..cdf28ce8e511
--- /dev/null
+++ b/rust/helpers/barrier.c
@@ -0,0 +1,18 @@
+// SPDX-License-Identifier: GPL-2.0
+
+#include <asm/barrier.h>
+
+void rust_helper_smp_mb(void)
+{
+	smp_mb();
+}
+
+void rust_helper_smp_wmb(void)
+{
+	smp_wmb();
+}
+
+void rust_helper_smp_rmb(void)
+{
+	smp_rmb();
+}
diff --git a/rust/helpers/helpers.c b/rust/helpers/helpers.c
index 83e89f6a68fb..8ddfc8f84e87 100644
--- a/rust/helpers/helpers.c
+++ b/rust/helpers/helpers.c
@@ -9,6 +9,7 @@
 
 #include "atomic.c"
 #include "auxiliary.c"
+#include "barrier.c"
 #include "blk.c"
 #include "bug.c"
 #include "build_assert.c"
diff --git a/rust/kernel/sync.rs b/rust/kernel/sync.rs
index b620027e0641..c7c0e552bafe 100644
--- a/rust/kernel/sync.rs
+++ b/rust/kernel/sync.rs
@@ -11,6 +11,7 @@
 
 mod arc;
 pub mod atomic;
+pub mod barrier;
 mod condvar;
 pub mod lock;
 mod locked_by;
diff --git a/rust/kernel/sync/barrier.rs b/rust/kernel/sync/barrier.rs
new file mode 100644
index 000000000000..36a5c70e6716
--- /dev/null
+++ b/rust/kernel/sync/barrier.rs
@@ -0,0 +1,67 @@
+// SPDX-License-Identifier: GPL-2.0
+
+//! Memory barriers.
+//!
+//! These primitives have the same semantics as their C counterparts: the precise definitions of
+//! the semantics can be found at [`LKMM`].
+//!
+//! [`LKMM`]: srctree/tools/memory-model/
+
+/// A compiler barrier.
+///
+/// An explicit compiler barrier function that prevents the compiler from moving the memory
+/// accesses either side of it to the other side.
+pub(crate) fn barrier() {
+    // By default, Rust inline asms are treated as being able to access any memory or flags,
+    // hence it suffices as a compiler barrier.
+    //
+    // SAFETY: An empty asm block should be safe.
+    unsafe {
+        core::arch::asm!("");
+    }
+}
+
+/// A full memory barrier.
+///
+/// A barrier function that prevents both the compiler and the CPU from moving the memory accesses
+/// either side of it to the other side.
+pub fn smp_mb() {
+    if cfg!(CONFIG_SMP) {
+        // SAFETY: `smp_mb()` is safe to call.
+        unsafe {
+            bindings::smp_mb();
+        }
+    } else {
+        barrier();
+    }
+}
+
+/// A write-write memory barrier.
+///
+/// A barrier function that prevents both the compiler and the CPU from moving the memory write
+/// accesses either side of it to the other side.
+pub fn smp_wmb() {
+    if cfg!(CONFIG_SMP) {
+        // SAFETY: `smp_wmb()` is safe to call.
+        unsafe {
+            bindings::smp_wmb();
+        }
+    } else {
+        barrier();
+    }
+}
+
+/// A read-read memory barrier.
+///
+/// A barrier function that prevents both the compiler and the CPU from moving the memory read
+/// accesses either side of it to the other side.
+pub fn smp_rmb() {
+    if cfg!(CONFIG_SMP) {
+        // SAFETY: `smp_rmb()` is safe to call.
+        unsafe {
+            bindings::smp_rmb();
+        }
+    } else {
+        barrier();
+    }
+}
-- 
2.39.5 (Apple Git-154)