From: Boqun Feng <boqun.feng@gmail.com>
To: rust-for-linux@vger.kernel.org, rcu@vger.kernel.org, linux-kernel@vger.kernel.org, linux-arch@vger.kernel.org, llvm@lists.linux.dev, lkmm@lists.linux.dev
Cc: Miguel Ojeda, Alex Gaynor, Gary Guo, Björn Roy Baron, Benno Lossin, Andreas Hindborg, Alice Ryhl, Alan Stern, Andrea Parri, Will Deacon, Peter Zijlstra, Nicholas Piggin, David Howells, Jade Alglave, Luc Maranget, "Paul E. McKenney", Akira Yokosawa, Daniel Lustig, Joel Fernandes, Nathan Chancellor, Nick Desaulniers, kent.overstreet@gmail.com, Greg Kroah-Hartman, elver@google.com, Mark Rutland, Thomas Gleixner, Ingo Molnar, Borislav Petkov, Dave Hansen, x86@kernel.org, "H. Peter Anvin", Catalin Marinas, torvalds@linux-foundation.org, linux-arm-kernel@lists.infradead.org, linux-fsdevel@vger.kernel.org, Trevor Gross, dakr@redhat.com, Frederic Weisbecker, Neeraj Upadhyay, Josh Triplett, Uladzislau Rezki, Steven Rostedt, Mathieu Desnoyers, Lai Jiangshan, Zqiang, Paul Walmsley, Palmer Dabbelt, Albert Ou, linux-riscv@lists.infradead.org
Subject: [RFC v3 01/12] rust: Introduce atomic API helpers
Date: Mon, 21 Apr 2025 09:42:10 -0700
Message-ID: <20250421164221.1121805-2-boqun.feng@gmail.com>
In-Reply-To: <20250421164221.1121805-1-boqun.feng@gmail.com>
References: <20250421164221.1121805-1-boqun.feng@gmail.com>

In order to support LKMM atomics in Rust, add rust_helper_* wrappers
for the atomic APIs. These helpers ensure that the implementation of
LKMM atomics in Rust is the same as in C: each helper is a thin wrapper
that calls the corresponding C atomic function. This saves the
maintenance burden of keeping two similar atomic implementations in
asm.
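As an illustration only (not part of this patch, and the exact binding
path depends on the kernel's bindgen setup), the Rust side is then able
to call such a helper as if it were the plain C atomic API:

	// Hypothetical caller; assumes bindgen exposes the helper as
	// `bindings::atomic_read()` (with the rust_helper_ prefix dropped
	// by the bindings generation).
	use kernel::bindings;

	/// # Safety
	///
	/// `v` must point to a valid `atomic_t` for the duration of the call.
	unsafe fn peek(v: *mut bindings::atomic_t) -> i32 {
	    // Same implementation as C's atomic_read(), so the LKMM (not
	    // Rust's own memory model) governs this access.
	    unsafe { bindings::atomic_read(v) }
	}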
Originally-by: Mark Rutland
Signed-off-by: Boqun Feng
---
 rust/helpers/atomic.c                     | 1038 +++++++++++++++++++++
 rust/helpers/helpers.c                    |    1 +
 scripts/atomic/gen-atomics.sh             |    1 +
 scripts/atomic/gen-rust-atomic-helpers.sh |   65 ++
 4 files changed, 1105 insertions(+)
 create mode 100644 rust/helpers/atomic.c
 create mode 100755 scripts/atomic/gen-rust-atomic-helpers.sh

diff --git a/rust/helpers/atomic.c b/rust/helpers/atomic.c
new file mode 100644
index 000000000000..00bf10887928
--- /dev/null
+++ b/rust/helpers/atomic.c
@@ -0,0 +1,1038 @@
+// SPDX-License-Identifier: GPL-2.0
+
+// Generated by scripts/atomic/gen-rust-atomic-helpers.sh
+// DO NOT MODIFY THIS FILE DIRECTLY
+
+/*
+ * This file provides helpers for the various atomic functions for Rust.
+ */
+#ifndef _RUST_ATOMIC_API_H
+#define _RUST_ATOMIC_API_H
+
+#include <linux/atomic.h>
+
+// TODO: Remove this after LTO helper support is added.
+#define __rust_helper + +__rust_helper int +rust_helper_atomic_read(const atomic_t *v) +{ + return atomic_read(v); +} + +__rust_helper int +rust_helper_atomic_read_acquire(const atomic_t *v) +{ + return atomic_read_acquire(v); +} + +__rust_helper void +rust_helper_atomic_set(atomic_t *v, int i) +{ + atomic_set(v, i); +} + +__rust_helper void +rust_helper_atomic_set_release(atomic_t *v, int i) +{ + atomic_set_release(v, i); +} + +__rust_helper void +rust_helper_atomic_add(int i, atomic_t *v) +{ + atomic_add(i, v); +} + +__rust_helper int +rust_helper_atomic_add_return(int i, atomic_t *v) +{ + return atomic_add_return(i, v); +} + +__rust_helper int +rust_helper_atomic_add_return_acquire(int i, atomic_t *v) +{ + return atomic_add_return_acquire(i, v); +} + +__rust_helper int +rust_helper_atomic_add_return_release(int i, atomic_t *v) +{ + return atomic_add_return_release(i, v); +} + +__rust_helper int +rust_helper_atomic_add_return_relaxed(int i, atomic_t *v) +{ + return atomic_add_return_relaxed(i, v); +} + +__rust_helper int +rust_helper_atomic_fetch_add(int i, atomic_t *v) +{ + return atomic_fetch_add(i, v); +} + +__rust_helper int +rust_helper_atomic_fetch_add_acquire(int i, atomic_t *v) +{ + return atomic_fetch_add_acquire(i, v); +} + +__rust_helper int +rust_helper_atomic_fetch_add_release(int i, atomic_t *v) +{ + return atomic_fetch_add_release(i, v); +} + +__rust_helper int +rust_helper_atomic_fetch_add_relaxed(int i, atomic_t *v) +{ + return atomic_fetch_add_relaxed(i, v); +} + +__rust_helper void +rust_helper_atomic_sub(int i, atomic_t *v) +{ + atomic_sub(i, v); +} + +__rust_helper int +rust_helper_atomic_sub_return(int i, atomic_t *v) +{ + return atomic_sub_return(i, v); +} + +__rust_helper int +rust_helper_atomic_sub_return_acquire(int i, atomic_t *v) +{ + return atomic_sub_return_acquire(i, v); +} + +__rust_helper int +rust_helper_atomic_sub_return_release(int i, atomic_t *v) +{ + return atomic_sub_return_release(i, v); +} + +__rust_helper int +rust_helper_atomic_sub_return_relaxed(int i, atomic_t *v) +{ + return atomic_sub_return_relaxed(i, v); +} + +__rust_helper int +rust_helper_atomic_fetch_sub(int i, atomic_t *v) +{ + return atomic_fetch_sub(i, v); +} + +__rust_helper int +rust_helper_atomic_fetch_sub_acquire(int i, atomic_t *v) +{ + return atomic_fetch_sub_acquire(i, v); +} + +__rust_helper int +rust_helper_atomic_fetch_sub_release(int i, atomic_t *v) +{ + return atomic_fetch_sub_release(i, v); +} + +__rust_helper int +rust_helper_atomic_fetch_sub_relaxed(int i, atomic_t *v) +{ + return atomic_fetch_sub_relaxed(i, v); +} + +__rust_helper void +rust_helper_atomic_inc(atomic_t *v) +{ + atomic_inc(v); +} + +__rust_helper int +rust_helper_atomic_inc_return(atomic_t *v) +{ + return atomic_inc_return(v); +} + +__rust_helper int +rust_helper_atomic_inc_return_acquire(atomic_t *v) +{ + return atomic_inc_return_acquire(v); +} + +__rust_helper int +rust_helper_atomic_inc_return_release(atomic_t *v) +{ + return atomic_inc_return_release(v); +} + +__rust_helper int +rust_helper_atomic_inc_return_relaxed(atomic_t *v) +{ + return atomic_inc_return_relaxed(v); +} + +__rust_helper int +rust_helper_atomic_fetch_inc(atomic_t *v) +{ + return atomic_fetch_inc(v); +} + +__rust_helper int +rust_helper_atomic_fetch_inc_acquire(atomic_t *v) +{ + return atomic_fetch_inc_acquire(v); +} + +__rust_helper int +rust_helper_atomic_fetch_inc_release(atomic_t *v) +{ + return atomic_fetch_inc_release(v); +} + +__rust_helper int +rust_helper_atomic_fetch_inc_relaxed(atomic_t *v) +{ + return 
atomic_fetch_inc_relaxed(v); +} + +__rust_helper void +rust_helper_atomic_dec(atomic_t *v) +{ + atomic_dec(v); +} + +__rust_helper int +rust_helper_atomic_dec_return(atomic_t *v) +{ + return atomic_dec_return(v); +} + +__rust_helper int +rust_helper_atomic_dec_return_acquire(atomic_t *v) +{ + return atomic_dec_return_acquire(v); +} + +__rust_helper int +rust_helper_atomic_dec_return_release(atomic_t *v) +{ + return atomic_dec_return_release(v); +} + +__rust_helper int +rust_helper_atomic_dec_return_relaxed(atomic_t *v) +{ + return atomic_dec_return_relaxed(v); +} + +__rust_helper int +rust_helper_atomic_fetch_dec(atomic_t *v) +{ + return atomic_fetch_dec(v); +} + +__rust_helper int +rust_helper_atomic_fetch_dec_acquire(atomic_t *v) +{ + return atomic_fetch_dec_acquire(v); +} + +__rust_helper int +rust_helper_atomic_fetch_dec_release(atomic_t *v) +{ + return atomic_fetch_dec_release(v); +} + +__rust_helper int +rust_helper_atomic_fetch_dec_relaxed(atomic_t *v) +{ + return atomic_fetch_dec_relaxed(v); +} + +__rust_helper void +rust_helper_atomic_and(int i, atomic_t *v) +{ + atomic_and(i, v); +} + +__rust_helper int +rust_helper_atomic_fetch_and(int i, atomic_t *v) +{ + return atomic_fetch_and(i, v); +} + +__rust_helper int +rust_helper_atomic_fetch_and_acquire(int i, atomic_t *v) +{ + return atomic_fetch_and_acquire(i, v); +} + +__rust_helper int +rust_helper_atomic_fetch_and_release(int i, atomic_t *v) +{ + return atomic_fetch_and_release(i, v); +} + +__rust_helper int +rust_helper_atomic_fetch_and_relaxed(int i, atomic_t *v) +{ + return atomic_fetch_and_relaxed(i, v); +} + +__rust_helper void +rust_helper_atomic_andnot(int i, atomic_t *v) +{ + atomic_andnot(i, v); +} + +__rust_helper int +rust_helper_atomic_fetch_andnot(int i, atomic_t *v) +{ + return atomic_fetch_andnot(i, v); +} + +__rust_helper int +rust_helper_atomic_fetch_andnot_acquire(int i, atomic_t *v) +{ + return atomic_fetch_andnot_acquire(i, v); +} + +__rust_helper int +rust_helper_atomic_fetch_andnot_release(int i, atomic_t *v) +{ + return atomic_fetch_andnot_release(i, v); +} + +__rust_helper int +rust_helper_atomic_fetch_andnot_relaxed(int i, atomic_t *v) +{ + return atomic_fetch_andnot_relaxed(i, v); +} + +__rust_helper void +rust_helper_atomic_or(int i, atomic_t *v) +{ + atomic_or(i, v); +} + +__rust_helper int +rust_helper_atomic_fetch_or(int i, atomic_t *v) +{ + return atomic_fetch_or(i, v); +} + +__rust_helper int +rust_helper_atomic_fetch_or_acquire(int i, atomic_t *v) +{ + return atomic_fetch_or_acquire(i, v); +} + +__rust_helper int +rust_helper_atomic_fetch_or_release(int i, atomic_t *v) +{ + return atomic_fetch_or_release(i, v); +} + +__rust_helper int +rust_helper_atomic_fetch_or_relaxed(int i, atomic_t *v) +{ + return atomic_fetch_or_relaxed(i, v); +} + +__rust_helper void +rust_helper_atomic_xor(int i, atomic_t *v) +{ + atomic_xor(i, v); +} + +__rust_helper int +rust_helper_atomic_fetch_xor(int i, atomic_t *v) +{ + return atomic_fetch_xor(i, v); +} + +__rust_helper int +rust_helper_atomic_fetch_xor_acquire(int i, atomic_t *v) +{ + return atomic_fetch_xor_acquire(i, v); +} + +__rust_helper int +rust_helper_atomic_fetch_xor_release(int i, atomic_t *v) +{ + return atomic_fetch_xor_release(i, v); +} + +__rust_helper int +rust_helper_atomic_fetch_xor_relaxed(int i, atomic_t *v) +{ + return atomic_fetch_xor_relaxed(i, v); +} + +__rust_helper int +rust_helper_atomic_xchg(atomic_t *v, int new) +{ + return atomic_xchg(v, new); +} + +__rust_helper int +rust_helper_atomic_xchg_acquire(atomic_t *v, int new) +{ + return 
atomic_xchg_acquire(v, new); +} + +__rust_helper int +rust_helper_atomic_xchg_release(atomic_t *v, int new) +{ + return atomic_xchg_release(v, new); +} + +__rust_helper int +rust_helper_atomic_xchg_relaxed(atomic_t *v, int new) +{ + return atomic_xchg_relaxed(v, new); +} + +__rust_helper int +rust_helper_atomic_cmpxchg(atomic_t *v, int old, int new) +{ + return atomic_cmpxchg(v, old, new); +} + +__rust_helper int +rust_helper_atomic_cmpxchg_acquire(atomic_t *v, int old, int new) +{ + return atomic_cmpxchg_acquire(v, old, new); +} + +__rust_helper int +rust_helper_atomic_cmpxchg_release(atomic_t *v, int old, int new) +{ + return atomic_cmpxchg_release(v, old, new); +} + +__rust_helper int +rust_helper_atomic_cmpxchg_relaxed(atomic_t *v, int old, int new) +{ + return atomic_cmpxchg_relaxed(v, old, new); +} + +__rust_helper bool +rust_helper_atomic_try_cmpxchg(atomic_t *v, int *old, int new) +{ + return atomic_try_cmpxchg(v, old, new); +} + +__rust_helper bool +rust_helper_atomic_try_cmpxchg_acquire(atomic_t *v, int *old, int new) +{ + return atomic_try_cmpxchg_acquire(v, old, new); +} + +__rust_helper bool +rust_helper_atomic_try_cmpxchg_release(atomic_t *v, int *old, int new) +{ + return atomic_try_cmpxchg_release(v, old, new); +} + +__rust_helper bool +rust_helper_atomic_try_cmpxchg_relaxed(atomic_t *v, int *old, int new) +{ + return atomic_try_cmpxchg_relaxed(v, old, new); +} + +__rust_helper bool +rust_helper_atomic_sub_and_test(int i, atomic_t *v) +{ + return atomic_sub_and_test(i, v); +} + +__rust_helper bool +rust_helper_atomic_dec_and_test(atomic_t *v) +{ + return atomic_dec_and_test(v); +} + +__rust_helper bool +rust_helper_atomic_inc_and_test(atomic_t *v) +{ + return atomic_inc_and_test(v); +} + +__rust_helper bool +rust_helper_atomic_add_negative(int i, atomic_t *v) +{ + return atomic_add_negative(i, v); +} + +__rust_helper bool +rust_helper_atomic_add_negative_acquire(int i, atomic_t *v) +{ + return atomic_add_negative_acquire(i, v); +} + +__rust_helper bool +rust_helper_atomic_add_negative_release(int i, atomic_t *v) +{ + return atomic_add_negative_release(i, v); +} + +__rust_helper bool +rust_helper_atomic_add_negative_relaxed(int i, atomic_t *v) +{ + return atomic_add_negative_relaxed(i, v); +} + +__rust_helper int +rust_helper_atomic_fetch_add_unless(atomic_t *v, int a, int u) +{ + return atomic_fetch_add_unless(v, a, u); +} + +__rust_helper bool +rust_helper_atomic_add_unless(atomic_t *v, int a, int u) +{ + return atomic_add_unless(v, a, u); +} + +__rust_helper bool +rust_helper_atomic_inc_not_zero(atomic_t *v) +{ + return atomic_inc_not_zero(v); +} + +__rust_helper bool +rust_helper_atomic_inc_unless_negative(atomic_t *v) +{ + return atomic_inc_unless_negative(v); +} + +__rust_helper bool +rust_helper_atomic_dec_unless_positive(atomic_t *v) +{ + return atomic_dec_unless_positive(v); +} + +__rust_helper int +rust_helper_atomic_dec_if_positive(atomic_t *v) +{ + return atomic_dec_if_positive(v); +} + +__rust_helper s64 +rust_helper_atomic64_read(const atomic64_t *v) +{ + return atomic64_read(v); +} + +__rust_helper s64 +rust_helper_atomic64_read_acquire(const atomic64_t *v) +{ + return atomic64_read_acquire(v); +} + +__rust_helper void +rust_helper_atomic64_set(atomic64_t *v, s64 i) +{ + atomic64_set(v, i); +} + +__rust_helper void +rust_helper_atomic64_set_release(atomic64_t *v, s64 i) +{ + atomic64_set_release(v, i); +} + +__rust_helper void +rust_helper_atomic64_add(s64 i, atomic64_t *v) +{ + atomic64_add(i, v); +} + +__rust_helper s64 +rust_helper_atomic64_add_return(s64 
i, atomic64_t *v) +{ + return atomic64_add_return(i, v); +} + +__rust_helper s64 +rust_helper_atomic64_add_return_acquire(s64 i, atomic64_t *v) +{ + return atomic64_add_return_acquire(i, v); +} + +__rust_helper s64 +rust_helper_atomic64_add_return_release(s64 i, atomic64_t *v) +{ + return atomic64_add_return_release(i, v); +} + +__rust_helper s64 +rust_helper_atomic64_add_return_relaxed(s64 i, atomic64_t *v) +{ + return atomic64_add_return_relaxed(i, v); +} + +__rust_helper s64 +rust_helper_atomic64_fetch_add(s64 i, atomic64_t *v) +{ + return atomic64_fetch_add(i, v); +} + +__rust_helper s64 +rust_helper_atomic64_fetch_add_acquire(s64 i, atomic64_t *v) +{ + return atomic64_fetch_add_acquire(i, v); +} + +__rust_helper s64 +rust_helper_atomic64_fetch_add_release(s64 i, atomic64_t *v) +{ + return atomic64_fetch_add_release(i, v); +} + +__rust_helper s64 +rust_helper_atomic64_fetch_add_relaxed(s64 i, atomic64_t *v) +{ + return atomic64_fetch_add_relaxed(i, v); +} + +__rust_helper void +rust_helper_atomic64_sub(s64 i, atomic64_t *v) +{ + atomic64_sub(i, v); +} + +__rust_helper s64 +rust_helper_atomic64_sub_return(s64 i, atomic64_t *v) +{ + return atomic64_sub_return(i, v); +} + +__rust_helper s64 +rust_helper_atomic64_sub_return_acquire(s64 i, atomic64_t *v) +{ + return atomic64_sub_return_acquire(i, v); +} + +__rust_helper s64 +rust_helper_atomic64_sub_return_release(s64 i, atomic64_t *v) +{ + return atomic64_sub_return_release(i, v); +} + +__rust_helper s64 +rust_helper_atomic64_sub_return_relaxed(s64 i, atomic64_t *v) +{ + return atomic64_sub_return_relaxed(i, v); +} + +__rust_helper s64 +rust_helper_atomic64_fetch_sub(s64 i, atomic64_t *v) +{ + return atomic64_fetch_sub(i, v); +} + +__rust_helper s64 +rust_helper_atomic64_fetch_sub_acquire(s64 i, atomic64_t *v) +{ + return atomic64_fetch_sub_acquire(i, v); +} + +__rust_helper s64 +rust_helper_atomic64_fetch_sub_release(s64 i, atomic64_t *v) +{ + return atomic64_fetch_sub_release(i, v); +} + +__rust_helper s64 +rust_helper_atomic64_fetch_sub_relaxed(s64 i, atomic64_t *v) +{ + return atomic64_fetch_sub_relaxed(i, v); +} + +__rust_helper void +rust_helper_atomic64_inc(atomic64_t *v) +{ + atomic64_inc(v); +} + +__rust_helper s64 +rust_helper_atomic64_inc_return(atomic64_t *v) +{ + return atomic64_inc_return(v); +} + +__rust_helper s64 +rust_helper_atomic64_inc_return_acquire(atomic64_t *v) +{ + return atomic64_inc_return_acquire(v); +} + +__rust_helper s64 +rust_helper_atomic64_inc_return_release(atomic64_t *v) +{ + return atomic64_inc_return_release(v); +} + +__rust_helper s64 +rust_helper_atomic64_inc_return_relaxed(atomic64_t *v) +{ + return atomic64_inc_return_relaxed(v); +} + +__rust_helper s64 +rust_helper_atomic64_fetch_inc(atomic64_t *v) +{ + return atomic64_fetch_inc(v); +} + +__rust_helper s64 +rust_helper_atomic64_fetch_inc_acquire(atomic64_t *v) +{ + return atomic64_fetch_inc_acquire(v); +} + +__rust_helper s64 +rust_helper_atomic64_fetch_inc_release(atomic64_t *v) +{ + return atomic64_fetch_inc_release(v); +} + +__rust_helper s64 +rust_helper_atomic64_fetch_inc_relaxed(atomic64_t *v) +{ + return atomic64_fetch_inc_relaxed(v); +} + +__rust_helper void +rust_helper_atomic64_dec(atomic64_t *v) +{ + atomic64_dec(v); +} + +__rust_helper s64 +rust_helper_atomic64_dec_return(atomic64_t *v) +{ + return atomic64_dec_return(v); +} + +__rust_helper s64 +rust_helper_atomic64_dec_return_acquire(atomic64_t *v) +{ + return atomic64_dec_return_acquire(v); +} + +__rust_helper s64 +rust_helper_atomic64_dec_return_release(atomic64_t *v) +{ + return 
atomic64_dec_return_release(v); +} + +__rust_helper s64 +rust_helper_atomic64_dec_return_relaxed(atomic64_t *v) +{ + return atomic64_dec_return_relaxed(v); +} + +__rust_helper s64 +rust_helper_atomic64_fetch_dec(atomic64_t *v) +{ + return atomic64_fetch_dec(v); +} + +__rust_helper s64 +rust_helper_atomic64_fetch_dec_acquire(atomic64_t *v) +{ + return atomic64_fetch_dec_acquire(v); +} + +__rust_helper s64 +rust_helper_atomic64_fetch_dec_release(atomic64_t *v) +{ + return atomic64_fetch_dec_release(v); +} + +__rust_helper s64 +rust_helper_atomic64_fetch_dec_relaxed(atomic64_t *v) +{ + return atomic64_fetch_dec_relaxed(v); +} + +__rust_helper void +rust_helper_atomic64_and(s64 i, atomic64_t *v) +{ + atomic64_and(i, v); +} + +__rust_helper s64 +rust_helper_atomic64_fetch_and(s64 i, atomic64_t *v) +{ + return atomic64_fetch_and(i, v); +} + +__rust_helper s64 +rust_helper_atomic64_fetch_and_acquire(s64 i, atomic64_t *v) +{ + return atomic64_fetch_and_acquire(i, v); +} + +__rust_helper s64 +rust_helper_atomic64_fetch_and_release(s64 i, atomic64_t *v) +{ + return atomic64_fetch_and_release(i, v); +} + +__rust_helper s64 +rust_helper_atomic64_fetch_and_relaxed(s64 i, atomic64_t *v) +{ + return atomic64_fetch_and_relaxed(i, v); +} + +__rust_helper void +rust_helper_atomic64_andnot(s64 i, atomic64_t *v) +{ + atomic64_andnot(i, v); +} + +__rust_helper s64 +rust_helper_atomic64_fetch_andnot(s64 i, atomic64_t *v) +{ + return atomic64_fetch_andnot(i, v); +} + +__rust_helper s64 +rust_helper_atomic64_fetch_andnot_acquire(s64 i, atomic64_t *v) +{ + return atomic64_fetch_andnot_acquire(i, v); +} + +__rust_helper s64 +rust_helper_atomic64_fetch_andnot_release(s64 i, atomic64_t *v) +{ + return atomic64_fetch_andnot_release(i, v); +} + +__rust_helper s64 +rust_helper_atomic64_fetch_andnot_relaxed(s64 i, atomic64_t *v) +{ + return atomic64_fetch_andnot_relaxed(i, v); +} + +__rust_helper void +rust_helper_atomic64_or(s64 i, atomic64_t *v) +{ + atomic64_or(i, v); +} + +__rust_helper s64 +rust_helper_atomic64_fetch_or(s64 i, atomic64_t *v) +{ + return atomic64_fetch_or(i, v); +} + +__rust_helper s64 +rust_helper_atomic64_fetch_or_acquire(s64 i, atomic64_t *v) +{ + return atomic64_fetch_or_acquire(i, v); +} + +__rust_helper s64 +rust_helper_atomic64_fetch_or_release(s64 i, atomic64_t *v) +{ + return atomic64_fetch_or_release(i, v); +} + +__rust_helper s64 +rust_helper_atomic64_fetch_or_relaxed(s64 i, atomic64_t *v) +{ + return atomic64_fetch_or_relaxed(i, v); +} + +__rust_helper void +rust_helper_atomic64_xor(s64 i, atomic64_t *v) +{ + atomic64_xor(i, v); +} + +__rust_helper s64 +rust_helper_atomic64_fetch_xor(s64 i, atomic64_t *v) +{ + return atomic64_fetch_xor(i, v); +} + +__rust_helper s64 +rust_helper_atomic64_fetch_xor_acquire(s64 i, atomic64_t *v) +{ + return atomic64_fetch_xor_acquire(i, v); +} + +__rust_helper s64 +rust_helper_atomic64_fetch_xor_release(s64 i, atomic64_t *v) +{ + return atomic64_fetch_xor_release(i, v); +} + +__rust_helper s64 +rust_helper_atomic64_fetch_xor_relaxed(s64 i, atomic64_t *v) +{ + return atomic64_fetch_xor_relaxed(i, v); +} + +__rust_helper s64 +rust_helper_atomic64_xchg(atomic64_t *v, s64 new) +{ + return atomic64_xchg(v, new); +} + +__rust_helper s64 +rust_helper_atomic64_xchg_acquire(atomic64_t *v, s64 new) +{ + return atomic64_xchg_acquire(v, new); +} + +__rust_helper s64 +rust_helper_atomic64_xchg_release(atomic64_t *v, s64 new) +{ + return atomic64_xchg_release(v, new); +} + +__rust_helper s64 +rust_helper_atomic64_xchg_relaxed(atomic64_t *v, s64 new) +{ + return 
atomic64_xchg_relaxed(v, new); +} + +__rust_helper s64 +rust_helper_atomic64_cmpxchg(atomic64_t *v, s64 old, s64 new) +{ + return atomic64_cmpxchg(v, old, new); +} + +__rust_helper s64 +rust_helper_atomic64_cmpxchg_acquire(atomic64_t *v, s64 old, s64 new) +{ + return atomic64_cmpxchg_acquire(v, old, new); +} + +__rust_helper s64 +rust_helper_atomic64_cmpxchg_release(atomic64_t *v, s64 old, s64 new) +{ + return atomic64_cmpxchg_release(v, old, new); +} + +__rust_helper s64 +rust_helper_atomic64_cmpxchg_relaxed(atomic64_t *v, s64 old, s64 new) +{ + return atomic64_cmpxchg_relaxed(v, old, new); +} + +__rust_helper bool +rust_helper_atomic64_try_cmpxchg(atomic64_t *v, s64 *old, s64 new) +{ + return atomic64_try_cmpxchg(v, old, new); +} + +__rust_helper bool +rust_helper_atomic64_try_cmpxchg_acquire(atomic64_t *v, s64 *old, s64 new) +{ + return atomic64_try_cmpxchg_acquire(v, old, new); +} + +__rust_helper bool +rust_helper_atomic64_try_cmpxchg_release(atomic64_t *v, s64 *old, s64 new) +{ + return atomic64_try_cmpxchg_release(v, old, new); +} + +__rust_helper bool +rust_helper_atomic64_try_cmpxchg_relaxed(atomic64_t *v, s64 *old, s64 new) +{ + return atomic64_try_cmpxchg_relaxed(v, old, new); +} + +__rust_helper bool +rust_helper_atomic64_sub_and_test(s64 i, atomic64_t *v) +{ + return atomic64_sub_and_test(i, v); +} + +__rust_helper bool +rust_helper_atomic64_dec_and_test(atomic64_t *v) +{ + return atomic64_dec_and_test(v); +} + +__rust_helper bool +rust_helper_atomic64_inc_and_test(atomic64_t *v) +{ + return atomic64_inc_and_test(v); +} + +__rust_helper bool +rust_helper_atomic64_add_negative(s64 i, atomic64_t *v) +{ + return atomic64_add_negative(i, v); +} + +__rust_helper bool +rust_helper_atomic64_add_negative_acquire(s64 i, atomic64_t *v) +{ + return atomic64_add_negative_acquire(i, v); +} + +__rust_helper bool +rust_helper_atomic64_add_negative_release(s64 i, atomic64_t *v) +{ + return atomic64_add_negative_release(i, v); +} + +__rust_helper bool +rust_helper_atomic64_add_negative_relaxed(s64 i, atomic64_t *v) +{ + return atomic64_add_negative_relaxed(i, v); +} + +__rust_helper s64 +rust_helper_atomic64_fetch_add_unless(atomic64_t *v, s64 a, s64 u) +{ + return atomic64_fetch_add_unless(v, a, u); +} + +__rust_helper bool +rust_helper_atomic64_add_unless(atomic64_t *v, s64 a, s64 u) +{ + return atomic64_add_unless(v, a, u); +} + +__rust_helper bool +rust_helper_atomic64_inc_not_zero(atomic64_t *v) +{ + return atomic64_inc_not_zero(v); +} + +__rust_helper bool +rust_helper_atomic64_inc_unless_negative(atomic64_t *v) +{ + return atomic64_inc_unless_negative(v); +} + +__rust_helper bool +rust_helper_atomic64_dec_unless_positive(atomic64_t *v) +{ + return atomic64_dec_unless_positive(v); +} + +__rust_helper s64 +rust_helper_atomic64_dec_if_positive(atomic64_t *v) +{ + return atomic64_dec_if_positive(v); +} + +#endif /* _RUST_ATOMIC_API_H */ +// b032d261814b3e119b72dbf7d21447f6731325ee diff --git a/rust/helpers/helpers.c b/rust/helpers/helpers.c index 1e7c84df7252..b20ee7cef74d 100644 --- a/rust/helpers/helpers.c +++ b/rust/helpers/helpers.c @@ -7,6 +7,7 @@ * Sorted alphabetically. 
  */
 
+#include "atomic.c"
 #include "blk.c"
 #include "bug.c"
 #include "build_assert.c"

diff --git a/scripts/atomic/gen-atomics.sh b/scripts/atomic/gen-atomics.sh
index 5b98a8307693..02508d0d6fe4 100755
--- a/scripts/atomic/gen-atomics.sh
+++ b/scripts/atomic/gen-atomics.sh
@@ -11,6 +11,7 @@ cat <<EOF |
 gen-atomic-instrumented.sh      linux/atomic/atomic-instrumented.h
 gen-atomic-long.sh              linux/atomic/atomic-long.h
 gen-atomic-fallback.sh          linux/atomic/atomic-arch-fallback.h
+gen-rust-atomic-helpers.sh      ../rust/helpers/atomic.c
 EOF
 while read script header args; do
 	/bin/sh ${ATOMICDIR}/${script} ${ATOMICTBL} ${args} > ${LINUXDIR}/include/${header}

diff --git a/scripts/atomic/gen-rust-atomic-helpers.sh b/scripts/atomic/gen-rust-atomic-helpers.sh
new file mode 100755
index 000000000000..72f2e5bde0c6
--- /dev/null
+++ b/scripts/atomic/gen-rust-atomic-helpers.sh
@@ -0,0 +1,65 @@
+#!/bin/sh
+# SPDX-License-Identifier: GPL-2.0
+
+ATOMICDIR=$(dirname $0)
+
+. ${ATOMICDIR}/atomic-tbl.sh
+
+#gen_proto_order_variant(meta, pfx, name, sfx, order, atomic, int, arg...)
+gen_proto_order_variant()
+{
+	local meta="$1"; shift
+	local pfx="$1"; shift
+	local name="$1"; shift
+	local sfx="$1"; shift
+	local order="$1"; shift
+	local atomic="$1"; shift
+	local int="$1"; shift
+
+	local atomicname="${atomic}_${pfx}${name}${sfx}${order}"
+
+	local ret="$(gen_ret_type "${meta}" "${int}")"
+	local params="$(gen_params "${int}" "${atomic}" "$@")"
+	local args="$(gen_args "$@")"
+	local retstmt="$(gen_ret_stmt "${meta}")"
+
+cat <<EOF
+__rust_helper ${ret}
+rust_helper_${atomicname}(${params})
+{
+	${retstmt}${atomicname}(${args});
+}
+
+EOF
+}
+
+cat << EOF
+// SPDX-License-Identifier: GPL-2.0
+
+// Generated by $0
+// DO NOT MODIFY THIS FILE DIRECTLY
+
+/*
+ * This file provides helpers for the various atomic functions for Rust.
+ */
+#ifndef _RUST_ATOMIC_API_H
+#define _RUST_ATOMIC_API_H
+
+#include <linux/atomic.h>
+
+// TODO: Remove this after LTO helper support is added.
+#define __rust_helper
+
+EOF
+
+grep '^[a-z]' "$1" | while read name meta args; do
+	gen_proto "${meta}" "${name}" "atomic" "int" ${args}
+done
+
+grep '^[a-z]' "$1" | while read name meta args; do
+	gen_proto "${meta}" "${name}" "atomic64" "s64" ${args}
+done
+
+cat <<EOF
+#endif /* _RUST_ATOMIC_API_H */
+EOF
--
2.47.1
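As a sketch of how these generated C helpers surface on the Rust side
(illustrative only; the real declarations are produced by bindgen, and
the mapping of rust_helper_* symbols back to their plain names is done
by the kernel's bindings post-processing, not written by hand):

	// Hypothetical extern declaration for one generated helper.
	#[repr(C)]
	pub struct atomic_t {
	    counter: i32,
	}

	extern "C" {
	    #[link_name = "rust_helper_atomic_fetch_add"]
	    pub fn atomic_fetch_add(i: i32, v: *mut atomic_t) -> i32;
	}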
From: Boqun Feng <boqun.feng@gmail.com>
To: rust-for-linux@vger.kernel.org, rcu@vger.kernel.org, linux-kernel@vger.kernel.org, linux-arch@vger.kernel.org, llvm@lists.linux.dev, lkmm@lists.linux.dev
Cc: (same recipient list as patch 01)
Subject: [RFC v3 02/12] rust: sync: Add basic atomic operation mapping framework
Date: Mon, 21 Apr 2025 09:42:11 -0700
Message-ID: <20250421164221.1121805-3-boqun.feng@gmail.com>
In-Reply-To: <20250421164221.1121805-1-boqun.feng@gmail.com>
References: <20250421164221.1121805-1-boqun.feng@gmail.com>

Preparation for the generic atomic implementation. To unify the
implementation of a generic method over `i32` and `i64`, the C side
atomic methods need to be grouped so that in a generic method, they can
be referred to via the implementing type (e.g. `T::atomic_read()`);
otherwise their parameters and return values differ between `i32` and
`i64`, which would require a `transmute()` to unify the types into a
`T`.

Introduce `AtomicImpl` to represent a basic type in Rust that has a
direct mapping to an atomic implementation from C. This trait is
sealed, and currently only `i32` and `i64` implement it.

Further, different methods are put into different `*Ops` trait groups.
This is for the future when smaller types like `i8`/`i16` are supported
but only with a limited set of APIs (e.g. only set(), load(), xchg()
and cmpxchg(), no add() or sub() etc).

While the atomic mod is introduced, documentation is also added for
memory models and data races.

Also bump my role to the maintainer of ATOMIC INFRASTRUCTURE to reflect
my responsibility for the Rust atomic mod.
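As an illustration of the kind of generic code this grouping enables
(hypothetical function, not part of this patch):

	use kernel::sync::atomic::ops::AtomicHasBasicOps;

	/// # Safety
	///
	/// `ptr` must point to a valid, live atomic variable of type `T`.
	unsafe fn generic_read<T: AtomicHasBasicOps>(ptr: *mut T) -> T {
	    // The same body works for both `i32` (atomic_t) and `i64`
	    // (atomic64_t); no transmute() is needed because the C calls
	    // are grouped behind the trait.
	    unsafe { T::atomic_read(ptr) }
	}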
Signed-off-by: Boqun Feng
---
 MAINTAINERS                    |   4 +-
 rust/kernel/sync.rs            |   1 +
 rust/kernel/sync/atomic.rs     |  19 ++++
 rust/kernel/sync/atomic/ops.rs | 199 +++++++++++++++++++++++++++++++++
 4 files changed, 222 insertions(+), 1 deletion(-)
 create mode 100644 rust/kernel/sync/atomic.rs
 create mode 100644 rust/kernel/sync/atomic/ops.rs

diff --git a/MAINTAINERS b/MAINTAINERS
index fa1e04e87d1d..134017f36aec 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -3813,7 +3813,7 @@ F:	drivers/input/touchscreen/atmel_mxt_ts.c
 ATOMIC INFRASTRUCTURE
 M:	Will Deacon
 M:	Peter Zijlstra
-R:	Boqun Feng
+M:	Boqun Feng
 R:	Mark Rutland
 L:	linux-kernel@vger.kernel.org
 S:	Maintained
@@ -3822,6 +3822,8 @@ F:	arch/*/include/asm/atomic*.h
 F:	include/*/atomic*.h
 F:	include/linux/refcount.h
 F:	scripts/atomic/
+F:	rust/kernel/sync/atomic.rs
+F:	rust/kernel/sync/atomic/
 
 ATTO EXPRESSSAS SAS/SATA RAID SCSI DRIVER
 M:	Bradley Grove

diff --git a/rust/kernel/sync.rs b/rust/kernel/sync.rs
index 36a719015583..b620027e0641 100644
--- a/rust/kernel/sync.rs
+++ b/rust/kernel/sync.rs
@@ -10,6 +10,7 @@ use pin_init;
 
 mod arc;
+pub mod atomic;
 mod condvar;
 pub mod lock;
 mod locked_by;

diff --git a/rust/kernel/sync/atomic.rs b/rust/kernel/sync/atomic.rs
new file mode 100644
index 000000000000..65e41dba97b7
--- /dev/null
+++ b/rust/kernel/sync/atomic.rs
@@ -0,0 +1,19 @@
+// SPDX-License-Identifier: GPL-2.0
+
+//! Atomic primitives.
+//!
+//! These primitives have the same semantics as their C counterparts, and the precise definitions
+//! of the semantics can be found at [`LKMM`]. Note that the Linux Kernel Memory (Consistency)
+//! Model is the only model for Rust code in the kernel, and Rust's own atomics should be avoided.
+//!
+//! # Data races
+//!
+//! [`LKMM`] atomics have different rules regarding data races:
+//!
+//! - A normal write from C side is treated as an atomic write if
+//!   CONFIG_KCSAN_ASSUME_PLAIN_WRITES_ATOMIC=y.
+//! - Mixed-size atomic accesses don't cause data races.
+//!
+//! [`LKMM`]: srctree/tools/memory-model/
+
+pub mod ops;

diff --git a/rust/kernel/sync/atomic/ops.rs b/rust/kernel/sync/atomic/ops.rs
new file mode 100644
index 000000000000..1c0a87d31bf0
--- /dev/null
+++ b/rust/kernel/sync/atomic/ops.rs
@@ -0,0 +1,199 @@
+// SPDX-License-Identifier: GPL-2.0
+
+//! Atomic implementations.
+//!
+//! Provides a 1:1 mapping of atomic implementations.
+
+use crate::bindings::*;
+use crate::macros::paste;
+
+mod private {
+    /// Sealed trait marker to disable customized impls on atomic implementation traits.
+    pub trait Sealed {}
+}
+
+// `i32` and `i64` are the only supported atomic implementations.
+impl private::Sealed for i32 {}
+impl private::Sealed for i64 {}
+
+/// A marker trait for types that implement atomic operations with C side primitives.
+///
+/// This trait is sealed, and only types that have a direct mapping to the C side atomics should
+/// impl this:
+///
+/// - `i32` maps to `atomic_t`.
+/// - `i64` maps to `atomic64_t`.
+pub trait AtomicImpl: Sized + Send + Copy + private::Sealed {}
+
+// `atomic_t` impl atomic operations on `i32`.
+impl AtomicImpl for i32 {}
+
+// `atomic64_t` impl atomic operations on `i64`.
+impl AtomicImpl for i64 {}
+
+// This macro generates the function signature with the given argument list and return type.
+macro_rules! declare_atomic_method {
+    (
+        $func:ident($($arg:ident : $arg_type:ty),*) $(-> $ret:ty)?
+    ) => {
+        paste!(
+            #[doc = concat!("Atomic ", stringify!($func))]
+            #[doc = "# Safety"]
+            #[doc = "- any pointer passed to the function has to be a valid pointer"]
+            #[doc = "- Accesses must not cause data races per LKMM:"]
+            #[doc = "  - atomic read racing with normal read, normal write or atomic write is not a data race."]
+            #[doc = "  - atomic write racing with normal read or normal write is a data race, unless the"]
+            #[doc = "    normal accesses are done at C side and considered as immune to data"]
+            #[doc = "    races, e.g. CONFIG_KCSAN_ASSUME_PLAIN_WRITES_ATOMIC."]
+            unsafe fn [< atomic_ $func >]($($arg: $arg_type,)*) $(-> $ret)?;
+        );
+    };
+    (
+        $func:ident [$variant:ident $($rest:ident)*]($($arg_sig:tt)*) $(-> $ret:ty)?
+    ) => {
+        paste!(
+            declare_atomic_method!(
+                [< $func _ $variant >]($($arg_sig)*) $(-> $ret)?
+            );
+        );
+
+        declare_atomic_method!(
+            $func [$($rest)*]($($arg_sig)*) $(-> $ret)?
+        );
+    };
+    (
+        $func:ident []($($arg_sig:tt)*) $(-> $ret:ty)?
+    ) => {
+        declare_atomic_method!(
+            $func($($arg_sig)*) $(-> $ret)?
+        );
+    }
+}
+
+// This macro generates the function implementation with the given argument list and return type,
+// and it will replace the "call(...)" expression with "$ctype _ $func" to call the real C function.
+macro_rules! impl_atomic_method {
+    (
+        ($ctype:ident) $func:ident($($arg:ident: $arg_type:ty),*) $(-> $ret:ty)? {
+            call($($c_arg:expr),*)
+        }
+    ) => {
+        paste!(
+            #[inline(always)]
+            unsafe fn [< atomic_ $func >]($($arg: $arg_type,)*) $(-> $ret)? {
+                // SAFETY: Per function safety requirement, all pointers are valid, and accesses
+                // won't cause data races per LKMM.
+                unsafe { [< $ctype _ $func >]($($c_arg,)*) }
+            }
+        );
+    };
+    (
+        ($ctype:ident) $func:ident[$variant:ident $($rest:ident)*]($($arg_sig:tt)*) $(-> $ret:ty)? {
+            call($($arg:tt)*)
+        }
+    ) => {
+        paste!(
+            impl_atomic_method!(
+                ($ctype) [< $func _ $variant >]($($arg_sig)*) $( -> $ret)? {
+                    call($($arg)*)
+                }
+            );
+        );
+        impl_atomic_method!(
+            ($ctype) $func [$($rest)*]($($arg_sig)*) $( -> $ret)? {
+                call($($arg)*)
+            }
+        );
+    };
+    (
+        ($ctype:ident) $func:ident[]($($arg_sig:tt)*) $( -> $ret:ty)? {
+            call($($arg:tt)*)
+        }
+    ) => {
+        impl_atomic_method!(
+            ($ctype) $func($($arg_sig)*) $(-> $ret)? {
+                call($($arg)*)
+            }
+        );
+    }
+}
+
+// Declares the $ops trait with methods and implements the trait for `i32` and `i64`.
+macro_rules! declare_and_impl_atomic_methods {
+    ($ops:ident ($doc:literal) {
+        $(
+            $func:ident [$($variant:ident),*]($($arg_sig:tt)*) $( -> $ret:ty)? {
+                call($($arg:tt)*)
+            }
+        )*
+    }) => {
+        #[doc = $doc]
+        pub trait $ops: AtomicImpl {
+            $(
+                declare_atomic_method!(
+                    $func[$($variant)*]($($arg_sig)*) $(-> $ret)?
+                );
+            )*
+        }
+
+        impl $ops for i32 {
+            $(
+                impl_atomic_method!(
+                    (atomic) $func[$($variant)*]($($arg_sig)*) $(-> $ret)? {
+                        call($($arg)*)
+                    }
+                );
+            )*
+        }
+
+        impl $ops for i64 {
+            $(
+                impl_atomic_method!(
+                    (atomic64) $func[$($variant)*]($($arg_sig)*) $(-> $ret)? {
+                        call($($arg)*)
+                    }
+                );
+            )*
+        }
+    }
+}
+
+declare_and_impl_atomic_methods!(
+    AtomicHasBasicOps ("Basic atomic operations") {
+        read[acquire](ptr: *mut Self) -> Self {
+            call(ptr as *mut _)
+        }
+
+        set[release](ptr: *mut Self, v: Self) {
+            call(ptr as *mut _, v)
+        }
+    }
+);
+
+declare_and_impl_atomic_methods!(
+    AtomicHasXchgOps ("Exchange and compare-and-exchange atomic operations") {
+        xchg[acquire, release, relaxed](ptr: *mut Self, v: Self) -> Self {
+            call(ptr as *mut _, v)
+        }
+
+        cmpxchg[acquire, release, relaxed](ptr: *mut Self, old: Self, new: Self) -> Self {
+            call(ptr as *mut _, old, new)
+        }
+
+        try_cmpxchg[acquire, release, relaxed](ptr: *mut Self, old: *mut Self, new: Self) -> bool {
+            call(ptr as *mut _, old, new)
+        }
+    }
+);
+
+declare_and_impl_atomic_methods!(
+    AtomicHasArithmeticOps ("Atomic arithmetic operations") {
+        add[](ptr: *mut Self, v: Self) {
+            call(v, ptr as *mut _)
+        }
+
+        fetch_add[acquire, release, relaxed](ptr: *mut Self, v: Self) -> Self {
+            call(v, ptr as *mut _)
+        }
+    }
+);
--
2.47.1
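For reference, a sketch of roughly what the `AtomicHasBasicOps`
invocation above expands to for `i32` (doc comments elided; paraphrased
from the macros rather than taken from real expansion output, and it
assumes the C `atomic_read()` etc. bindings are in scope):

	pub trait AtomicHasBasicOps: AtomicImpl {
	    unsafe fn atomic_read(ptr: *mut Self) -> Self;
	    unsafe fn atomic_read_acquire(ptr: *mut Self) -> Self;
	    unsafe fn atomic_set(ptr: *mut Self, v: Self);
	    unsafe fn atomic_set_release(ptr: *mut Self, v: Self);
	}

	impl AtomicHasBasicOps for i32 {
	    #[inline(always)]
	    unsafe fn atomic_read(ptr: *mut Self) -> Self {
	        // SAFETY: delegated to the caller per the trait's contract.
	        unsafe { atomic_read(ptr as *mut _) }
	    }
	    #[inline(always)]
	    unsafe fn atomic_read_acquire(ptr: *mut Self) -> Self {
	        unsafe { atomic_read_acquire(ptr as *mut _) }
	    }
	    #[inline(always)]
	    unsafe fn atomic_set(ptr: *mut Self, v: Self) {
	        unsafe { atomic_set(ptr as *mut _, v) }
	    }
	    #[inline(always)]
	    unsafe fn atomic_set_release(ptr: *mut Self, v: Self) {
	        unsafe { atomic_set_release(ptr as *mut _, v) }
	    }
	}

The `i64` impl is identical except that it calls the atomic64_*
variants.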
From: Boqun Feng <boqun.feng@gmail.com>
To: rust-for-linux@vger.kernel.org, rcu@vger.kernel.org, linux-kernel@vger.kernel.org, linux-arch@vger.kernel.org, llvm@lists.linux.dev, lkmm@lists.linux.dev
Cc: (same recipient list as patch 01)
Subject: [RFC v3 03/12] rust: sync: atomic: Add ordering annotation types
Date: Mon, 21 Apr 2025 09:42:12 -0700
Message-ID: <20250421164221.1121805-4-boqun.feng@gmail.com>
In-Reply-To: <20250421164221.1121805-1-boqun.feng@gmail.com>
References: <20250421164221.1121805-1-boqun.feng@gmail.com>

Preparation for atomic primitives. Instead of a suffix like _acquire, a
method parameter along with the corresponding generic parameter will be
used to specify the ordering of an atomic operation. For example,
atomic load() can be defined as:

	impl Atomic<T> {
	    pub fn load<O: AcquireOrRelaxed>(&self, _o: O) -> T { ... }
	}

and acquire users would do:

	let r = x.load(Acquire);

relaxed users:

	let r = x.load(Relaxed);

doing the following:

	let r = x.load(Release);

will cause a compiler error.

Compared to suffixes, it's easier to tell what ordering variants an
operation has, and it also makes it easier to unify the implementation
of all ordering variants in one method via generics. The `IS_RELAXED`
and `ORDER` associated consts are for generic functions to pick up the
particular implementation specified by an ordering annotation.
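A sketch of how `IS_RELAXED` can drive such a unified implementation
(hypothetical free function; the real `Atomic<T>` methods come later in
this series):

	use kernel::sync::atomic::ops::AtomicHasBasicOps;
	use kernel::sync::atomic::ordering::AcquireOrRelaxed;

	/// # Safety
	///
	/// `ptr` must point to a valid, live atomic variable.
	unsafe fn load<T: AtomicHasBasicOps, O: AcquireOrRelaxed>(ptr: *mut T, _o: O) -> T {
	    // `O::IS_RELAXED` is a const, so the branch is resolved at
	    // compile time and only one C variant is ever emitted.
	    if O::IS_RELAXED {
	        unsafe { T::atomic_read(ptr) }
	    } else {
	        unsafe { T::atomic_read_acquire(ptr) }
	    }
	}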
Signed-off-by: Boqun Feng
---
 rust/kernel/sync/atomic.rs          |  3 +
 rust/kernel/sync/atomic/ordering.rs | 94 +++++++++++++++++++++++++++++
 2 files changed, 97 insertions(+)
 create mode 100644 rust/kernel/sync/atomic/ordering.rs

diff --git a/rust/kernel/sync/atomic.rs b/rust/kernel/sync/atomic.rs
index 65e41dba97b7..9fe5d81fc2a9 100644
--- a/rust/kernel/sync/atomic.rs
+++ b/rust/kernel/sync/atomic.rs
@@ -17,3 +17,6 @@
 //! [`LKMM`]: srctree/tools/memory-model/
 
 pub mod ops;
+pub mod ordering;
+
+pub use ordering::{Acquire, Full, Relaxed, Release};

diff --git a/rust/kernel/sync/atomic/ordering.rs b/rust/kernel/sync/atomic/ordering.rs
new file mode 100644
index 000000000000..6cf01cd276c6
--- /dev/null
+++ b/rust/kernel/sync/atomic/ordering.rs
@@ -0,0 +1,94 @@
+// SPDX-License-Identifier: GPL-2.0
+
+//! Memory orderings.
+//!
+//! The semantics of these orderings follows the [`LKMM`] definitions and rules.
+//!
+//! - [`Acquire`] and [`Release`] are similar to their counterparts in the Rust memory model.
+//! - [`Full`] means "fully-ordered", that is:
+//!   - It provides ordering between all the preceding memory accesses and the annotated operation.
+//!   - It provides ordering between the annotated operation and all the following memory accesses.
+//!   - It provides ordering between all the preceding memory accesses and all the following memory
+//!     accesses.
+//!   - All the orderings are as strong as a full memory barrier (i.e. `smp_mb()`).
+//! - [`Relaxed`] is similar to the counterpart in the Rust memory model, except that dependency
+//!   orderings are also honored in [`LKMM`]. Dependency orderings are described in "DEPENDENCY
+//!   RELATIONS" in [`LKMM`]'s [`explanation`].
+//!
+//! [`LKMM`]: srctree/tools/memory-model/
+//! [`explanation`]: srctree/tools/memory-model/Documentation/explanation.txt
+
+/// The annotation type for relaxed memory ordering.
+pub struct Relaxed;
+
+/// The annotation type for acquire memory ordering.
+pub struct Acquire;
+
+/// The annotation type for release memory ordering.
+pub struct Release;
+
+/// The annotation type for fully-ordered memory ordering.
+pub struct Full;
+
+/// The trait bound for operations that only support relaxed ordering.
+pub trait RelaxedOnly {}
+
+impl RelaxedOnly for Relaxed {}
+
+/// The trait bound for operations that only support acquire or relaxed ordering.
+pub trait AcquireOrRelaxed {
+    /// Describes whether an ordering is relaxed or not.
+    const IS_RELAXED: bool = false;
+}
+
+impl AcquireOrRelaxed for Acquire {}
+
+impl AcquireOrRelaxed for Relaxed {
+    const IS_RELAXED: bool = true;
+}
+
+/// The trait bound for operations that only support release or relaxed ordering.
+pub trait ReleaseOrRelaxed {
+    /// Describes whether an ordering is relaxed or not.
+    const IS_RELAXED: bool = false;
+}
+
+impl ReleaseOrRelaxed for Release {}
+
+impl ReleaseOrRelaxed for Relaxed {
+    const IS_RELAXED: bool = true;
+}
+
+/// Describes the exact memory ordering of an `impl` of [`All`].
+pub enum OrderingDesc {
+    /// Relaxed ordering.
+    Relaxed,
+    /// Acquire ordering.
+    Acquire,
+    /// Release ordering.
+    Release,
+    /// Fully-ordered.
+    Full,
+}
+
+/// The trait bound for annotating operations that should support all orderings.
+pub trait All {
+    /// Describes the exact memory ordering.
+    const ORDER: OrderingDesc;
+}
+
+impl All for Relaxed {
+    const ORDER: OrderingDesc = OrderingDesc::Relaxed;
+}
+
+impl All for Acquire {
+    const ORDER: OrderingDesc = OrderingDesc::Acquire;
+}
+
+impl All for Release {
+    const ORDER: OrderingDesc = OrderingDesc::Release;
+}
+
+impl All for Full {
+    const ORDER: OrderingDesc = OrderingDesc::Full;
+}
--
2.47.1
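And a sketch of what the `ORDER` associated const is for: a fully
generic operation can dispatch on it at compile time (hypothetical
example, not part of this patch):

	use kernel::sync::atomic::ordering::{All, OrderingDesc};

	fn describe_ordering<O: All>(_o: O) -> &'static str {
	    // The match is on a const, so it folds to a single arm.
	    match O::ORDER {
	        OrderingDesc::Relaxed => "no ordering guarantee",
	        OrderingDesc::Acquire => "acquire semantics",
	        OrderingDesc::Release => "release semantics",
	        OrderingDesc::Full => "as strong as smp_mb()",
	    }
	}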
From: Boqun Feng
Subject: [RFC v3 04/12] rust: sync: atomic: Add generic atomics
Date: Mon, 21 Apr 2025 09:42:13 -0700
Message-ID: <20250421164221.1121805-5-boqun.feng@gmail.com>
In-Reply-To: <20250421164221.1121805-1-boqun.feng@gmail.com>
To support using LKMM atomics from Rust code, a generic `Atomic<T>` is
added. Currently `T` needs to be `Send + Copy` because these are the
straightforward usages and all basic types support this. The trait
`AllowAtomic` should only be implemented inside the atomic mod until the
generic atomic framework is mature enough (unless the implementer is a
`#[repr(transparent)]` new type).

`AtomicImpl` types are automatically `AllowAtomic`, and so far only the
basic operations load() and store() are introduced.
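Before the diff, a doctest-style sketch of how the API introduced below is
meant to be used; this is an illustration written against the methods this
patch adds (load() bounded by `AcquireOrRelaxed`, store() by
`ReleaseOrRelaxed`), not part of the patch:

	use kernel::sync::atomic::{Atomic, Acquire, Relaxed, Release};

	let x = Atomic::new(42i32);

	// load() accepts acquire or relaxed ordering annotations...
	assert_eq!(42, x.load(Acquire));
	assert_eq!(42, x.load(Relaxed));

	// ...and store() accepts release or relaxed ones.
	x.store(43, Release);
	assert_eq!(43, x.load(Relaxed));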
Signed-off-by: Boqun Feng
---
 rust/kernel/sync/atomic.rs         |   2 +
 rust/kernel/sync/atomic/generic.rs | 254 +++++++++++++++++++++++++++++
 2 files changed, 256 insertions(+)
 create mode 100644 rust/kernel/sync/atomic/generic.rs

diff --git a/rust/kernel/sync/atomic.rs b/rust/kernel/sync/atomic.rs
index 9fe5d81fc2a9..a01e44eec380 100644
--- a/rust/kernel/sync/atomic.rs
+++ b/rust/kernel/sync/atomic.rs
@@ -16,7 +16,9 @@
 //!
 //! [`LKMM`]: srctree/tools/memory-mode/
 
+pub mod generic;
 pub mod ops;
 pub mod ordering;
 
+pub use generic::Atomic;
 pub use ordering::{Acquire, Full, Relaxed, Release};
diff --git a/rust/kernel/sync/atomic/generic.rs b/rust/kernel/sync/atomic/generic.rs
new file mode 100644
index 000000000000..5d8bbaaf108e
--- /dev/null
+++ b/rust/kernel/sync/atomic/generic.rs
@@ -0,0 +1,254 @@
+// SPDX-License-Identifier: GPL-2.0
+
+//! Generic atomic primitives.
+
+use super::ops::*;
+use super::ordering::*;
+use crate::types::Opaque;
+
+/// A generic atomic variable.
+///
+/// `T` must impl [`AllowAtomic`], that is, an [`AtomicImpl`] has to be chosen.
+///
+/// # Invariants
+///
+/// Doing an atomic operation while holding a reference of [`Self`] won't cause a data race;
+/// this is guaranteed by the safety requirement of [`Self::from_ptr`] and the extra safety
+/// requirement on the usage of pointers returned by [`Self::as_ptr`].
+#[repr(transparent)]
+pub struct Atomic<T: AllowAtomic>(Opaque<T>);
+
+// SAFETY: `Atomic<T>` is safe to share among execution contexts because all accesses are atomic.
+unsafe impl<T: AllowAtomic> Sync for Atomic<T> {}
+
+/// Atomics that support basic atomic operations.
+///
+/// TODO: Currently the [`AllowAtomic`] types are restricted to basic integer types (and their
+/// transparent new types). In the future, we could extend the scope to more data types when there
+/// is a clear and meaningful usage, but for now, [`AllowAtomic`] should only be implemented
+/// inside the atomic mod for the restricted types mentioned above.
+///
+/// # Safety
+///
+/// [`Self`] must have the same size and alignment as [`Self::Repr`].
+pub unsafe trait AllowAtomic: Sized + Send + Copy {
+    /// The backing atomic implementation type.
+    type Repr: AtomicImpl;
+
+    /// Converts into a [`Self::Repr`].
+    fn into_repr(self) -> Self::Repr;
+
+    /// Converts from a [`Self::Repr`].
+    fn from_repr(repr: Self::Repr) -> Self;
+}
+
+// SAFETY: `T::Repr` is `Self` (i.e. `T`), so they have the same size and alignment.
+unsafe impl<T: AtomicImpl> AllowAtomic for T {
+    type Repr = Self;
+
+    fn into_repr(self) -> Self::Repr {
+        self
+    }
+
+    fn from_repr(repr: Self::Repr) -> Self {
+        repr
+    }
+}
+
+impl<T: AllowAtomic> Atomic<T> {
+    /// Creates a new atomic.
+    pub const fn new(v: T) -> Self {
+        Self(Opaque::new(v))
+    }
+
+    /// Creates a reference to [`Self`] from a pointer.
+    ///
+    /// # Safety
+    ///
+    /// - `ptr` has to be a valid pointer.
+    /// - `ptr` has to be valid for both reads and writes for the whole lifetime `'a`.
+    /// - For the whole lifetime of `'a`, other accesses to the object cannot cause data races
+    ///   (defined by [`LKMM`]) against atomic operations on the returned reference.
+    ///
+    /// [`LKMM`]: srctree/tools/memory-model
+    ///
+    /// # Examples
+    ///
+    /// Using [`Atomic::from_ptr()`] combined with [`Atomic::load()`] or [`Atomic::store()`] can
+    /// achieve the same functionality as `READ_ONCE()`/`smp_load_acquire()` or
+    /// `WRITE_ONCE()`/`smp_store_release()` on the C side:
+    ///
+    /// ```rust
+    /// # use kernel::types::Opaque;
+    /// use kernel::sync::atomic::{Atomic, Relaxed, Release};
+    ///
+    /// // Assume there is a C struct `Foo`.
+    /// mod cbindings {
+    ///     #[repr(C)]
+    ///     pub(crate) struct foo { pub(crate) a: i32, pub(crate) b: i32 }
+    /// }
+    ///
+    /// let tmp = Opaque::new(cbindings::foo { a: 1, b: 2 });
+    ///
+    /// // struct foo *foo_ptr = ..;
+    /// let foo_ptr = tmp.get();
+    ///
+    /// // SAFETY: `foo_ptr` is a valid pointer, and `.a` is in bounds.
+    /// let foo_a_ptr = unsafe { core::ptr::addr_of_mut!((*foo_ptr).a) };
+    ///
+    /// // a = READ_ONCE(foo_ptr->a);
+    /// //
+    /// // SAFETY: `foo_a_ptr` is a valid pointer for read, and all accesses to it are atomic, so
+    /// // no data race.
+    /// let a = unsafe { Atomic::from_ptr(foo_a_ptr) }.load(Relaxed);
+    /// # assert_eq!(a, 1);
+    ///
+    /// // smp_store_release(&foo_ptr->a, 2);
+    /// //
+    /// // SAFETY: `foo_a_ptr` is a valid pointer for write, and all accesses to it are atomic, so
+    /// // no data race.
+    /// unsafe { Atomic::from_ptr(foo_a_ptr) }.store(2, Release);
+    /// ```
+    ///
+    /// However, this should only be used when communicating with the C side or manipulating a C
+    /// struct.
+    pub unsafe fn from_ptr<'a>(ptr: *mut T) -> &'a Self
+    where
+        T: Sync,
+    {
+        // CAST: `T` is transparent to `Atomic<T>`.
+        // SAFETY: Per the function safety requirement, `ptr` is a valid pointer and the object
+        // will live long enough. It's safe to return a `&Atomic<T>` because the function safety
+        // requirement guarantees other accesses won't cause data races.
+        unsafe { &*ptr.cast::<Self>() }
+    }
+
+    /// Returns a pointer to the underlying atomic variable.
+    ///
+    /// Extra safety requirement on using the returned pointer: the operations done via the
+    /// pointer cannot cause data races defined by [`LKMM`].
+    ///
+    /// [`LKMM`]: srctree/tools/memory-model
+    pub const fn as_ptr(&self) -> *mut T {
+        self.0.get()
+    }
+
+    /// Returns a mutable reference to the underlying atomic variable.
+    ///
+    /// This is safe because the mutable reference of the atomic variable guarantees exclusive
+    /// access.
+    pub fn get_mut(&mut self) -> &mut T {
+        // SAFETY: `self.as_ptr()` is a valid pointer to `T`, and the object has already been
+        // initialized. `&mut self` guarantees the exclusive access, so it's safe to reborrow
+        // mutably.
+        unsafe { &mut *self.as_ptr() }
+    }
+}
+
+impl<T: AllowAtomic> Atomic<T>
+where
+    T::Repr: AtomicHasBasicOps,
+{
+    /// Loads the value from the atomic variable.
+    ///
+    /// # Examples
+    ///
+    /// Simple usages:
+    ///
+    /// ```rust
+    /// use kernel::sync::atomic::{Atomic, Relaxed};
+    ///
+    /// let x = Atomic::new(42i32);
+    ///
+    /// assert_eq!(42, x.load(Relaxed));
+    ///
+    /// let x = Atomic::new(42i64);
+    ///
+    /// assert_eq!(42, x.load(Relaxed));
+    /// ```
+    ///
+    /// Customized new types in [`Atomic`]:
+    ///
+    /// ```rust
+    /// use kernel::sync::atomic::{generic::AllowAtomic, Atomic, Relaxed};
+    ///
+    /// #[derive(Clone, Copy)]
+    /// #[repr(transparent)]
+    /// struct NewType(u32);
+    ///
+    /// // SAFETY: `NewType` is transparent to `u32`, which has the same size and alignment as
+    /// // `i32`.
+    /// unsafe impl AllowAtomic for NewType {
+    ///     type Repr = i32;
+    ///
+    ///     fn into_repr(self) -> Self::Repr {
+    ///         self.0 as i32
+    ///     }
+    ///
+    ///     fn from_repr(repr: Self::Repr) -> Self {
+    ///         NewType(repr as u32)
+    ///     }
+    /// }
+    ///
+    /// let n = Atomic::new(NewType(0));
+    ///
+    /// assert_eq!(0, n.load(Relaxed).0);
+    /// ```
+    #[inline(always)]
+    pub fn load<Ordering: AcquireOrRelaxed>(&self, _: Ordering) -> T {
+        let a = self.as_ptr().cast::<T::Repr>();
+
+        // SAFETY:
+        // - For calling the atomic_read*() function:
+        //   - `self.as_ptr()` is a valid pointer, and per the safety requirement of
+        //     `AllowAtomic`, a `*mut T` is a valid `*mut T::Repr`. Therefore `a` is a valid
+        //     pointer,
+        //   - per the type invariants, the following atomic operation won't cause data races.
+        // - For the extra safety requirement on the usage of pointers returned by
+        //   `self.as_ptr()`:
+        //   - atomic operations are used here.
+        let v = unsafe {
+            if Ordering::IS_RELAXED {
+                T::Repr::atomic_read(a)
+            } else {
+                T::Repr::atomic_read_acquire(a)
+            }
+        };
+
+        T::from_repr(v)
+    }
+
+    /// Stores a value to the atomic variable.
+    ///
+    /// # Examples
+    ///
+    /// ```rust
+    /// use kernel::sync::atomic::{Atomic, Relaxed};
+    ///
+    /// let x = Atomic::new(42i32);
+    ///
+    /// assert_eq!(42, x.load(Relaxed));
+    ///
+    /// x.store(43, Relaxed);
+    ///
+    /// assert_eq!(43, x.load(Relaxed));
+    /// ```
+    #[inline(always)]
+    pub fn store<Ordering: ReleaseOrRelaxed>(&self, v: T, _: Ordering) {
+        let v = T::into_repr(v);
+        let a = self.as_ptr().cast::<T::Repr>();
+
+        // SAFETY:
+        // - For calling the atomic_set*() function:
+        //   - `self.as_ptr()` is a valid pointer, and per the safety requirement of
+        //     `AllowAtomic`, a `*mut T` is a valid `*mut T::Repr`. Therefore `a` is a valid
+        //     pointer,
+        //   - per the type invariants, the following atomic operation won't cause data races.
+        // - For the extra safety requirement on the usage of pointers returned by
+        //   `self.as_ptr()`:
+        //   - atomic operations are used here.
+        unsafe {
+            if Ordering::IS_RELAXED {
+                T::Repr::atomic_set(a, v)
+            } else {
+                T::Repr::atomic_set_release(a, v)
+            }
+        };
+    }
+}
-- 
2.47.1

From nobody Sun Feb 8 07:24:28 2026
From: Boqun Feng
Subject: [RFC v3 05/12] rust: sync: atomic: Add atomic {cmp,}xchg operations
Date: Mon, 21 Apr 2025 09:42:14 -0700
Message-ID: <20250421164221.1121805-6-boqun.feng@gmail.com>
In-Reply-To: <20250421164221.1121805-1-boqun.feng@gmail.com>
xchg() and cmpxchg() are basic operations on atomics. Provide these based
on the C APIs.

Note that cmpxchg() uses a function signature similar to
compare_exchange() in Rust std: it returns a `Result`, where `Ok(old)`
means the operation succeeded and `Err(old)` means the operation failed.
With compiler optimizations and inline helpers, this should provide the
same efficient code generation as using atomic_try_cmpxchg() or
atomic_cmpxchg() correctly.

Except it doesn't! Because of commit 44fe84459faf ("locking/atomic: Fix
atomic_try_cmpxchg() semantics"), atomic_try_cmpxchg() on x86 has a
branch even if the caller doesn't care about the success of the cmpxchg
and only wants to use the old value. For example, for code like:

	// Uses the latest value regardless, same as atomic_cmpxchg() in C.
	let latest = x.cmpxchg(42, 64, Full).unwrap_or_else(|old| old);

it will still generate code:

	movl $0x40, %ecx
	movl $0x34, %eax
	lock cmpxchgl %ecx, 0x4(%rsp)
	jne 1f
2:
	...
1:	movl %eax, %ecx
	jmp 2b

One could attempt to write an x86 try_cmpxchg_exclusive() for Rust use
only: the Rust function takes a `&mut` for the old pointer, which must be
exclusive to the function, therefore it's unsafe to use some shared
pointer there. But maybe I'm missing something?
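As a usage note (an illustration against the API below, not part of the
patch): the `Err(old)` payload lets the usual cmpxchg retry loop feed the
freshly observed value straight into the next attempt, with no separate
re-load:

	use kernel::sync::atomic::{Atomic, Full, Relaxed};

	let v = Atomic::new(0i32);

	// Atomically increment only while the value is even; on failure
	// the `Err(old)` payload replaces a separate re-load.
	let mut cur = v.load(Relaxed);
	while cur % 2 == 0 {
	    match v.cmpxchg(cur, cur + 1, Full) {
	        Ok(_) => break,        // `cur` was still the value; done.
	        Err(old) => cur = old, // lost a race; retry with the fresh value.
	    }
	}
	assert_eq!(1, v.load(Relaxed));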
Signed-off-by: Boqun Feng
---
 rust/kernel/sync/atomic/generic.rs | 122 +++++++++++++++++++++++++++++
 1 file changed, 122 insertions(+)

diff --git a/rust/kernel/sync/atomic/generic.rs b/rust/kernel/sync/atomic/generic.rs
index 5d8bbaaf108e..73aacfac381b 100644
--- a/rust/kernel/sync/atomic/generic.rs
+++ b/rust/kernel/sync/atomic/generic.rs
@@ -252,3 +252,125 @@ pub fn store<Ordering: ReleaseOrRelaxed>(&self, v: T, _: Ordering) {
         };
     }
 }
+
+impl<T: AllowAtomic> Atomic<T>
+where
+    T::Repr: AtomicHasXchgOps,
+{
+    /// Atomic exchange.
+    ///
+    /// # Examples
+    ///
+    /// ```rust
+    /// use kernel::sync::atomic::{Atomic, Acquire, Relaxed};
+    ///
+    /// let x = Atomic::new(42);
+    ///
+    /// assert_eq!(42, x.xchg(52, Acquire));
+    /// assert_eq!(52, x.load(Relaxed));
+    /// ```
+    #[inline(always)]
+    pub fn xchg<Ordering: All>(&self, v: T, _: Ordering) -> T {
+        let v = T::into_repr(v);
+        let a = self.as_ptr().cast::<T::Repr>();
+
+        // SAFETY:
+        // - For calling the atomic_xchg*() function:
+        //   - `self.as_ptr()` is a valid pointer, and per the safety requirement of
+        //     `AllowAtomic`, a `*mut T` is a valid `*mut T::Repr`. Therefore `a` is a valid
+        //     pointer,
+        //   - per the type invariants, the following atomic operation won't cause data races.
+        // - For the extra safety requirement on the usage of pointers returned by
+        //   `self.as_ptr()`:
+        //   - atomic operations are used here.
+        let ret = unsafe {
+            match Ordering::ORDER {
+                OrderingDesc::Full => T::Repr::atomic_xchg(a, v),
+                OrderingDesc::Acquire => T::Repr::atomic_xchg_acquire(a, v),
+                OrderingDesc::Release => T::Repr::atomic_xchg_release(a, v),
+                OrderingDesc::Relaxed => T::Repr::atomic_xchg_relaxed(a, v),
+            }
+        };
+
+        T::from_repr(ret)
+    }
+
+    /// Atomic compare and exchange.
+    ///
+    /// Compare: The comparison is done as a byte-level comparison between the atomic variable
+    /// and the `old` value.
+    ///
+    /// Ordering: A successful cmpxchg provides the ordering that the `Ordering` type parameter
+    /// indicates; a failed one doesn't provide any ordering, and the read part of a failed
+    /// cmpxchg should be treated as a relaxed read.
+    ///
+    /// Returns `Ok(value)` if the cmpxchg succeeds, and `value` is guaranteed to be equal to
+    /// `old`; otherwise returns `Err(value)`, and `value` is the value of the atomic variable
+    /// when the cmpxchg was happening.
+    ///
+    /// # Examples
+    ///
+    /// ```rust
+    /// use kernel::sync::atomic::{Atomic, Full, Relaxed};
+    ///
+    /// let x = Atomic::new(42);
+    ///
+    /// // Checks whether cmpxchg succeeded.
+    /// let success = x.cmpxchg(52, 64, Relaxed).is_ok();
+    /// # assert!(!success);
+    ///
+    /// // Checks whether cmpxchg failed.
+    /// let failure = x.cmpxchg(52, 64, Relaxed).is_err();
+    /// # assert!(failure);
+    ///
+    /// // Uses the old value if failed, probably to retry the cmpxchg.
+    /// match x.cmpxchg(52, 64, Relaxed) {
+    ///     Ok(_) => { },
+    ///     Err(old) => {
+    ///         // do something with `old`.
+    ///         # assert_eq!(old, 42);
+    ///     }
+    /// }
+    ///
+    /// // Uses the latest value regardless, same as atomic_cmpxchg() in C.
+    /// let latest = x.cmpxchg(42, 64, Full).unwrap_or_else(|old| old);
+    /// # assert_eq!(42, latest);
+    /// assert_eq!(64, x.load(Relaxed));
+    /// ```
+    #[inline(always)]
+    pub fn cmpxchg<Ordering: All>(&self, mut old: T, new: T, o: Ordering) -> Result<T, T> {
+        if self.try_cmpxchg(&mut old, new, o) {
+            Ok(old)
+        } else {
+            Err(old)
+        }
+    }
+
+    /// Atomic compare and exchange, returning whether the operation succeeds.
+    ///
+    /// The "Compare" and "Ordering" parts are the same as in [`Atomic::cmpxchg()`].
+    ///
+    /// Returns `true` if the cmpxchg succeeds; otherwise returns `false`, with `old` updated to
+    /// the value of the atomic variable when the cmpxchg was happening.
+    #[inline(always)]
+    fn try_cmpxchg<Ordering: All>(&self, old: &mut T, new: T, _: Ordering) -> bool {
+        let old = (old as *mut T).cast::<T::Repr>();
+        let new = T::into_repr(new);
+        let a = self.0.get().cast::<T::Repr>();
+
+        // SAFETY:
+        // - For calling the atomic_try_cmpxchg*() function:
+        //   - `self.as_ptr()` is a valid pointer, and per the safety requirement of
+        //     `AllowAtomic`, a `*mut T` is a valid `*mut T::Repr`. Therefore `a` is a valid
+        //     pointer,
+        //   - per the type invariants, the following atomic operation won't cause data races.
+        //   - `old` is a valid pointer to write to because it comes from a mutable reference.
+        // - For the extra safety requirement on the usage of pointers returned by
+        //   `self.as_ptr()`:
+        //   - atomic operations are used here.
+        unsafe {
+            match Ordering::ORDER {
+                OrderingDesc::Full => T::Repr::atomic_try_cmpxchg(a, old, new),
+                OrderingDesc::Acquire => T::Repr::atomic_try_cmpxchg_acquire(a, old, new),
+                OrderingDesc::Release => T::Repr::atomic_try_cmpxchg_release(a, old, new),
+                OrderingDesc::Relaxed => T::Repr::atomic_try_cmpxchg_relaxed(a, old, new),
+            }
+        }
+    }
+}
-- 
2.47.1

From nobody Sun Feb 8 07:24:28 2026
From: Boqun Feng
Subject: [RFC v3 06/12] rust: sync: atomic: Add the framework of arithmetic operations
Date: Mon, 21 Apr 2025 09:42:15 -0700
Message-ID: <20250421164221.1121805-7-boqun.feng@gmail.com>
In-Reply-To: <20250421164221.1121805-1-boqun.feng@gmail.com>
One important set of atomic operations is the arithmetic operations, i.e.
add(), sub(), fetch_add(), add_return(), etc. However, it may not make
sense for all `AllowAtomic` types to have arithmetic operations; for
example, a `Foo(u32)` may not have a reasonable add() or sub(). Plus,
subword types (`u8` and `u16`) currently don't have atomic arithmetic
operations even on the C side and might not have them in the future in
Rust (because they are usually suboptimal on a few architectures).
Therefore add a subtrait of `AllowAtomic` describing which types have and
can do atomic arithmetic operations.

A few things about this `AllowAtomicArithmetic` trait:

* It has an associated type `Delta` instead of using `AllowAtomic::Repr`
  because a `Bar(u32)` (whose `Repr` is `i32`) may not want an
  `add(&self, i32)`, but an `add(&self, u32)` (sketched below).

* `AtomicImpl` types already implement an `AtomicHasArithmeticOps` trait,
  so add a blanket implementation for them. In the future, `i8` and `i16`
  may impl `AtomicImpl` but not `AtomicHasArithmeticOps` if arithmetic
  operations are not available.

Only add() and fetch_add() are added. The rest will be added in the
future.
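A sketch of that hypothetical `Bar(u32)` case, written against the traits
this patch adds; the type and its conversions are illustrative, not part
of the patch:

	use kernel::sync::atomic::generic::{AllowAtomic, AllowAtomicArithmetic};

	#[derive(Clone, Copy)]
	#[repr(transparent)]
	struct Bar(u32);

	// SAFETY: `Bar` is transparent to `u32`, which has the same size
	// and alignment as `i32`.
	unsafe impl AllowAtomic for Bar {
	    type Repr = i32;

	    fn into_repr(self) -> Self::Repr {
	        self.0 as i32
	    }

	    fn from_repr(repr: Self::Repr) -> Self {
	        Bar(repr as u32)
	    }
	}

	impl AllowAtomicArithmetic for Bar {
	    // Deltas are expressed in `u32`, not in the `i32` repr.
	    type Delta = u32;

	    fn delta_into_repr(d: Self::Delta) -> Self::Repr {
	        d as i32
	    }
	}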
Signed-off-by: Boqun Feng
---
 rust/kernel/sync/atomic/generic.rs | 102 +++++++++++++++++++++++++++++
 1 file changed, 102 insertions(+)

diff --git a/rust/kernel/sync/atomic/generic.rs b/rust/kernel/sync/atomic/generic.rs
index 73aacfac381b..2de4cdbce58e 100644
--- a/rust/kernel/sync/atomic/generic.rs
+++ b/rust/kernel/sync/atomic/generic.rs
@@ -3,6 +3,7 @@
 //! Generic atomic primitives.
 
 use super::ops::*;
+use super::ordering;
 use super::ordering::*;
 use crate::types::Opaque;
 
@@ -55,6 +56,23 @@ fn from_repr(repr: Self::Repr) -> Self {
     }
 }
 
+/// Atomics that allow arithmetic operations with an integer type.
+pub trait AllowAtomicArithmetic: AllowAtomic {
+    /// The delta type for arithmetic operations.
+    type Delta;
+
+    /// Converts [`Self::Delta`] into the representation of the atomic type.
+    fn delta_into_repr(d: Self::Delta) -> Self::Repr;
+}
+
+impl<T: AtomicImpl> AllowAtomicArithmetic for T {
+    type Delta = Self;
+
+    fn delta_into_repr(d: Self::Delta) -> Self::Repr {
+        d
+    }
+}
+
 impl<T: AllowAtomic> Atomic<T> {
     /// Creates a new atomic.
     pub const fn new(v: T) -> Self {
@@ -374,3 +392,87 @@ fn try_cmpxchg<Ordering: All>(&self, old: &mut T, new: T, _: Ordering) -> bool {
         }
     }
 }
+
+impl<T: AllowAtomicArithmetic> Atomic<T>
+where
+    T::Repr: AtomicHasArithmeticOps,
+{
+    /// Atomic add.
+    ///
+    /// The addition is a wrapping addition.
+    ///
+    /// # Examples
+    ///
+    /// ```rust
+    /// use kernel::sync::atomic::{Atomic, Relaxed};
+    ///
+    /// let x = Atomic::new(42);
+    ///
+    /// assert_eq!(42, x.load(Relaxed));
+    ///
+    /// x.add(12, Relaxed);
+    ///
+    /// assert_eq!(54, x.load(Relaxed));
+    /// ```
+    #[inline(always)]
+    pub fn add<Ordering: ordering::RelaxedOnly>(&self, v: T::Delta, _: Ordering) {
+        let v = T::delta_into_repr(v);
+        let a = self.as_ptr().cast::<T::Repr>();
+
+        // SAFETY:
+        // - For calling the atomic_add() function:
+        //   - `self.as_ptr()` is a valid pointer, and per the safety requirement of
+        //     `AllowAtomic`, a `*mut T` is a valid `*mut T::Repr`. Therefore `a` is a valid
+        //     pointer,
+        //   - per the type invariants, the following atomic operation won't cause data races.
+        // - For the extra safety requirement on the usage of pointers returned by
+        //   `self.as_ptr()`:
+        //   - atomic operations are used here.
+        unsafe {
+            T::Repr::atomic_add(a, v);
+        }
+    }
+
+    /// Atomic fetch and add.
+    ///
+    /// The addition is a wrapping addition.
+    ///
+    /// # Examples
+    ///
+    /// ```rust
+    /// use kernel::sync::atomic::{Atomic, Acquire, Full, Relaxed};
+    ///
+    /// let x = Atomic::new(42);
+    ///
+    /// assert_eq!(42, x.load(Relaxed));
+    ///
+    /// assert_eq!(54, { x.fetch_add(12, Acquire); x.load(Relaxed) });
+    ///
+    /// let x = Atomic::new(42);
+    ///
+    /// assert_eq!(42, x.load(Relaxed));
+    ///
+    /// assert_eq!(54, { x.fetch_add(12, Full); x.load(Relaxed) });
+    /// ```
+    #[inline(always)]
+    pub fn fetch_add<Ordering: All>(&self, v: T::Delta, _: Ordering) -> T {
+        let v = T::delta_into_repr(v);
+        let a = self.as_ptr().cast::<T::Repr>();
+
+        // SAFETY:
+        // - For calling the atomic_fetch_add*() function:
+        //   - `self.as_ptr()` is a valid pointer, and per the safety requirement of
+        //     `AllowAtomic`, a `*mut T` is a valid `*mut T::Repr`. Therefore `a` is a valid
+        //     pointer,
+        //   - per the type invariants, the following atomic operation won't cause data races.
+        // - For the extra safety requirement on the usage of pointers returned by
+        //   `self.as_ptr()`:
+        //   - atomic operations are used here.
+        let ret = unsafe {
+            match Ordering::ORDER {
+                ordering::OrderingDesc::Full => T::Repr::atomic_fetch_add(a, v),
+                ordering::OrderingDesc::Acquire => T::Repr::atomic_fetch_add_acquire(a, v),
+                ordering::OrderingDesc::Release => T::Repr::atomic_fetch_add_release(a, v),
+                ordering::OrderingDesc::Relaxed => T::Repr::atomic_fetch_add_relaxed(a, v),
+            }
+        };
+
+        T::from_repr(ret)
+    }
+}
-- 
2.47.1

From nobody Sun Feb 8 07:24:28 2026
From: Boqun Feng
Subject: [RFC v3 07/12] rust: sync: atomic: Add Atomic<u64/u32>
Date: Mon, 21 Apr 2025 09:42:16 -0700
Message-ID: <20250421164221.1121805-8-boqun.feng@gmail.com>
In-Reply-To: <20250421164221.1121805-1-boqun.feng@gmail.com>
Add generic atomic support for basic unsigned types that have an
`AtomicImpl` with the same size and alignment.
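One detail worth spelling out: the `as` casts in into_repr()/from_repr()
below are lossless, because integer `as` casts between same-width types in
Rust are bit-preserving two's-complement reinterpretations, so the
unsigned value round-trips through the signed repr intact. A standalone
check (plain Rust, outside the kernel):

	fn main() {
	    let v: u32 = 0xDEAD_BEEF;   // high bit set
	    let repr = v as i32;        // negative as `i32`, same bits
	    assert!(repr < 0);
	    assert_eq!(v, repr as u32); // round-trips exactly
	}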
Signed-off-by: Boqun Feng
---
 rust/kernel/sync/atomic.rs | 80 ++++++++++++++++++++++++++++++++++++++
 1 file changed, 80 insertions(+)

diff --git a/rust/kernel/sync/atomic.rs b/rust/kernel/sync/atomic.rs
index a01e44eec380..d197b476e3bc 100644
--- a/rust/kernel/sync/atomic.rs
+++ b/rust/kernel/sync/atomic.rs
@@ -22,3 +22,83 @@
 
 pub use generic::Atomic;
 pub use ordering::{Acquire, Full, Relaxed, Release};
+
+/// ```rust
+/// use kernel::sync::atomic::{Atomic, Relaxed};
+///
+/// let x = Atomic::new(42u64);
+///
+/// assert_eq!(42, x.load(Relaxed));
+/// ```
+// SAFETY: `u64` and `i64` have the same size and alignment.
+unsafe impl generic::AllowAtomic for u64 {
+    type Repr = i64;
+
+    fn into_repr(self) -> Self::Repr {
+        self as _
+    }
+
+    fn from_repr(repr: Self::Repr) -> Self {
+        repr as _
+    }
+}
+
+/// ```rust
+/// use kernel::sync::atomic::{Atomic, Full, Relaxed};
+///
+/// let x = Atomic::new(42u64);
+///
+/// assert_eq!(42, x.fetch_add(12, Full));
+/// assert_eq!(54, x.load(Relaxed));
+///
+/// x.add(13, Relaxed);
+///
+/// assert_eq!(67, x.load(Relaxed));
+/// ```
+impl generic::AllowAtomicArithmetic for u64 {
+    type Delta = u64;
+
+    fn delta_into_repr(d: Self::Delta) -> Self::Repr {
+        d as _
+    }
+}
+
+/// ```rust
+/// use kernel::sync::atomic::{Atomic, Relaxed};
+///
+/// let x = Atomic::new(42u32);
+///
+/// assert_eq!(42, x.load(Relaxed));
+/// ```
+// SAFETY: `u32` and `i32` have the same size and alignment.
+unsafe impl generic::AllowAtomic for u32 {
+    type Repr = i32;
+
+    fn into_repr(self) -> Self::Repr {
+        self as _
+    }
+
+    fn from_repr(repr: Self::Repr) -> Self {
+        repr as _
+    }
+}
+
+/// ```rust
+/// use kernel::sync::atomic::{Atomic, Full, Relaxed};
+///
+/// let x = Atomic::new(42u32);
+///
+/// assert_eq!(42, x.fetch_add(12, Full));
+/// assert_eq!(54, x.load(Relaxed));
+///
+/// x.add(13, Relaxed);
+///
+/// assert_eq!(67, x.load(Relaxed));
+/// ```
+impl generic::AllowAtomicArithmetic for u32 {
+    type Delta = u32;
+
+    fn delta_into_repr(d: Self::Delta) -> Self::Repr {
+        d as _
+    }
+}
-- 
2.47.1

From nobody Sun Feb 8 07:24:28 2026
From: Boqun Feng
Subject: [RFC v3 08/12] rust: sync: atomic: Add Atomic<{usize,isize}>
Date: Mon, 21 Apr 2025 09:42:17 -0700
Message-ID: <20250421164221.1121805-9-boqun.feng@gmail.com>
In-Reply-To: <20250421164221.1121805-1-boqun.feng@gmail.com>
From: Boqun Feng
To: rust-for-linux@vger.kernel.org, rcu@vger.kernel.org, linux-kernel@vger.kernel.org, linux-arch@vger.kernel.org, llvm@lists.linux.dev, lkmm@lists.linux.dev
Subject: [RFC v3 08/12] rust: sync: atomic: Add Atomic<{usize,isize}>
Date: Mon, 21 Apr 2025 09:42:17 -0700
Message-ID: <20250421164221.1121805-9-boqun.feng@gmail.com>
In-Reply-To: <20250421164221.1121805-1-boqun.feng@gmail.com>

Add generic atomic support for `usize` and `isize`. Note that instead
of mapping directly to `atomic_long_t`, the representation type
(`AllowAtomic::Repr`) is selected based on CONFIG_64BIT. This avoids
the need for `atomic_long_*` helpers, which saves kernel binary size
when inline helpers are not available.

Signed-off-by: Boqun Feng
---
 rust/kernel/sync/atomic.rs | 71 ++++++++++++++++++++++++++++++++++++++
 1 file changed, 71 insertions(+)

diff --git a/rust/kernel/sync/atomic.rs b/rust/kernel/sync/atomic.rs
index d197b476e3bc..6008d65594a2 100644
--- a/rust/kernel/sync/atomic.rs
+++ b/rust/kernel/sync/atomic.rs
@@ -102,3 +102,74 @@ fn delta_into_repr(d: Self::Delta) -> Self::Repr {
         d as _
     }
 }
+
+// SAFETY: `usize` has the same size and alignment as `i64` on 64-bit and as `i32` on 32-bit.
+unsafe impl generic::AllowAtomic for usize {
+    #[cfg(CONFIG_64BIT)]
+    type Repr = i64;
+    #[cfg(not(CONFIG_64BIT))]
+    type Repr = i32;
+
+    fn into_repr(self) -> Self::Repr {
+        self as _
+    }
+
+    fn from_repr(repr: Self::Repr) -> Self {
+        repr as _
+    }
+}
+
+/// ```rust
+/// use kernel::sync::atomic::{Atomic, Full, Relaxed};
+///
+/// let x = Atomic::new(42usize);
+///
+/// assert_eq!(42, x.fetch_add(12, Full));
+/// assert_eq!(54, x.load(Relaxed));
+///
+/// x.add(13, Relaxed);
+///
+/// assert_eq!(67, x.load(Relaxed));
+/// ```
+impl generic::AllowAtomicArithmetic for usize {
+    type Delta = usize;
+
+    fn delta_into_repr(d: Self::Delta) -> Self::Repr {
+        d as _
+    }
+}
+
+// SAFETY: `isize` has the same size and alignment as `i64` on 64-bit and as `i32` on 32-bit.
+unsafe impl generic::AllowAtomic for isize {
+    #[cfg(CONFIG_64BIT)]
+    type Repr = i64;
+    #[cfg(not(CONFIG_64BIT))]
+    type Repr = i32;
+
+    fn into_repr(self) -> Self::Repr {
+        self as _
+    }
+
+    fn from_repr(repr: Self::Repr) -> Self {
+        repr as _
+    }
+}
+
+/// ```rust
+/// use kernel::sync::atomic::{Atomic, Full, Relaxed};
+///
+/// let x = Atomic::new(42isize);
+///
+/// assert_eq!(42, x.fetch_add(12, Full));
+/// assert_eq!(54, x.load(Relaxed));
+///
+/// x.add(13, Relaxed);
+///
+/// assert_eq!(67, x.load(Relaxed));
+/// ```
+impl generic::AllowAtomicArithmetic for isize {
+    type Delta = isize;
+
+    fn delta_into_repr(d: Self::Delta) -> Self::Repr {
+        d as _
+    }
+}
--
2.47.1
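
The SAFETY comments in this patch reduce to layout facts the compiler can
check. As a hedged illustration (userspace Rust, with `target_pointer_width`
standing in for CONFIG_64BIT; these asserts are ours, not part of the patch):

    // Illustrative only: the layout properties the SAFETY comments rely on.
    #[cfg(target_pointer_width = "64")]
    const _: () = assert!(
        core::mem::size_of::<usize>() == core::mem::size_of::<i64>()
            && core::mem::align_of::<usize>() == core::mem::align_of::<i64>()
    );
    #[cfg(target_pointer_width = "32")]
    const _: () = assert!(
        core::mem::size_of::<usize>() == core::mem::size_of::<i32>()
            && core::mem::align_of::<usize>() == core::mem::align_of::<i32>()
    );

If either property were violated on some target, the build would fail
instead of producing a miscompiled atomic.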

From: Boqun Feng
To: rust-for-linux@vger.kernel.org, rcu@vger.kernel.org, linux-kernel@vger.kernel.org, linux-arch@vger.kernel.org, llvm@lists.linux.dev, lkmm@lists.linux.dev
Subject: [RFC v3 09/12] rust: sync: atomic: Add Atomic<*mut T>
Date: Mon, 21 Apr 2025 09:42:18 -0700
Message-ID: <20250421164221.1121805-10-boqun.feng@gmail.com>
In-Reply-To: <20250421164221.1121805-1-boqun.feng@gmail.com>

Add atomic support for raw pointer values. As with `isize` and `usize`,
the representation type is selected based on CONFIG_64BIT.

`*mut T` is not `Send`, however `Atomic<*mut T>` definitely needs to be
`Sync`, and that's the whole point of atomics: being able to have
multiple shared references in different threads so that they can
synchronize with each other. As a result, a pointer value will be
transferred from one thread to another via `Atomic<*mut T>`:

    x.store(p1, Relaxed);
    let p = x.load(Relaxed);

This means a raw pointer value (`*mut T`) needs to be able to cross
thread boundaries, which is essentially `Send`.

To reflect this in the type system, and based on the fact that pointer
values can be transferred safely (only using them to dereference is
unsafe), as suggested by Alice, extend the `AllowAtomic` trait to
include a customized `Send` semantic; that is: an `impl AllowAtomic`
has to be safe to transfer across thread boundaries.

Suggested-by: Alice Ryhl
Signed-off-by: Boqun Feng
---
 rust/kernel/sync/atomic.rs         | 24 ++++++++++++++++++++++++
 rust/kernel/sync/atomic/generic.rs | 16 +++++++++++++---
 2 files changed, 37 insertions(+), 3 deletions(-)

diff --git a/rust/kernel/sync/atomic.rs b/rust/kernel/sync/atomic.rs
index 6008d65594a2..ffec46e50a06 100644
--- a/rust/kernel/sync/atomic.rs
+++ b/rust/kernel/sync/atomic.rs
@@ -173,3 +173,27 @@ fn delta_into_repr(d: Self::Delta) -> Self::Repr {
         d as _
     }
 }
+
+/// ```rust
+/// use kernel::sync::atomic::{Atomic, Relaxed};
+///
+/// let x = Atomic::new(core::ptr::null_mut::<i32>());
+///
+/// assert!(x.load(Relaxed).is_null());
+/// ```
+// SAFETY: A `*mut T` has the same size and alignment as `i64` on 64-bit and as `i32` on 32-bit.
+// And it's safe to transfer the ownership of a pointer value to another thread.
+unsafe impl<T> generic::AllowAtomic for *mut T {
+    #[cfg(CONFIG_64BIT)]
+    type Repr = i64;
+    #[cfg(not(CONFIG_64BIT))]
+    type Repr = i32;
+
+    fn into_repr(self) -> Self::Repr {
+        self as _
+    }
+
+    fn from_repr(repr: Self::Repr) -> Self {
+        repr as _
+    }
+}

diff --git a/rust/kernel/sync/atomic/generic.rs b/rust/kernel/sync/atomic/generic.rs
index 2de4cdbce58e..44cb6378367b 100644
--- a/rust/kernel/sync/atomic/generic.rs
+++ b/rust/kernel/sync/atomic/generic.rs
@@ -19,6 +19,10 @@
 #[repr(transparent)]
 pub struct Atomic<T: AllowAtomic>(Opaque<T>);

+// SAFETY: `Atomic<T>` is safe to send between execution contexts, because `T` is `AllowAtomic`
+// and `AllowAtomic`'s safety requirement guarantees that.
+unsafe impl<T: AllowAtomic> Send for Atomic<T> {}
+
 // SAFETY: `Atomic<T>` is safe to share among execution contexts because all accesses are atomic.
 unsafe impl<T: AllowAtomic> Sync for Atomic<T> {}

@@ -31,8 +35,13 @@ unsafe impl<T: AllowAtomic> Sync for Atomic<T> {}
 ///
 /// # Safety
 ///
-/// [`Self`] must have the same size and alignment as [`Self::Repr`].
-pub unsafe trait AllowAtomic: Sized + Send + Copy {
+/// - [`Self`] must have the same size and alignment as [`Self::Repr`].
+/// - It must be safe to transfer ownership of [`Self`] from one execution context to another;
+///   in other words, [`Self`] has to behave like [`Send`]. Because `*mut T` is not [`Send`]
+///   yet is a basic type that needs to support atomic operations, this safety requirement is
+///   part of the [`AllowAtomic`] trait instead of a [`Send`] bound. It is automatically
+///   satisfied if the type is [`Send`].
+pub unsafe trait AllowAtomic: Sized + Copy {
     /// The backing atomic implementation type.
     type Repr: AtomicImpl;

@@ -43,7 +52,8 @@ pub unsafe trait AllowAtomic: Sized + Send + Copy {
     fn from_repr(repr: Self::Repr) -> Self;
 }

-// SAFETY: `T::Repr` is `Self` (i.e. `T`), so they have the same size and alignment.
+// SAFETY: `T::Repr` is `Self` (i.e. `T`), so they have the same size and alignment. And all
+// `AtomicImpl` types are `Send`.
 unsafe impl<T: AtomicImpl> AllowAtomic for T {
     type Repr = Self;

--
2.47.1
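
To make the ownership-transfer argument above concrete, here is a hedged
userspace sketch of the same pattern using std's `AtomicPtr` (standard Rust
only, not the kernel API; the names are ours): the pointer value crosses the
thread boundary through the atomic, while dereferencing it still needs its
own unsafe justification.

    use std::sync::atomic::{AtomicPtr, Ordering};

    static SLOT: AtomicPtr<i32> = AtomicPtr::new(std::ptr::null_mut());

    fn main() {
        // Publish an owned allocation through the atomic pointer.
        SLOT.store(Box::into_raw(Box::new(42)), Ordering::Release);

        let t = std::thread::spawn(|| {
            // The pointer *value* crossed the thread boundary via the atomic.
            let p = SLOT.swap(std::ptr::null_mut(), Ordering::Acquire);
            if !p.is_null() {
                // SAFETY: the swap took sole ownership of `p`; no other
                // thread touches it afterwards.
                assert_eq!(*unsafe { Box::from_raw(p) }, 42);
            }
        });
        t.join().unwrap();
    }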

From: Boqun Feng
To: rust-for-linux@vger.kernel.org, rcu@vger.kernel.org, linux-kernel@vger.kernel.org, linux-arch@vger.kernel.org, llvm@lists.linux.dev, lkmm@lists.linux.dev
Subject: [RFC v3 10/12] rust: sync: atomic: Add arithmetic ops for Atomic<*mut T>
Date: Mon, 21 Apr 2025 09:42:19 -0700
Message-ID: <20250421164221.1121805-11-boqun.feng@gmail.com>
In-Reply-To: <20250421164221.1121805-1-boqun.feng@gmail.com>

(This is more of an RFC.)

Add arithmetic operations support for `Atomic<*mut T>`. Currently the
semantics of an arithmetic atomic operation are those of pointer
arithmetic; that is, e.g. `Atomic<*mut u64>::add(1)` adds 8
(`size_of::<u64>()`) to the pointer value.

In the Rust std library, there are two sets of pointer arithmetic for
`AtomicPtr`:

* ptr_add() and ptr_sub(), which behave the same as
  `Atomic<*mut T>::add()`: pointer arithmetic.

* byte_add() and byte_sub(), which use the input as a byte offset to
  change the pointer value, e.g. byte_add(1) adds 1 to the pointer
  value.

We can either take the approach in the current patch and add byte_add()
later on if needed, or start with the ptr_add()/byte_add() naming now.

Signed-off-by: Boqun Feng
---
 rust/kernel/sync/atomic.rs | 29 +++++++++++++++++++++++++++++
 1 file changed, 29 insertions(+)

diff --git a/rust/kernel/sync/atomic.rs b/rust/kernel/sync/atomic.rs
index ffec46e50a06..f2dd72c531fd 100644
--- a/rust/kernel/sync/atomic.rs
+++ b/rust/kernel/sync/atomic.rs
@@ -197,3 +197,32 @@ fn from_repr(repr: Self::Repr) -> Self {
         repr as _
     }
 }
+
+/// ```rust
+/// use kernel::sync::atomic::{Atomic, Relaxed};
+///
+/// let s: &mut [i32] = &mut [1, 3, 2, 4];
+///
+/// let x = Atomic::new(s.as_mut_ptr());
+///
+/// x.add(1, Relaxed);
+///
+/// let ptr = x.fetch_add(1, Relaxed); // points to the 2nd element.
+/// let ptr2 = x.load(Relaxed); // points to the 3rd element.
+///
+/// // SAFETY: `ptr` and `ptr2` are valid pointers to the 2nd and 3rd elements of `s` with write
+/// // provenance, and no other thread is accessing these elements.
+/// unsafe { core::ptr::swap(ptr, ptr2); }
+///
+/// assert_eq!(s, &mut [1, 2, 3, 4]);
+/// ```
+impl<T> generic::AllowAtomicArithmetic for *mut T {
+    type Delta = isize;
+
+    /// The delta is a number of elements, so it is scaled by the element size.
+    fn delta_into_repr(d: Self::Delta) -> Self::Repr {
+        // Since atomic arithmetic operations are wrapping, a wrapping_mul() here suffices even
+        // if overflow may happen.
+        d.wrapping_mul(core::mem::size_of::<T>() as _) as _
+    }
+}
--
2.47.1
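
A tiny standalone sketch of the scaling rule implemented above (the function
name is ours, not from the patch): a delta of n elements becomes
n * size_of::<T>() at the representation level, and negative deltas scale the
same way.

    fn scale_delta<T>(d: isize) -> isize {
        // Same rule as delta_into_repr() above: wrapping multiply by the
        // element size, so a pointer "add 1" advances one whole element.
        d.wrapping_mul(core::mem::size_of::<T>() as isize)
    }

    fn main() {
        assert_eq!(scale_delta::<u64>(1), 8);   // one u64 element is 8 bytes
        assert_eq!(scale_delta::<u8>(5), 5);    // for u8, elements are bytes
        assert_eq!(scale_delta::<u32>(-2), -8); // negative deltas scale too
    }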

From: Boqun Feng
To: rust-for-linux@vger.kernel.org, rcu@vger.kernel.org, linux-kernel@vger.kernel.org, linux-arch@vger.kernel.org, llvm@lists.linux.dev, lkmm@lists.linux.dev
Subject: [RFC v3 11/12] rust: sync: Add memory barriers
Date: Mon, 21 Apr 2025 09:42:20 -0700
Message-ID: <20250421164221.1121805-12-boqun.feng@gmail.com>
In-Reply-To: <20250421164221.1121805-1-boqun.feng@gmail.com>

Memory barriers are building blocks for concurrent code, hence provide
a minimal set of them.

The compiler barrier, barrier(), is implemented in inline asm instead
of using core::sync::atomic::compiler_fence() because the memory models
differ: the kernel's atomics are implemented in inline asm, therefore
the compiler barrier should be implemented in inline asm as well.

Signed-off-by: Boqun Feng
---
 rust/helpers/barrier.c      | 18 ++++++++++
 rust/helpers/helpers.c      |  1 +
 rust/kernel/sync.rs         |  1 +
 rust/kernel/sync/barrier.rs | 67 +++++++++++++++++++++++++++++++++++++
 4 files changed, 87 insertions(+)
 create mode 100644 rust/helpers/barrier.c
 create mode 100644 rust/kernel/sync/barrier.rs

diff --git a/rust/helpers/barrier.c b/rust/helpers/barrier.c
new file mode 100644
index 000000000000..cdf28ce8e511
--- /dev/null
+++ b/rust/helpers/barrier.c
@@ -0,0 +1,18 @@
+// SPDX-License-Identifier: GPL-2.0
+
+#include <asm/barrier.h>
+
+void rust_helper_smp_mb(void)
+{
+	smp_mb();
+}
+
+void rust_helper_smp_wmb(void)
+{
+	smp_wmb();
+}
+
+void rust_helper_smp_rmb(void)
+{
+	smp_rmb();
+}
diff --git a/rust/helpers/helpers.c b/rust/helpers/helpers.c
index b20ee7cef74d..1183150ebdc6 100644
--- a/rust/helpers/helpers.c
+++ b/rust/helpers/helpers.c
@@ -8,6 +8,7 @@
  */

 #include "atomic.c"
+#include "barrier.c"
 #include "blk.c"
 #include "bug.c"
 #include "build_assert.c"
diff --git a/rust/kernel/sync.rs b/rust/kernel/sync.rs
index b620027e0641..c7c0e552bafe 100644
--- a/rust/kernel/sync.rs
+++ b/rust/kernel/sync.rs
@@ -11,6 +11,7 @@

 mod arc;
 pub mod atomic;
+pub mod barrier;
 mod condvar;
 pub mod lock;
 mod locked_by;
diff --git a/rust/kernel/sync/barrier.rs b/rust/kernel/sync/barrier.rs
new file mode 100644
index 000000000000..277aa09747bf
--- /dev/null
+++ b/rust/kernel/sync/barrier.rs
@@ -0,0 +1,67 @@
+// SPDX-License-Identifier: GPL-2.0
+
+//! Memory barriers.
+//!
+//! These primitives have the same semantics as their C counterparts; the precise definitions of
+//! the semantics can be found at [`LKMM`].
+//!
+//! [`LKMM`]: srctree/tools/memory-model/
+
+/// A compiler barrier.
+///
+/// An explicit compiler barrier function that prevents the compiler from moving the memory
+/// accesses on either side of it to the other side.
+pub fn barrier() {
+    // By default, Rust inline asm blocks are treated as being able to access any memory or
+    // flags, hence they suffice as a compiler barrier.
+    //
+    // SAFETY: An empty asm block should be safe.
+    unsafe {
+        core::arch::asm!("");
+    }
+}
+
+/// A full memory barrier.
+///
+/// A barrier function that prevents both the compiler and the CPU from moving the memory
+/// accesses on either side of it to the other side.
+pub fn smp_mb() {
+    if cfg!(CONFIG_SMP) {
+        // SAFETY: `smp_mb()` is safe to call.
+        unsafe {
+            bindings::smp_mb();
+        }
+    } else {
+        barrier();
+    }
+}
+
+/// A write-write memory barrier.
+///
+/// A barrier function that prevents both the compiler and the CPU from moving the memory write
+/// accesses on either side of it to the other side.
+pub fn smp_wmb() {
+    if cfg!(CONFIG_SMP) {
+        // SAFETY: `smp_wmb()` is safe to call.
+        unsafe {
+            bindings::smp_wmb();
+        }
+    } else {
+        barrier();
+    }
+}
+
+/// A read-read memory barrier.
+///
+/// A barrier function that prevents both the compiler and the CPU from moving the memory read
+/// accesses on either side of it to the other side.
+pub fn smp_rmb() {
+    if cfg!(CONFIG_SMP) {
+        // SAFETY: `smp_rmb()` is safe to call.
+        unsafe {
+            bindings::smp_rmb();
+        }
+    } else {
+        barrier();
+    }
+}
--
2.47.1
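
As a usage illustration, the classic message-passing pattern can be written
with the API this patch adds. This is a sketch under the assumption that
`Atomic<i32>` statics like the ones named here are available (the statics
and function names are ours, not from the patch): the writer's smp_wmb()
pairs with the reader's smp_rmb(), so a reader that observes READY == 1 also
observes DATA == 42.

    use kernel::sync::atomic::{Atomic, Relaxed};
    use kernel::sync::barrier::{smp_rmb, smp_wmb};

    static DATA: Atomic<i32> = Atomic::new(0);
    static READY: Atomic<i32> = Atomic::new(0);

    fn writer() {
        DATA.store(42, Relaxed);
        smp_wmb(); // orders the DATA store before the READY store.
        READY.store(1, Relaxed);
    }

    fn reader() {
        if READY.load(Relaxed) == 1 {
            smp_rmb(); // pairs with the smp_wmb() in writer().
            assert_eq!(DATA.load(Relaxed), 42);
        }
    }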

From: Boqun Feng
To: rust-for-linux@vger.kernel.org, rcu@vger.kernel.org, linux-kernel@vger.kernel.org, linux-arch@vger.kernel.org, llvm@lists.linux.dev, lkmm@lists.linux.dev
Subject: [RFC v3 12/12] rust: sync: rcu: Add RCU protected pointer
Date: Mon, 21 Apr 2025 09:42:21 -0700
Message-ID: <20250421164221.1121805-13-boqun.feng@gmail.com>
In-Reply-To: <20250421164221.1121805-1-boqun.feng@gmail.com>

An RCU protected pointer is an atomic pointer that can be loaded and
dereferenced by multiple RCU readers, but only one updater/writer can
change the value (usually following a read-copy-update pattern). This
is useful for read-mostly data.

The rationale of this patch is to provide a proof of concept on how RCU
should be exposed to the Rust world, and it also serves as an example
for atomic usage. Similar mechanisms like ArcSwap [1] are already
widely used.

Provide a `Rcu<P>` type with an atomic pointer implementation. `P` has
to be a `ForeignOwnable`, which means the ownership of an object can be
represented by a pointer-sized value. `Rcu::dereference()` requires an
RCU Guard, which means dereferencing is only valid under RCU read lock
protection. `Rcu::read_copy_update()` is the operation for updaters; it
requires a `Pin<&mut Self>` for exclusive access, since RCU updaters
are normally exclusive with each other.

A lot of RCU functionality, including asynchronous freeing (call_rcu()
and kfree_rcu()), is still missing, and will be future work. Also, we
still need language changes like field projection [2] to provide better
ergonomics.

Acknowledgment: this work is based on a lot of productive discussions
and hard work from others; these are the ones I can remember (sorry if
I forgot your contribution):

* Wedson started the work on RCU field projection, and Benno followed
  it up and has been working on it as a more general language feature.
  Also, Gary's field-projection repo [3] has been used as an example
  for related discussions.

* During Kangrejos 2023 [4], Gary, Benno and Alice provided a lot of
  feedback on the talk from Paul and me: "If you want to use RCU in
  Rust for Linux kernel...".

* During a recent discussion among Benno, Paul and me, Benno suggested
  using `Pin<&mut>` to guarantee the exclusive access on updater
  operations.

Link: https://crates.io/crates/arc-swap [1]
Link: https://rust-lang.zulipchat.com/#narrow/channel/213817-t-lang/topic/Field.20Projections/near/474648059 [2]
Link: https://github.com/nbdd0121/field-projection [3]
Link: https://kangrejos.com/2023 [4]
Signed-off-by: Boqun Feng
---
 rust/kernel/sync/rcu.rs | 275 +++++++++++++++++++++++++++++++++++++++-
 1 file changed, 274 insertions(+), 1 deletion(-)

diff --git a/rust/kernel/sync/rcu.rs b/rust/kernel/sync/rcu.rs
index b51d9150ffe2..201c09cb60db 100644
--- a/rust/kernel/sync/rcu.rs
+++ b/rust/kernel/sync/rcu.rs
@@ -4,7 +4,12 @@
 //!
 //! C header: [`include/linux/rcupdate.h`](srctree/include/linux/rcupdate.h)

-use crate::{bindings, types::NotThreadSafe};
+use crate::bindings;
+use crate::{
+    sync::atomic::{Atomic, Relaxed, Release},
+    types::{ForeignOwnable, NotThreadSafe},
+};
+use core::{marker::PhantomData, pin::Pin, ptr::NonNull};

 /// Evidence that the RCU read side lock is held on the current thread/CPU.
 ///
@@ -45,3 +50,271 @@ fn drop(&mut self) {
 pub fn read_lock() -> Guard {
     Guard::new()
 }
+
+/// An RCU protected pointer; the pointed object is protected by RCU.
+///
+/// # Invariants
+///
+/// Either the pointer is null, or it points to a return value of [`P::into_foreign`] and the
+/// atomic variable exclusively owns the pointer.
+pub struct Rcu<P: ForeignOwnable>(Atomic<*mut crate::ffi::c_void>, PhantomData<P>);
+
+/// A pointer that has been unpublished, but hasn't waited for a grace period yet.
+///
+/// The pointed object may still have an existing RCU reader. Therefore a grace period is needed
+/// to free the object.
+///
+/// # Invariants
+///
+/// The pointer has to be a return value of [`P::into_foreign`] and [`Self`] exclusively owns
+/// the pointer.
+pub struct RcuOld<P: ForeignOwnable>(NonNull<crate::ffi::c_void>, PhantomData<P>);
+
+impl<P: ForeignOwnable> Drop for RcuOld<P> {
+    fn drop(&mut self) {
+        // SAFETY: As long as called in a sleepable context, which should be checked by klint,
+        // `synchronize_rcu()` is safe to call.
+        unsafe {
+            bindings::synchronize_rcu();
+        }
+
+        // SAFETY: `self.0` is a return value of `P::into_foreign()`, so it's safe to call
+        // `from_foreign()` on it. Plus, the above `synchronize_rcu()` guarantees no existing
+        // `ForeignOwnable::borrow()` anymore.
+        let p: P = unsafe { P::from_foreign(self.0.as_ptr()) };
+        drop(p);
+    }
+}
+
+impl<P: ForeignOwnable> Rcu<P> {
+    /// Creates a new RCU pointer.
+    pub fn new(p: P) -> Self {
+        // INVARIANTS: The return value of `p.into_foreign()` is directly stored in the atomic
+        // variable.
+        Self(Atomic::new(p.into_foreign()), PhantomData)
+    }
+
+    /// Creates a null RCU pointer.
+    pub const fn null() -> Self {
+        Self(Atomic::new(core::ptr::null_mut()), PhantomData)
+    }
+
+    /// Dereferences the protected object.
+    ///
+    /// Returns `Some(b)`, where `b` is a reference-like borrowed type, if the pointer is not
+    /// null, otherwise returns `None`.
+    ///
+    /// # Examples
+    ///
+    /// ```rust
+    /// # use kernel::alloc::{flags, KBox};
+    /// use kernel::sync::rcu::{self, Rcu};
+    ///
+    /// let x = Rcu::new(KBox::new(100i32, flags::GFP_KERNEL)?);
+    ///
+    /// let g = rcu::read_lock();
+    /// // Read under RCU read lock protection.
+    /// let v = x.dereference(&g);
+    ///
+    /// assert_eq!(v, Some(&100i32));
+    ///
+    /// # Ok::<(), Error>(())
+    /// ```
+    ///
+    /// Note the borrowed access can outlive the reference of the [`Rcu<P>`]; this is because as
+    /// long as the RCU read lock is held, the pointed object should remain valid.
+    ///
+    /// In the following case, the main thread is responsible for the ownership of `shared`,
+    /// i.e. it will drop it eventually, and a work item can temporarily access `shared` via
+    /// `cloned`, but the use of the dereferenced object doesn't depend on `cloned`'s existence.
+    ///
+    /// ```rust
+    /// # use kernel::alloc::{flags, KBox};
+    /// # use kernel::workqueue::system;
+    /// # use kernel::sync::{Arc, atomic::{Atomic, Acquire, Release}};
+    /// use kernel::sync::rcu::{self, Rcu};
+    ///
+    /// struct Config {
+    ///     a: i32,
+    ///     b: i32,
+    ///     c: i32,
+    /// }
+    ///
+    /// let config = KBox::new(Config { a: 1, b: 2, c: 3 }, flags::GFP_KERNEL)?;
+    ///
+    /// let shared = Arc::new(Rcu::new(config), flags::GFP_KERNEL)?;
+    /// let cloned = shared.clone();
+    ///
+    /// // Use an atomic to simulate a special refcounting.
+    /// static FLAG: Atomic<i32> = Atomic::new(0);
+    ///
+    /// system().try_spawn(flags::GFP_KERNEL, move || {
+    ///     let g = rcu::read_lock();
+    ///     let v = cloned.dereference(&g).unwrap();
+    ///     drop(cloned); // release reference to `shared`.
+    ///     FLAG.store(1, Release);
+    ///
+    ///     // but still need to access `v`.
+    ///     assert_eq!(v.a, 1);
+    ///     drop(g);
+    /// });
+    ///
+    /// // Wait until `cloned` is dropped.
+    /// while FLAG.load(Acquire) == 0 {
+    ///     // SAFETY: Sleep should be safe.
+    ///     unsafe { kernel::bindings::schedule(); }
+    /// }
+    ///
+    /// drop(shared);
+    ///
+    /// # Ok::<(), Error>(())
+    /// ```
+    pub fn dereference<'rcu>(&self, _rcu_guard: &'rcu Guard) -> Option<P::Borrowed<'rcu>> {
+        // Ordering: Address dependency pairs with the `store(Release)` in read_copy_update().
+        let ptr = self.0.load(Relaxed);
+
+        if !ptr.is_null() {
+            // SAFETY:
+            // - Since `ptr` is not null, it has to be a return value of `P::into_foreign()`.
+            // - The returned `Borrowed<'rcu>` cannot outlive the RCU Guard; this guarantees the
+            //   return value will only be used under RCU read lock, and the RCU read lock
+            //   prevents the pass of a grace period that the drop of `RcuOld` or `Rcu` is
+            //   waiting for, therefore no `from_foreign()` will be called for `ptr` as long as
+            //   `Borrowed` exists.
+            //
+            //   CPU 0                                CPU 1
+            //   =====                                =====
+            //   { `x` is a reference to Rcu<KBox<i32>> }
+            //   let g = rcu::read_lock();
+            //
+            //   if let Some(b) = x.dereference(&g) {
+            //       // drop(g); cannot be done, since `b` is still alive.
+            //
+            //                                        if let Some(old) = x.replace(...) {
+            //                                            // `x` is null now.
+            //       println!("{}", b);
+            //                                        }
+            //                                        drop(old):
+            //                                            synchronize_rcu();
+            //   drop(g);
+            //                                            // a grace period passed.
+            //                                            // No `Borrowed` exists now.
+            //                                            from_foreign(...);
+            Some(unsafe { P::borrow(ptr) })
+        } else {
+            None
+        }
+    }
+
+    /// Read, copy and update the pointer with a new value.
+    ///
+    /// Returns `None` if the pointer's old value is null, otherwise returns `Some(old)`, where
+    /// `old` is a [`RcuOld`] which can be used to free the old object eventually.
+    ///
+    /// The `Pin<&mut Self>` is needed because this function needs exclusive access to
+    /// [`Rcu<P>`], otherwise two `read_copy_update()`s may get the same old object and double
+    /// free. Using `Pin<&mut Self>` provides the exclusive access that the C side requires,
+    /// with type system checking.
+    ///
+    /// Also this has to be `Pin` because a `&mut Self` may allow users to `swap()` safely,
+    /// which would break the atomicity. A [`Rcu<P>`] should be structurally pinned in the
+    /// struct that contains it.
+    ///
+    /// Note that `Pin<&mut Self>` cannot assume noalias here because [`Atomic<T>`] is an
+    /// [`Opaque<T>`], which has the same effect on aliasing rules as [`UnsafePinned`].
+    ///
+    /// [`UnsafePinned`]: https://rust-lang.github.io/rfcs/3467-unsafe-pinned.html
+    pub fn read_copy_update<F>(self: Pin<&mut Self>, f: F) -> Option<RcuOld<P>>
+    where
+        F: FnOnce(Option<P::Borrowed<'_>>) -> Option<P>,
+    {
+        // step 1: READ.
+        // Ordering: Address dependency pairs with the `store(Release)` in read_copy_update().
+        let old_ptr = NonNull::new(self.0.load(Relaxed));
+
+        let old = old_ptr.map(|nonnull| {
+            // SAFETY: Per the type invariants, `old_ptr` has to be a value returned by a
+            // previous `into_foreign()`, and the exclusive reference `self` guarantees that
+            // `from_foreign()` has not been called.
+            unsafe { P::borrow(nonnull.as_ptr()) }
+        });
+
+        // step 2: COPY, or more generally, initializing `new` based on `old`.
+        let new = f(old);
+
+        // step 3: UPDATE.
+        if let Some(new) = new {
+            let new_ptr = new.into_foreign();
+            // Ordering: Pairs with the address dependency in `dereference()` and
+            // `read_copy_update()`.
+            // INVARIANTS: `new.into_foreign()` is directly stored into the atomic variable.
+            self.0.store(new_ptr, Release);
+        } else {
+            // Ordering: Setting to a null pointer doesn't need to be Release.
+            // INVARIANTS: The atomic variable is set to null.
+            self.0.store(core::ptr::null_mut(), Relaxed);
+        }
+
+        // INVARIANTS: The exclusive reference guarantees that the ownership of a previous
+        // `into_foreign()` is transferred to the `RcuOld`.
+        Some(RcuOld(old_ptr?, PhantomData))
+    }
+
+    /// Replaces the pointer with a new value.
+    ///
+    /// Returns `None` if the pointer's old value is null, otherwise returns `Some(old)`, where
+    /// `old` is a [`RcuOld`] which can be used to free the old object eventually.
+    ///
+    /// # Examples
+    ///
+    /// ```rust
+    /// use core::pin::pin;
+    /// # use kernel::alloc::{flags, KBox};
+    /// use kernel::sync::rcu::{self, Rcu};
+    ///
+    /// let mut x = pin!(Rcu::new(KBox::new(100i32, flags::GFP_KERNEL)?));
+    /// let q = KBox::new(101i32, flags::GFP_KERNEL)?;
+    ///
+    /// // Read under RCU read lock protection.
+    /// let g = rcu::read_lock();
+    /// let v = x.dereference(&g);
+    ///
+    /// // Replace with a new object.
+    /// let old = x.as_mut().replace(q);
+    ///
+    /// assert!(old.is_some());
+    ///
+    /// // `v` should still read the old value.
+    /// assert_eq!(v, Some(&100i32));
+    ///
+    /// // New readers should get the new value.
+    /// assert_eq!(x.dereference(&g), Some(&101i32));
+    ///
+    /// drop(g);
+    ///
+    /// // Can free the object outside the read-side critical section.
+    /// drop(old);
+    /// # Ok::<(), Error>(())
+    /// ```
+    pub fn replace(self: Pin<&mut Self>, new: P) -> Option<RcuOld<P>> {
+        self.read_copy_update(|_| Some(new))
+    }
+}
+
+impl<P: ForeignOwnable> Drop for Rcu<P> {
+    fn drop(&mut self) {
+        let ptr = *self.0.get_mut();
+        if !ptr.is_null() {
+            // SAFETY: As long as called in a sleepable context, which should be checked by
+            // klint, `synchronize_rcu()` is safe to call.
+            unsafe {
+                bindings::synchronize_rcu();
+            }
+
+            // SAFETY: `self.0` is a return value of `P::into_foreign()`, so it's safe to call
+            // `from_foreign()` on it. Plus, the above `synchronize_rcu()` guarantees no
+            // existing `ForeignOwnable::borrow()` anymore.
+            drop(unsafe { P::from_foreign(ptr) });
+        }
+    }
+}
--
2.47.1
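
Finally, a hedged sketch of calling `read_copy_update()` directly on the
updater side (the `Config` type and allocation style mirror the doctests
above; the function name is ours and the snippet is illustrative, not part
of the patch):

    use core::pin::pin;
    use kernel::alloc::{flags, KBox};
    use kernel::sync::rcu::Rcu;

    struct Config { a: i32, b: i32, c: i32 }

    fn bump_a() -> Result {
        let mut x = pin!(Rcu::new(KBox::new(Config { a: 1, b: 2, c: 3 }, flags::GFP_KERNEL)?));

        // READ the old object, COPY it with one field bumped, UPDATE the pointer.
        let old = x.as_mut().read_copy_update(|old| {
            let a = old.map_or(0, |c| c.a);
            KBox::new(Config { a: a + 1, b: 2, c: 3 }, flags::GFP_KERNEL).ok()
        });

        // Dropping `old` waits for a grace period, then frees the old object.
        drop(old);
        Ok(())
    }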