From nobody Tue Oct 7 05:43:05 2025
From: Boqun Feng <boqun.feng@gmail.com>
To: linux-kernel@vger.kernel.org, rust-for-linux@vger.kernel.org, lkmm@lists.linux.dev, linux-arch@vger.kernel.org
Cc: "Miguel Ojeda", "Alex Gaynor", "Boqun Feng", "Gary Guo", "Björn Roy Baron", "Benno Lossin", "Andreas Hindborg", "Alice Ryhl", "Trevor Gross", "Danilo Krummrich", "Will Deacon", "Peter Zijlstra", "Mark Rutland", "Wedson Almeida Filho", "Viresh Kumar", "Lyude Paul", "Ingo Molnar", "Mitchell Levy", "Paul E. McKenney", "Greg Kroah-Hartman", "Linus Torvalds", "Thomas Gleixner", Alan Stern
Subject: [PATCH v7 1/9] rust: Introduce atomic API helpers
Date: Sun, 13 Jul 2025 22:36:48 -0700
Message-Id: <20250714053656.66712-2-boqun.feng@gmail.com>
In-Reply-To: <20250714053656.66712-1-boqun.feng@gmail.com>
References: <20250714053656.66712-1-boqun.feng@gmail.com>
MIME-Version: 1.0
Content-Type: text/plain; charset="utf-8"

In order to support LKMM atomics in Rust, add rust_helper_* helpers for
the atomic APIs. These helpers ensure that the implementation of LKMM
atomics in Rust is the same as in C, which avoids the maintenance
burden of having two similar atomic implementations in asm.
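Each generated helper below is a thin out-of-line wrapper whose only job is to give Rust a real symbol to call while delegating to the C atomic implementation. As a rough userspace sketch of that wrapper shape (not kernel code — the `demo_` names are hypothetical, and C11's `<stdatomic.h>` stands in for the kernel's `atomic_t` API):

```c
#include <stdatomic.h>

/* Userspace stand-in for the kernel's atomic_t (illustration only). */
typedef struct { atomic_int counter; } demo_atomic_t;

/*
 * Same shape as the generated rust_helper_atomic_fetch_add(): an
 * out-of-line wrapper that simply delegates to the underlying atomic
 * operation.  C11's seq_cst fetch-add stands in here for the kernel's
 * fully-ordered atomic_fetch_add(); it returns the *old* value.
 */
static int demo_helper_atomic_fetch_add(int i, demo_atomic_t *v)
{
	return atomic_fetch_add_explicit(&v->counter, i, memory_order_seq_cst);
}
```

The point of generating one such wrapper per atomic API is that the Rust side can link against `rust_helper_*` symbols without duplicating any of the per-architecture asm.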
Originally-by: Mark Rutland
Reviewed-by: Alice Ryhl
Signed-off-by: Boqun Feng
---
 rust/helpers/atomic.c                     | 1040 +++++++++++++++++++++
 rust/helpers/helpers.c                    |    1 +
 scripts/atomic/gen-atomics.sh             |    1 +
 scripts/atomic/gen-rust-atomic-helpers.sh |   67 ++
 4 files changed, 1109 insertions(+)
 create mode 100644 rust/helpers/atomic.c
 create mode 100755 scripts/atomic/gen-rust-atomic-helpers.sh

diff --git a/rust/helpers/atomic.c b/rust/helpers/atomic.c
new file mode 100644
index 000000000000..cf06b7ef9a1c
--- /dev/null
+++ b/rust/helpers/atomic.c
@@ -0,0 +1,1040 @@
+// SPDX-License-Identifier: GPL-2.0
+
+// Generated by scripts/atomic/gen-rust-atomic-helpers.sh
+// DO NOT MODIFY THIS FILE DIRECTLY
+
+/*
+ * This file provides helpers for the various atomic functions for Rust.
+ */
+#ifndef _RUST_ATOMIC_API_H
+#define _RUST_ATOMIC_API_H
+
+#include <linux/atomic.h>
+
+// TODO: Remove this after INLINE_HELPERS support is added.
+#ifndef __rust_helper
+#define __rust_helper
+#endif
+
+__rust_helper int
+rust_helper_atomic_read(const atomic_t *v)
+{
+	return atomic_read(v);
+}
+
+__rust_helper int
+rust_helper_atomic_read_acquire(const atomic_t *v)
+{
+	return atomic_read_acquire(v);
+}
+
+__rust_helper void
+rust_helper_atomic_set(atomic_t *v, int i)
+{
+	atomic_set(v, i);
+}
+
+__rust_helper void
+rust_helper_atomic_set_release(atomic_t *v, int i)
+{
+	atomic_set_release(v, i);
+}
+
+__rust_helper void
+rust_helper_atomic_add(int i, atomic_t *v)
+{
+	atomic_add(i, v);
+}
+
+__rust_helper int
+rust_helper_atomic_add_return(int i, atomic_t *v)
+{
+	return atomic_add_return(i, v);
+}
+
+__rust_helper int
+rust_helper_atomic_add_return_acquire(int i, atomic_t *v)
+{
+	return atomic_add_return_acquire(i, v);
+}
+
+__rust_helper int
+rust_helper_atomic_add_return_release(int i, atomic_t *v)
+{
+	return atomic_add_return_release(i, v);
+}
+
+__rust_helper int
+rust_helper_atomic_add_return_relaxed(int i, atomic_t *v)
+{
+	return atomic_add_return_relaxed(i, v);
+}
+
+__rust_helper int
+rust_helper_atomic_fetch_add(int i, atomic_t *v) +{ + return atomic_fetch_add(i, v); +} + +__rust_helper int +rust_helper_atomic_fetch_add_acquire(int i, atomic_t *v) +{ + return atomic_fetch_add_acquire(i, v); +} + +__rust_helper int +rust_helper_atomic_fetch_add_release(int i, atomic_t *v) +{ + return atomic_fetch_add_release(i, v); +} + +__rust_helper int +rust_helper_atomic_fetch_add_relaxed(int i, atomic_t *v) +{ + return atomic_fetch_add_relaxed(i, v); +} + +__rust_helper void +rust_helper_atomic_sub(int i, atomic_t *v) +{ + atomic_sub(i, v); +} + +__rust_helper int +rust_helper_atomic_sub_return(int i, atomic_t *v) +{ + return atomic_sub_return(i, v); +} + +__rust_helper int +rust_helper_atomic_sub_return_acquire(int i, atomic_t *v) +{ + return atomic_sub_return_acquire(i, v); +} + +__rust_helper int +rust_helper_atomic_sub_return_release(int i, atomic_t *v) +{ + return atomic_sub_return_release(i, v); +} + +__rust_helper int +rust_helper_atomic_sub_return_relaxed(int i, atomic_t *v) +{ + return atomic_sub_return_relaxed(i, v); +} + +__rust_helper int +rust_helper_atomic_fetch_sub(int i, atomic_t *v) +{ + return atomic_fetch_sub(i, v); +} + +__rust_helper int +rust_helper_atomic_fetch_sub_acquire(int i, atomic_t *v) +{ + return atomic_fetch_sub_acquire(i, v); +} + +__rust_helper int +rust_helper_atomic_fetch_sub_release(int i, atomic_t *v) +{ + return atomic_fetch_sub_release(i, v); +} + +__rust_helper int +rust_helper_atomic_fetch_sub_relaxed(int i, atomic_t *v) +{ + return atomic_fetch_sub_relaxed(i, v); +} + +__rust_helper void +rust_helper_atomic_inc(atomic_t *v) +{ + atomic_inc(v); +} + +__rust_helper int +rust_helper_atomic_inc_return(atomic_t *v) +{ + return atomic_inc_return(v); +} + +__rust_helper int +rust_helper_atomic_inc_return_acquire(atomic_t *v) +{ + return atomic_inc_return_acquire(v); +} + +__rust_helper int +rust_helper_atomic_inc_return_release(atomic_t *v) +{ + return atomic_inc_return_release(v); +} + +__rust_helper int 
+rust_helper_atomic_inc_return_relaxed(atomic_t *v) +{ + return atomic_inc_return_relaxed(v); +} + +__rust_helper int +rust_helper_atomic_fetch_inc(atomic_t *v) +{ + return atomic_fetch_inc(v); +} + +__rust_helper int +rust_helper_atomic_fetch_inc_acquire(atomic_t *v) +{ + return atomic_fetch_inc_acquire(v); +} + +__rust_helper int +rust_helper_atomic_fetch_inc_release(atomic_t *v) +{ + return atomic_fetch_inc_release(v); +} + +__rust_helper int +rust_helper_atomic_fetch_inc_relaxed(atomic_t *v) +{ + return atomic_fetch_inc_relaxed(v); +} + +__rust_helper void +rust_helper_atomic_dec(atomic_t *v) +{ + atomic_dec(v); +} + +__rust_helper int +rust_helper_atomic_dec_return(atomic_t *v) +{ + return atomic_dec_return(v); +} + +__rust_helper int +rust_helper_atomic_dec_return_acquire(atomic_t *v) +{ + return atomic_dec_return_acquire(v); +} + +__rust_helper int +rust_helper_atomic_dec_return_release(atomic_t *v) +{ + return atomic_dec_return_release(v); +} + +__rust_helper int +rust_helper_atomic_dec_return_relaxed(atomic_t *v) +{ + return atomic_dec_return_relaxed(v); +} + +__rust_helper int +rust_helper_atomic_fetch_dec(atomic_t *v) +{ + return atomic_fetch_dec(v); +} + +__rust_helper int +rust_helper_atomic_fetch_dec_acquire(atomic_t *v) +{ + return atomic_fetch_dec_acquire(v); +} + +__rust_helper int +rust_helper_atomic_fetch_dec_release(atomic_t *v) +{ + return atomic_fetch_dec_release(v); +} + +__rust_helper int +rust_helper_atomic_fetch_dec_relaxed(atomic_t *v) +{ + return atomic_fetch_dec_relaxed(v); +} + +__rust_helper void +rust_helper_atomic_and(int i, atomic_t *v) +{ + atomic_and(i, v); +} + +__rust_helper int +rust_helper_atomic_fetch_and(int i, atomic_t *v) +{ + return atomic_fetch_and(i, v); +} + +__rust_helper int +rust_helper_atomic_fetch_and_acquire(int i, atomic_t *v) +{ + return atomic_fetch_and_acquire(i, v); +} + +__rust_helper int +rust_helper_atomic_fetch_and_release(int i, atomic_t *v) +{ + return atomic_fetch_and_release(i, v); +} + 
+__rust_helper int +rust_helper_atomic_fetch_and_relaxed(int i, atomic_t *v) +{ + return atomic_fetch_and_relaxed(i, v); +} + +__rust_helper void +rust_helper_atomic_andnot(int i, atomic_t *v) +{ + atomic_andnot(i, v); +} + +__rust_helper int +rust_helper_atomic_fetch_andnot(int i, atomic_t *v) +{ + return atomic_fetch_andnot(i, v); +} + +__rust_helper int +rust_helper_atomic_fetch_andnot_acquire(int i, atomic_t *v) +{ + return atomic_fetch_andnot_acquire(i, v); +} + +__rust_helper int +rust_helper_atomic_fetch_andnot_release(int i, atomic_t *v) +{ + return atomic_fetch_andnot_release(i, v); +} + +__rust_helper int +rust_helper_atomic_fetch_andnot_relaxed(int i, atomic_t *v) +{ + return atomic_fetch_andnot_relaxed(i, v); +} + +__rust_helper void +rust_helper_atomic_or(int i, atomic_t *v) +{ + atomic_or(i, v); +} + +__rust_helper int +rust_helper_atomic_fetch_or(int i, atomic_t *v) +{ + return atomic_fetch_or(i, v); +} + +__rust_helper int +rust_helper_atomic_fetch_or_acquire(int i, atomic_t *v) +{ + return atomic_fetch_or_acquire(i, v); +} + +__rust_helper int +rust_helper_atomic_fetch_or_release(int i, atomic_t *v) +{ + return atomic_fetch_or_release(i, v); +} + +__rust_helper int +rust_helper_atomic_fetch_or_relaxed(int i, atomic_t *v) +{ + return atomic_fetch_or_relaxed(i, v); +} + +__rust_helper void +rust_helper_atomic_xor(int i, atomic_t *v) +{ + atomic_xor(i, v); +} + +__rust_helper int +rust_helper_atomic_fetch_xor(int i, atomic_t *v) +{ + return atomic_fetch_xor(i, v); +} + +__rust_helper int +rust_helper_atomic_fetch_xor_acquire(int i, atomic_t *v) +{ + return atomic_fetch_xor_acquire(i, v); +} + +__rust_helper int +rust_helper_atomic_fetch_xor_release(int i, atomic_t *v) +{ + return atomic_fetch_xor_release(i, v); +} + +__rust_helper int +rust_helper_atomic_fetch_xor_relaxed(int i, atomic_t *v) +{ + return atomic_fetch_xor_relaxed(i, v); +} + +__rust_helper int +rust_helper_atomic_xchg(atomic_t *v, int new) +{ + return atomic_xchg(v, new); +} + 
+__rust_helper int +rust_helper_atomic_xchg_acquire(atomic_t *v, int new) +{ + return atomic_xchg_acquire(v, new); +} + +__rust_helper int +rust_helper_atomic_xchg_release(atomic_t *v, int new) +{ + return atomic_xchg_release(v, new); +} + +__rust_helper int +rust_helper_atomic_xchg_relaxed(atomic_t *v, int new) +{ + return atomic_xchg_relaxed(v, new); +} + +__rust_helper int +rust_helper_atomic_cmpxchg(atomic_t *v, int old, int new) +{ + return atomic_cmpxchg(v, old, new); +} + +__rust_helper int +rust_helper_atomic_cmpxchg_acquire(atomic_t *v, int old, int new) +{ + return atomic_cmpxchg_acquire(v, old, new); +} + +__rust_helper int +rust_helper_atomic_cmpxchg_release(atomic_t *v, int old, int new) +{ + return atomic_cmpxchg_release(v, old, new); +} + +__rust_helper int +rust_helper_atomic_cmpxchg_relaxed(atomic_t *v, int old, int new) +{ + return atomic_cmpxchg_relaxed(v, old, new); +} + +__rust_helper bool +rust_helper_atomic_try_cmpxchg(atomic_t *v, int *old, int new) +{ + return atomic_try_cmpxchg(v, old, new); +} + +__rust_helper bool +rust_helper_atomic_try_cmpxchg_acquire(atomic_t *v, int *old, int new) +{ + return atomic_try_cmpxchg_acquire(v, old, new); +} + +__rust_helper bool +rust_helper_atomic_try_cmpxchg_release(atomic_t *v, int *old, int new) +{ + return atomic_try_cmpxchg_release(v, old, new); +} + +__rust_helper bool +rust_helper_atomic_try_cmpxchg_relaxed(atomic_t *v, int *old, int new) +{ + return atomic_try_cmpxchg_relaxed(v, old, new); +} + +__rust_helper bool +rust_helper_atomic_sub_and_test(int i, atomic_t *v) +{ + return atomic_sub_and_test(i, v); +} + +__rust_helper bool +rust_helper_atomic_dec_and_test(atomic_t *v) +{ + return atomic_dec_and_test(v); +} + +__rust_helper bool +rust_helper_atomic_inc_and_test(atomic_t *v) +{ + return atomic_inc_and_test(v); +} + +__rust_helper bool +rust_helper_atomic_add_negative(int i, atomic_t *v) +{ + return atomic_add_negative(i, v); +} + +__rust_helper bool 
+rust_helper_atomic_add_negative_acquire(int i, atomic_t *v) +{ + return atomic_add_negative_acquire(i, v); +} + +__rust_helper bool +rust_helper_atomic_add_negative_release(int i, atomic_t *v) +{ + return atomic_add_negative_release(i, v); +} + +__rust_helper bool +rust_helper_atomic_add_negative_relaxed(int i, atomic_t *v) +{ + return atomic_add_negative_relaxed(i, v); +} + +__rust_helper int +rust_helper_atomic_fetch_add_unless(atomic_t *v, int a, int u) +{ + return atomic_fetch_add_unless(v, a, u); +} + +__rust_helper bool +rust_helper_atomic_add_unless(atomic_t *v, int a, int u) +{ + return atomic_add_unless(v, a, u); +} + +__rust_helper bool +rust_helper_atomic_inc_not_zero(atomic_t *v) +{ + return atomic_inc_not_zero(v); +} + +__rust_helper bool +rust_helper_atomic_inc_unless_negative(atomic_t *v) +{ + return atomic_inc_unless_negative(v); +} + +__rust_helper bool +rust_helper_atomic_dec_unless_positive(atomic_t *v) +{ + return atomic_dec_unless_positive(v); +} + +__rust_helper int +rust_helper_atomic_dec_if_positive(atomic_t *v) +{ + return atomic_dec_if_positive(v); +} + +__rust_helper s64 +rust_helper_atomic64_read(const atomic64_t *v) +{ + return atomic64_read(v); +} + +__rust_helper s64 +rust_helper_atomic64_read_acquire(const atomic64_t *v) +{ + return atomic64_read_acquire(v); +} + +__rust_helper void +rust_helper_atomic64_set(atomic64_t *v, s64 i) +{ + atomic64_set(v, i); +} + +__rust_helper void +rust_helper_atomic64_set_release(atomic64_t *v, s64 i) +{ + atomic64_set_release(v, i); +} + +__rust_helper void +rust_helper_atomic64_add(s64 i, atomic64_t *v) +{ + atomic64_add(i, v); +} + +__rust_helper s64 +rust_helper_atomic64_add_return(s64 i, atomic64_t *v) +{ + return atomic64_add_return(i, v); +} + +__rust_helper s64 +rust_helper_atomic64_add_return_acquire(s64 i, atomic64_t *v) +{ + return atomic64_add_return_acquire(i, v); +} + +__rust_helper s64 +rust_helper_atomic64_add_return_release(s64 i, atomic64_t *v) +{ + return 
atomic64_add_return_release(i, v); +} + +__rust_helper s64 +rust_helper_atomic64_add_return_relaxed(s64 i, atomic64_t *v) +{ + return atomic64_add_return_relaxed(i, v); +} + +__rust_helper s64 +rust_helper_atomic64_fetch_add(s64 i, atomic64_t *v) +{ + return atomic64_fetch_add(i, v); +} + +__rust_helper s64 +rust_helper_atomic64_fetch_add_acquire(s64 i, atomic64_t *v) +{ + return atomic64_fetch_add_acquire(i, v); +} + +__rust_helper s64 +rust_helper_atomic64_fetch_add_release(s64 i, atomic64_t *v) +{ + return atomic64_fetch_add_release(i, v); +} + +__rust_helper s64 +rust_helper_atomic64_fetch_add_relaxed(s64 i, atomic64_t *v) +{ + return atomic64_fetch_add_relaxed(i, v); +} + +__rust_helper void +rust_helper_atomic64_sub(s64 i, atomic64_t *v) +{ + atomic64_sub(i, v); +} + +__rust_helper s64 +rust_helper_atomic64_sub_return(s64 i, atomic64_t *v) +{ + return atomic64_sub_return(i, v); +} + +__rust_helper s64 +rust_helper_atomic64_sub_return_acquire(s64 i, atomic64_t *v) +{ + return atomic64_sub_return_acquire(i, v); +} + +__rust_helper s64 +rust_helper_atomic64_sub_return_release(s64 i, atomic64_t *v) +{ + return atomic64_sub_return_release(i, v); +} + +__rust_helper s64 +rust_helper_atomic64_sub_return_relaxed(s64 i, atomic64_t *v) +{ + return atomic64_sub_return_relaxed(i, v); +} + +__rust_helper s64 +rust_helper_atomic64_fetch_sub(s64 i, atomic64_t *v) +{ + return atomic64_fetch_sub(i, v); +} + +__rust_helper s64 +rust_helper_atomic64_fetch_sub_acquire(s64 i, atomic64_t *v) +{ + return atomic64_fetch_sub_acquire(i, v); +} + +__rust_helper s64 +rust_helper_atomic64_fetch_sub_release(s64 i, atomic64_t *v) +{ + return atomic64_fetch_sub_release(i, v); +} + +__rust_helper s64 +rust_helper_atomic64_fetch_sub_relaxed(s64 i, atomic64_t *v) +{ + return atomic64_fetch_sub_relaxed(i, v); +} + +__rust_helper void +rust_helper_atomic64_inc(atomic64_t *v) +{ + atomic64_inc(v); +} + +__rust_helper s64 +rust_helper_atomic64_inc_return(atomic64_t *v) +{ + return 
atomic64_inc_return(v); +} + +__rust_helper s64 +rust_helper_atomic64_inc_return_acquire(atomic64_t *v) +{ + return atomic64_inc_return_acquire(v); +} + +__rust_helper s64 +rust_helper_atomic64_inc_return_release(atomic64_t *v) +{ + return atomic64_inc_return_release(v); +} + +__rust_helper s64 +rust_helper_atomic64_inc_return_relaxed(atomic64_t *v) +{ + return atomic64_inc_return_relaxed(v); +} + +__rust_helper s64 +rust_helper_atomic64_fetch_inc(atomic64_t *v) +{ + return atomic64_fetch_inc(v); +} + +__rust_helper s64 +rust_helper_atomic64_fetch_inc_acquire(atomic64_t *v) +{ + return atomic64_fetch_inc_acquire(v); +} + +__rust_helper s64 +rust_helper_atomic64_fetch_inc_release(atomic64_t *v) +{ + return atomic64_fetch_inc_release(v); +} + +__rust_helper s64 +rust_helper_atomic64_fetch_inc_relaxed(atomic64_t *v) +{ + return atomic64_fetch_inc_relaxed(v); +} + +__rust_helper void +rust_helper_atomic64_dec(atomic64_t *v) +{ + atomic64_dec(v); +} + +__rust_helper s64 +rust_helper_atomic64_dec_return(atomic64_t *v) +{ + return atomic64_dec_return(v); +} + +__rust_helper s64 +rust_helper_atomic64_dec_return_acquire(atomic64_t *v) +{ + return atomic64_dec_return_acquire(v); +} + +__rust_helper s64 +rust_helper_atomic64_dec_return_release(atomic64_t *v) +{ + return atomic64_dec_return_release(v); +} + +__rust_helper s64 +rust_helper_atomic64_dec_return_relaxed(atomic64_t *v) +{ + return atomic64_dec_return_relaxed(v); +} + +__rust_helper s64 +rust_helper_atomic64_fetch_dec(atomic64_t *v) +{ + return atomic64_fetch_dec(v); +} + +__rust_helper s64 +rust_helper_atomic64_fetch_dec_acquire(atomic64_t *v) +{ + return atomic64_fetch_dec_acquire(v); +} + +__rust_helper s64 +rust_helper_atomic64_fetch_dec_release(atomic64_t *v) +{ + return atomic64_fetch_dec_release(v); +} + +__rust_helper s64 +rust_helper_atomic64_fetch_dec_relaxed(atomic64_t *v) +{ + return atomic64_fetch_dec_relaxed(v); +} + +__rust_helper void +rust_helper_atomic64_and(s64 i, atomic64_t *v) +{ + 
atomic64_and(i, v); +} + +__rust_helper s64 +rust_helper_atomic64_fetch_and(s64 i, atomic64_t *v) +{ + return atomic64_fetch_and(i, v); +} + +__rust_helper s64 +rust_helper_atomic64_fetch_and_acquire(s64 i, atomic64_t *v) +{ + return atomic64_fetch_and_acquire(i, v); +} + +__rust_helper s64 +rust_helper_atomic64_fetch_and_release(s64 i, atomic64_t *v) +{ + return atomic64_fetch_and_release(i, v); +} + +__rust_helper s64 +rust_helper_atomic64_fetch_and_relaxed(s64 i, atomic64_t *v) +{ + return atomic64_fetch_and_relaxed(i, v); +} + +__rust_helper void +rust_helper_atomic64_andnot(s64 i, atomic64_t *v) +{ + atomic64_andnot(i, v); +} + +__rust_helper s64 +rust_helper_atomic64_fetch_andnot(s64 i, atomic64_t *v) +{ + return atomic64_fetch_andnot(i, v); +} + +__rust_helper s64 +rust_helper_atomic64_fetch_andnot_acquire(s64 i, atomic64_t *v) +{ + return atomic64_fetch_andnot_acquire(i, v); +} + +__rust_helper s64 +rust_helper_atomic64_fetch_andnot_release(s64 i, atomic64_t *v) +{ + return atomic64_fetch_andnot_release(i, v); +} + +__rust_helper s64 +rust_helper_atomic64_fetch_andnot_relaxed(s64 i, atomic64_t *v) +{ + return atomic64_fetch_andnot_relaxed(i, v); +} + +__rust_helper void +rust_helper_atomic64_or(s64 i, atomic64_t *v) +{ + atomic64_or(i, v); +} + +__rust_helper s64 +rust_helper_atomic64_fetch_or(s64 i, atomic64_t *v) +{ + return atomic64_fetch_or(i, v); +} + +__rust_helper s64 +rust_helper_atomic64_fetch_or_acquire(s64 i, atomic64_t *v) +{ + return atomic64_fetch_or_acquire(i, v); +} + +__rust_helper s64 +rust_helper_atomic64_fetch_or_release(s64 i, atomic64_t *v) +{ + return atomic64_fetch_or_release(i, v); +} + +__rust_helper s64 +rust_helper_atomic64_fetch_or_relaxed(s64 i, atomic64_t *v) +{ + return atomic64_fetch_or_relaxed(i, v); +} + +__rust_helper void +rust_helper_atomic64_xor(s64 i, atomic64_t *v) +{ + atomic64_xor(i, v); +} + +__rust_helper s64 +rust_helper_atomic64_fetch_xor(s64 i, atomic64_t *v) +{ + return atomic64_fetch_xor(i, v); +} + 
+__rust_helper s64 +rust_helper_atomic64_fetch_xor_acquire(s64 i, atomic64_t *v) +{ + return atomic64_fetch_xor_acquire(i, v); +} + +__rust_helper s64 +rust_helper_atomic64_fetch_xor_release(s64 i, atomic64_t *v) +{ + return atomic64_fetch_xor_release(i, v); +} + +__rust_helper s64 +rust_helper_atomic64_fetch_xor_relaxed(s64 i, atomic64_t *v) +{ + return atomic64_fetch_xor_relaxed(i, v); +} + +__rust_helper s64 +rust_helper_atomic64_xchg(atomic64_t *v, s64 new) +{ + return atomic64_xchg(v, new); +} + +__rust_helper s64 +rust_helper_atomic64_xchg_acquire(atomic64_t *v, s64 new) +{ + return atomic64_xchg_acquire(v, new); +} + +__rust_helper s64 +rust_helper_atomic64_xchg_release(atomic64_t *v, s64 new) +{ + return atomic64_xchg_release(v, new); +} + +__rust_helper s64 +rust_helper_atomic64_xchg_relaxed(atomic64_t *v, s64 new) +{ + return atomic64_xchg_relaxed(v, new); +} + +__rust_helper s64 +rust_helper_atomic64_cmpxchg(atomic64_t *v, s64 old, s64 new) +{ + return atomic64_cmpxchg(v, old, new); +} + +__rust_helper s64 +rust_helper_atomic64_cmpxchg_acquire(atomic64_t *v, s64 old, s64 new) +{ + return atomic64_cmpxchg_acquire(v, old, new); +} + +__rust_helper s64 +rust_helper_atomic64_cmpxchg_release(atomic64_t *v, s64 old, s64 new) +{ + return atomic64_cmpxchg_release(v, old, new); +} + +__rust_helper s64 +rust_helper_atomic64_cmpxchg_relaxed(atomic64_t *v, s64 old, s64 new) +{ + return atomic64_cmpxchg_relaxed(v, old, new); +} + +__rust_helper bool +rust_helper_atomic64_try_cmpxchg(atomic64_t *v, s64 *old, s64 new) +{ + return atomic64_try_cmpxchg(v, old, new); +} + +__rust_helper bool +rust_helper_atomic64_try_cmpxchg_acquire(atomic64_t *v, s64 *old, s64 new) +{ + return atomic64_try_cmpxchg_acquire(v, old, new); +} + +__rust_helper bool +rust_helper_atomic64_try_cmpxchg_release(atomic64_t *v, s64 *old, s64 new) +{ + return atomic64_try_cmpxchg_release(v, old, new); +} + +__rust_helper bool +rust_helper_atomic64_try_cmpxchg_relaxed(atomic64_t *v, s64 *old, s64 new) 
+{ + return atomic64_try_cmpxchg_relaxed(v, old, new); +} + +__rust_helper bool +rust_helper_atomic64_sub_and_test(s64 i, atomic64_t *v) +{ + return atomic64_sub_and_test(i, v); +} + +__rust_helper bool +rust_helper_atomic64_dec_and_test(atomic64_t *v) +{ + return atomic64_dec_and_test(v); +} + +__rust_helper bool +rust_helper_atomic64_inc_and_test(atomic64_t *v) +{ + return atomic64_inc_and_test(v); +} + +__rust_helper bool +rust_helper_atomic64_add_negative(s64 i, atomic64_t *v) +{ + return atomic64_add_negative(i, v); +} + +__rust_helper bool +rust_helper_atomic64_add_negative_acquire(s64 i, atomic64_t *v) +{ + return atomic64_add_negative_acquire(i, v); +} + +__rust_helper bool +rust_helper_atomic64_add_negative_release(s64 i, atomic64_t *v) +{ + return atomic64_add_negative_release(i, v); +} + +__rust_helper bool +rust_helper_atomic64_add_negative_relaxed(s64 i, atomic64_t *v) +{ + return atomic64_add_negative_relaxed(i, v); +} + +__rust_helper s64 +rust_helper_atomic64_fetch_add_unless(atomic64_t *v, s64 a, s64 u) +{ + return atomic64_fetch_add_unless(v, a, u); +} + +__rust_helper bool +rust_helper_atomic64_add_unless(atomic64_t *v, s64 a, s64 u) +{ + return atomic64_add_unless(v, a, u); +} + +__rust_helper bool +rust_helper_atomic64_inc_not_zero(atomic64_t *v) +{ + return atomic64_inc_not_zero(v); +} + +__rust_helper bool +rust_helper_atomic64_inc_unless_negative(atomic64_t *v) +{ + return atomic64_inc_unless_negative(v); +} + +__rust_helper bool +rust_helper_atomic64_dec_unless_positive(atomic64_t *v) +{ + return atomic64_dec_unless_positive(v); +} + +__rust_helper s64 +rust_helper_atomic64_dec_if_positive(atomic64_t *v) +{ + return atomic64_dec_if_positive(v); +} + +#endif /* _RUST_ATOMIC_API_H */ +// 615a0e0c98b5973a47fe4fa65e92935051ca00ed diff --git a/rust/helpers/helpers.c b/rust/helpers/helpers.c index 16fa9bca5949..83e89f6a68fb 100644 --- a/rust/helpers/helpers.c +++ b/rust/helpers/helpers.c @@ -7,6 +7,7 @@ * Sorted alphabetically. 
 */
 
+#include "atomic.c"
 #include "auxiliary.c"
 #include "blk.c"
 #include "bug.c"
diff --git a/scripts/atomic/gen-atomics.sh b/scripts/atomic/gen-atomics.sh
index 5b98a8307693..02508d0d6fe4 100755
--- a/scripts/atomic/gen-atomics.sh
+++ b/scripts/atomic/gen-atomics.sh
@@ -11,6 +11,7 @@ cat < ${LINUXDIR}/include/${header}
diff --git a/scripts/atomic/gen-rust-atomic-helpers.sh b/scripts/atomic/gen-rust-atomic-helpers.sh
new file mode 100755
index 000000000000..45b1e100ed7c
--- /dev/null
+++ b/scripts/atomic/gen-rust-atomic-helpers.sh
@@ -0,0 +1,67 @@
+#!/bin/sh
+# SPDX-License-Identifier: GPL-2.0
+
+ATOMICDIR=$(dirname $0)
+
+. ${ATOMICDIR}/atomic-tbl.sh
+
+#gen_proto_order_variant(meta, pfx, name, sfx, order, atomic, int, arg...)
+gen_proto_order_variant()
+{
+	local meta="$1"; shift
+	local pfx="$1"; shift
+	local name="$1"; shift
+	local sfx="$1"; shift
+	local order="$1"; shift
+	local atomic="$1"; shift
+	local int="$1"; shift
+
+	local atomicname="${atomic}_${pfx}${name}${sfx}${order}"
+
+	local ret="$(gen_ret_type "${meta}" "${int}")"
+	local params="$(gen_params "${int}" "${atomic}" "$@")"
+	local args="$(gen_args "$@")"
+	local retstmt="$(gen_ret_stmt "${meta}")"
+
+cat <
+
+// TODO: Remove this after INLINE_HELPERS support is added.
+#ifndef __rust_helper +#define __rust_helper +#endif + +EOF + +grep '^[a-z]' "$1" | while read name meta args; do + gen_proto "${meta}" "${name}" "atomic" "int" ${args} +done + +grep '^[a-z]' "$1" | while read name meta args; do + gen_proto "${meta}" "${name}" "atomic64" "s64" ${args} +done + +cat < X-ME-Received: X-ME-Proxy-Cause: gggruggvucftvghtrhhoucdtuddrgeeffedrtdefgdehuddufecutefuodetggdotefrod ftvfcurfhrohhfihhlvgemucfhrghsthforghilhdpuffrtefokffrpgfnqfghnecuuegr ihhlohhuthemuceftddtnecusecvtfgvtghiphhivghnthhsucdlqddutddtmdenucfjug hrpefhvfevufffkffojghfggfgsedtkeertdertddtnecuhfhrohhmpeeuohhquhhnucfh vghnghcuoegsohhquhhnrdhfvghnghesghhmrghilhdrtghomheqnecuggftrfgrthhtvg hrnhepgeeljeeitdehvdehgefgjeevfeejjeekgfevffeiueejhfeuiefggeeuheeggefg necuvehluhhsthgvrhfuihiivgeptdenucfrrghrrghmpehmrghilhhfrhhomhepsghoqh hunhdomhgvshhmthhprghuthhhphgvrhhsohhnrghlihhthidqieelvdeghedtieegqddu jeejkeehheehvddqsghoqhhunhdrfhgvnhhgpeepghhmrghilhdrtghomhesfhhigihmvg drnhgrmhgvpdhnsggprhgtphhtthhopedvjedpmhhouggvpehsmhhtphhouhhtpdhrtghp thhtoheplhhinhhugidqkhgvrhhnvghlsehvghgvrhdrkhgvrhhnvghlrdhorhhgpdhrtg hpthhtoheprhhushhtqdhfohhrqdhlihhnuhigsehvghgvrhdrkhgvrhhnvghlrdhorhhg pdhrtghpthhtoheplhhkmhhmsehlihhsthhsrdhlihhnuhigrdguvghvpdhrtghpthhtoh eplhhinhhugidqrghrtghhsehvghgvrhdrkhgvrhhnvghlrdhorhhgpdhrtghpthhtohep ohhjvggurgeskhgvrhhnvghlrdhorhhgpdhrtghpthhtoheprghlvgigrdhgrgihnhhorh esghhmrghilhdrtghomhdprhgtphhtthhopegsohhquhhnrdhfvghnghesghhmrghilhdr tghomhdprhgtphhtthhopehgrghrhiesghgrrhihghhuohdrnhgvthdprhgtphhtthhope gsjhhorhhnfegpghhhsehprhhothhonhhmrghilhdrtghomh X-ME-Proxy: Feedback-ID: iad51458e:Fastmail Received: by mail.messagingengine.com (Postfix) with ESMTPA; Mon, 14 Jul 2025 01:37:07 -0400 (EDT) From: Boqun Feng To: linux-kernel@vger.kernel.org, rust-for-linux@vger.kernel.org, lkmm@lists.linux.dev, linux-arch@vger.kernel.org Cc: "Miguel Ojeda" , "Alex Gaynor" , "Boqun Feng" , "Gary Guo" , =?UTF-8?q?Bj=C3=B6rn=20Roy=20Baron?= , "Benno Lossin" , "Andreas 
Hindborg", "Alice Ryhl", "Trevor Gross", "Danilo Krummrich", "Will Deacon",
 "Peter Zijlstra", "Mark Rutland", "Wedson Almeida Filho", "Viresh Kumar",
 "Lyude Paul", "Ingo Molnar", "Mitchell Levy", "Paul E. McKenney",
 "Greg Kroah-Hartman", "Linus Torvalds", "Thomas Gleixner", Alan Stern
Subject: [PATCH v7 2/9] rust: sync: Add basic atomic operation mapping framework
Date: Sun, 13 Jul 2025 22:36:49 -0700
Message-Id: <20250714053656.66712-3-boqun.feng@gmail.com>
In-Reply-To: <20250714053656.66712-1-boqun.feng@gmail.com>
References: <20250714053656.66712-1-boqun.feng@gmail.com>

Preparation for generic atomic implementation. To unify the
implementation of a generic method over `i32` and `i64`, the C side
atomic methods need to be grouped so that in a generic method, they can
be referred to as `<type>::<method>`; otherwise their parameters and
return values differ between `i32` and `i64`, which would require using
`transmute()` to unify the type into a `T`.

Introduce `AtomicImpl` to represent a basic type in Rust that has a
direct mapping to an atomic implementation from C. This trait is sealed,
and currently only `i32` and `i64` impl this.

Further, different methods are put into different `*Ops` trait groups.
This is for the future when smaller types like `i8`/`i16` are supported,
but only with a limited set of APIs (e.g. only set(), load(), xchg() and
cmpxchg(); no add() or sub(), etc.).

While the atomic mod is introduced, documentation is also added for
memory models and data races.

Also bump my role to maintainer of ATOMIC INFRASTRUCTURE to reflect
my responsibility for the Rust atomic mod.
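The grouping described above can be sketched outside the kernel tree with a sealed trait; `atomic_read` here is an illustrative stand-in for the C-side primitive, not the kernel's actual helper binding:

```rust
// Standalone sketch: a sealed `AtomicImpl` trait groups per-type atomic
// methods so a generic function can call them as `T::atomic_read(..)`
// without needing `transmute()`.
mod private {
    pub trait Sealed {}
}

pub trait AtomicImpl: Copy + private::Sealed {
    fn atomic_read(v: &Self) -> Self;
}

// Only `i32` and `i64` implement the sealed trait.
impl private::Sealed for i32 {}
impl AtomicImpl for i32 {
    fn atomic_read(v: &Self) -> Self {
        *v
    }
}

impl private::Sealed for i64 {}
impl AtomicImpl for i64 {
    fn atomic_read(v: &Self) -> Self {
        *v
    }
}

// A single generic method covers both widths: parameter and return types
// are unified behind `T`, so no `transmute()` is needed.
fn load<T: AtomicImpl>(v: &T) -> T {
    T::atomic_read(v)
}

fn main() {
    assert_eq!(load(&1i32), 1);
    assert_eq!(load(&2i64), 2);
}
```

Because the trait is sealed, downstream code cannot add further `AtomicImpl` types, which keeps the set of supported widths under the crate's control.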
Reviewed-by: Alice Ryhl
Reviewed-by: Benno Lossin
Signed-off-by: Boqun Feng
---
Benno, I actually followed your suggestion and put the safety
requirements inline, and I also realized I don't need to mention data
races, because the absence of data races is an implied safety
requirement.

Note that macro-wise, I only allow #[doc] attributes before
`unsafe fn ..`, because this is the only usage, and I don't think it's
likely we'll want to support other attributes. We can always add them
later.

 MAINTAINERS                    |   4 +-
 rust/kernel/sync.rs            |   1 +
 rust/kernel/sync/atomic.rs     |  19 +++
 rust/kernel/sync/atomic/ops.rs | 265 +++++++++++++++++++++++++++++++++
 4 files changed, 288 insertions(+), 1 deletion(-)
 create mode 100644 rust/kernel/sync/atomic.rs
 create mode 100644 rust/kernel/sync/atomic/ops.rs

diff --git a/MAINTAINERS b/MAINTAINERS
index 0c1d245bf7b8..5eef524975ca 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -3894,7 +3894,7 @@ F:	drivers/input/touchscreen/atmel_mxt_ts.c
 ATOMIC INFRASTRUCTURE
 M:	Will Deacon
 M:	Peter Zijlstra
-R:	Boqun Feng
+M:	Boqun Feng
 R:	Mark Rutland
 L:	linux-kernel@vger.kernel.org
 S:	Maintained
@@ -3903,6 +3903,8 @@ F:	arch/*/include/asm/atomic*.h
 F:	include/*/atomic*.h
 F:	include/linux/refcount.h
 F:	scripts/atomic/
+F:	rust/kernel/sync/atomic.rs
+F:	rust/kernel/sync/atomic/

 ATTO EXPRESSSAS SAS/SATA RAID SCSI DRIVER
 M:	Bradley Grove
diff --git a/rust/kernel/sync.rs b/rust/kernel/sync.rs
index 36a719015583..b620027e0641 100644
--- a/rust/kernel/sync.rs
+++ b/rust/kernel/sync.rs
@@ -10,6 +10,7 @@ use pin_init;

 mod arc;
+pub mod atomic;
 mod condvar;
 pub mod lock;
 mod locked_by;
diff --git a/rust/kernel/sync/atomic.rs b/rust/kernel/sync/atomic.rs
new file mode 100644
index 000000000000..c9c7c3617dd5
--- /dev/null
+++ b/rust/kernel/sync/atomic.rs
@@ -0,0 +1,19 @@
+// SPDX-License-Identifier: GPL-2.0
+
+//! Atomic primitives.
+//!
+//! These primitives have the same semantics as their C counterparts, and the precise definitions of
+//!
semantics can be found at [`LKMM`]. Note that the Linux Kernel Memory (Consistency) Model is the
+//! only model for Rust code in the kernel, and Rust's own atomics should be avoided.
+//!
+//! # Data races
+//!
+//! [`LKMM`] atomics have different rules regarding data races:
+//!
+//! - A normal write from the C side is treated as an atomic write if
+//!   CONFIG_KCSAN_ASSUME_PLAIN_WRITES_ATOMIC=y.
+//! - Mixed-size atomic accesses don't cause data races.
+//!
+//! [`LKMM`]: srctree/tools/memory-model/
+
+pub mod ops;
diff --git a/rust/kernel/sync/atomic/ops.rs b/rust/kernel/sync/atomic/ops.rs
new file mode 100644
index 000000000000..1353dc748ef9
--- /dev/null
+++ b/rust/kernel/sync/atomic/ops.rs
@@ -0,0 +1,265 @@
+// SPDX-License-Identifier: GPL-2.0
+
+//! Atomic implementations.
+//!
+//! Provides a 1:1 mapping of atomic implementations.
+
+use crate::bindings;
+use crate::macros::paste;
+
+mod private {
+    /// Sealed trait marker to disable customized impls on atomic implementation traits.
+    pub trait Sealed {}
+}
+
+// `i32` and `i64` are the only supported atomic implementations.
+impl private::Sealed for i32 {}
+impl private::Sealed for i64 {}
+
+/// A marker trait for types that implement atomic operations with C side primitives.
+///
+/// This trait is sealed, and only types that have a direct mapping to the C side atomics should
+/// impl this:
+///
+/// - `i32` maps to `atomic_t`.
+/// - `i64` maps to `atomic64_t`.
+pub trait AtomicImpl: Sized + Send + Copy + private::Sealed {
+    /// The type of the delta in arithmetic or logical operations.
+    ///
+    /// For example, in `atomic_add(ptr, v)`, it's the type of `v`. Usually it's the same type as
+    /// [`Self`], but it may be different for the atomic pointer type.
+    type Delta;
+}
+
+// `atomic_t` implements atomic operations on `i32`.
+impl AtomicImpl for i32 {
+    type Delta = Self;
+}
+
+// `atomic64_t` implements atomic operations on `i64`.
+impl AtomicImpl for i64 {
+    type Delta = Self;
+}
+
+// This macro generates the function signature with given argument list and return type.
+macro_rules! declare_atomic_method {
+    (
+        $(#[doc=$doc:expr])*
+        $func:ident($($arg:ident : $arg_type:ty),*) $(-> $ret:ty)?
+    ) => {
+        paste!(
+            $(#[doc = $doc])*
+            unsafe fn [< atomic_ $func >]($($arg: $arg_type,)*) $(-> $ret)?;
+        );
+    };
+    (
+        $(#[doc=$doc:expr])*
+        $func:ident [$variant:ident $($rest:ident)*]($($arg_sig:tt)*) $(-> $ret:ty)?
+    ) => {
+        paste!(
+            declare_atomic_method!(
+                $(#[doc = $doc])*
+                [< $func _ $variant >]($($arg_sig)*) $(-> $ret)?
+            );
+        );
+
+        declare_atomic_method!(
+            $(#[doc = $doc])*
+            $func [$($rest)*]($($arg_sig)*) $(-> $ret)?
+        );
+    };
+    (
+        $(#[doc=$doc:expr])*
+        $func:ident []($($arg_sig:tt)*) $(-> $ret:ty)?
+    ) => {
+        declare_atomic_method!(
+            $(#[doc = $doc])*
+            $func($($arg_sig)*) $(-> $ret)?
+        );
+    }
+}
+
+// This macro generates the function implementation with given argument list and return type, and it
+// will replace the "call(...)" expression with "$ctype _ $func" to call the real C function.
+macro_rules! impl_atomic_method {
+    (
+        ($ctype:ident) $func:ident($($arg:ident: $arg_type:ty),*) $(-> $ret:ty)? {
+            call($($c_arg:expr),*)
+        }
+    ) => {
+        paste!(
+            #[inline(always)]
+            unsafe fn [< atomic_ $func >]($($arg: $arg_type,)*) $(-> $ret)? {
+                // SAFETY: Per function safety requirement, all pointers are aligned and valid, and
+                // accesses won't cause data race per LKMM.
+                unsafe { bindings::[< $ctype _ $func >]($($c_arg,)*) }
+            }
+        );
+    };
+    (
+        ($ctype:ident) $func:ident[$variant:ident $($rest:ident)*]($($arg_sig:tt)*) $(-> $ret:ty)? {
+            call($($arg:tt)*)
+        }
+    ) => {
+        paste!(
+            impl_atomic_method!(
+                ($ctype) [< $func _ $variant >]($($arg_sig)*) $( -> $ret)? {
+                    call($($arg)*)
+                }
+            );
+        );
+        impl_atomic_method!(
+            ($ctype) $func [$($rest)*]($($arg_sig)*) $( -> $ret)?
{
+                call($($arg)*)
+            }
+        );
+    };
+    (
+        ($ctype:ident) $func:ident[]($($arg_sig:tt)*) $( -> $ret:ty)? {
+            call($($arg:tt)*)
+        }
+    ) => {
+        impl_atomic_method!(
+            ($ctype) $func($($arg_sig)*) $(-> $ret)? {
+                call($($arg)*)
+            }
+        );
+    }
+}
+
+// Declares the $ops trait with methods and implements the trait for `i32` and `i64`.
+macro_rules! declare_and_impl_atomic_methods {
+    ($(#[$attr:meta])* pub trait $ops:ident {
+        $(
+            $(#[doc=$doc:expr])*
+            unsafe fn $func:ident [$($variant:ident),*]($($arg_sig:tt)*) $( -> $ret:ty)? {
+                bindings::#call($($arg:tt)*)
+            }
+        )*
+    }) => {
+        $(#[$attr])*
+        pub trait $ops: AtomicImpl {
+            $(
+                declare_atomic_method!(
+                    $(#[doc=$doc])*
+                    $func[$($variant)*]($($arg_sig)*) $(-> $ret)?
+                );
+            )*
+        }
+
+        impl $ops for i32 {
+            $(
+                impl_atomic_method!(
+                    (atomic) $func[$($variant)*]($($arg_sig)*) $(-> $ret)? {
+                        call($($arg)*)
+                    }
+                );
+            )*
+        }
+
+        impl $ops for i64 {
+            $(
+                impl_atomic_method!(
+                    (atomic64) $func[$($variant)*]($($arg_sig)*) $(-> $ret)? {
+                        call($($arg)*)
+                    }
+                );
+            )*
+        }
+    }
+}
+
+declare_and_impl_atomic_methods!(
+    /// Basic atomic operations
+    pub trait AtomicHasBasicOps {
+        /// Atomic read (load).
+        ///
+        /// # Safety
+        /// - `ptr` is aligned to [`align_of::<Self>()`].
+        /// - `ptr` is valid for reads.
+        ///
+        /// [`align_of::<Self>()`]: core::mem::align_of
+        unsafe fn read[acquire](ptr: *mut Self) -> Self {
+            bindings::#call(ptr.cast())
+        }
+
+        /// Atomic set (store).
+        ///
+        /// # Safety
+        /// - `ptr` is aligned to [`align_of::<Self>()`].
+        /// - `ptr` is valid for writes.
+        ///
+        /// [`align_of::<Self>()`]: core::mem::align_of
+        unsafe fn set[release](ptr: *mut Self, v: Self) {
+            bindings::#call(ptr.cast(), v)
+        }
+    }
+);
+
+declare_and_impl_atomic_methods!(
+    /// Exchange and compare-and-exchange atomic operations
+    pub trait AtomicHasXchgOps {
+        /// Atomic exchange.
+        ///
+        /// Atomically updates `*ptr` to `v` and returns the old value.
+        ///
+        /// # Safety
+        /// - `ptr` is aligned to [`align_of::<Self>()`].
+        /// - `ptr` is valid for reads and writes.
+        ///
+        /// [`align_of::<Self>()`]: core::mem::align_of
+        unsafe fn xchg[acquire, release, relaxed](ptr: *mut Self, v: Self) -> Self {
+            bindings::#call(ptr.cast(), v)
+        }
+
+        /// Atomic compare and exchange.
+        ///
+        /// If `*ptr` == `*old`, atomically updates `*ptr` to `new`. Otherwise, `*ptr` is not
+        /// modified, and `*old` is updated to the current value of `*ptr`.
+        ///
+        /// Returns `true` if the update of `*ptr` occurred, `false` otherwise.
+        ///
+        /// # Safety
+        /// - `ptr` is aligned to [`align_of::<Self>()`].
+        /// - `ptr` is valid for reads and writes.
+        /// - `old` is aligned to [`align_of::<Self>()`].
+        /// - `old` is valid for reads and writes.
+        ///
+        /// [`align_of::<Self>()`]: core::mem::align_of
+        unsafe fn try_cmpxchg[acquire, release, relaxed](ptr: *mut Self, old: *mut Self, new: Self) -> bool {
+            bindings::#call(ptr.cast(), old, new)
+        }
+    }
+);
+
+declare_and_impl_atomic_methods!(
+    /// Atomic arithmetic operations
+    pub trait AtomicHasArithmeticOps {
+        /// Atomic add (wrapping).
+        ///
+        /// Atomically updates `*ptr` to `(*ptr).wrapping_add(v)`.
+        ///
+        /// # Safety
+        /// - `ptr` is aligned to `align_of::<Self>()`.
+        /// - `ptr` is valid for reads and writes.
+        ///
+        /// [`align_of::<Self>()`]: core::mem::align_of
+        unsafe fn add[](ptr: *mut Self, v: Self::Delta) {
+            bindings::#call(v, ptr.cast())
+        }
+
+        /// Atomic fetch and add (wrapping).
+        ///
+        /// Atomically updates `*ptr` to `(*ptr).wrapping_add(v)`, and returns the value of `*ptr`
+        /// before the update.
+        ///
+        /// # Safety
+        /// - `ptr` is aligned to `align_of::<Self>()`.
+        /// - `ptr` is valid for reads and writes.
+        ///
+        /// [`align_of::<Self>()`]: core::mem::align_of
+        unsafe fn fetch_add[acquire, release, relaxed](ptr: *mut Self, v: Self::Delta) -> Self {
+            bindings::#call(v, ptr.cast())
+        }
+    }
+);
-- 
2.39.5 (Apple Git-154)

From nobody Tue Oct 7 05:43:05 2025
From: Boqun Feng
To: linux-kernel@vger.kernel.org, rust-for-linux@vger.kernel.org,
 lkmm@lists.linux.dev, linux-arch@vger.kernel.org
Subject: [PATCH v7 3/9] rust: sync: atomic: Add ordering annotation types
Date: Sun, 13 Jul 2025 22:36:50 -0700
Message-Id: <20250714053656.66712-4-boqun.feng@gmail.com>
In-Reply-To: <20250714053656.66712-1-boqun.feng@gmail.com>
References: <20250714053656.66712-1-boqun.feng@gmail.com>

Preparation for atomic primitives.
Instead of a suffix like _acquire, a method parameter along with the
corresponding generic parameter will be used to specify the ordering of
an atomic operation. For example, atomic load() can be defined as:

	impl<T> Atomic<T> {
	    pub fn load<O: AcquireOrRelaxed>(&self, _o: O) -> T { ... }
	}

and acquire users would do:

	let r = x.load(Acquire);

relaxed users:

	let r = x.load(Relaxed);

doing the following:

	let r = x.load(Release);

will cause a compiler error.

Compared to suffixes, it's easier to tell what ordering variants an
operation has, and it also makes it easier to unify the implementation
of all ordering variants in one method via generics. The `TYPE`
associated const is for a generic function to pick up the particular
implementation specified by an ordering annotation.

Reviewed-by: Alice Ryhl
Signed-off-by: Boqun Feng
---
Benno, please take a look and see if you want to provide your
Reviewed-by for this one. I didn't apply your Reviewed-by because I used
`ordering::Any` instead of `AnyOrdering`; I think you're OK with it [1],
but I could be wrong. Thanks!

[1]: https://lore.kernel.org/rust-for-linux/DB8M91D7KIT4.14W69YK7108ND@kernel.org/

 rust/kernel/sync/atomic.rs          |   3 +
 rust/kernel/sync/atomic/ordering.rs | 109 ++++++++++++++++++++++++++++
 2 files changed, 112 insertions(+)
 create mode 100644 rust/kernel/sync/atomic/ordering.rs

diff --git a/rust/kernel/sync/atomic.rs b/rust/kernel/sync/atomic.rs
index c9c7c3617dd5..e80ac049f36b 100644
--- a/rust/kernel/sync/atomic.rs
+++ b/rust/kernel/sync/atomic.rs
@@ -17,3 +17,6 @@
 //! [`LKMM`]: srctree/tools/memory-model/

 pub mod ops;
+pub mod ordering;
+
+pub use ordering::{Acquire, Full, Relaxed, Release};
diff --git a/rust/kernel/sync/atomic/ordering.rs b/rust/kernel/sync/atomic/ordering.rs
new file mode 100644
index 000000000000..aea0a2bbb1b9
--- /dev/null
+++ b/rust/kernel/sync/atomic/ordering.rs
@@ -0,0 +1,109 @@
+// SPDX-License-Identifier: GPL-2.0
+
+//! Memory orderings.
+//!
+//!
The semantics of these orderings follow the [`LKMM`] definitions and rules.
+//!
+//! - [`Acquire`] provides ordering between the load part of the annotated operation and all the
+//!   following memory accesses, and if there is a store part, the store part has the [`Relaxed`]
+//!   ordering.
+//! - [`Release`] provides ordering between all the preceding memory accesses and the store part of
+//!   the annotated operation, and if there is a load part, the load part has the [`Relaxed`]
+//!   ordering.
+//! - [`Full`] means "fully-ordered", that is:
+//!   - It provides ordering between all the preceding memory accesses and the annotated operation.
+//!   - It provides ordering between the annotated operation and all the following memory accesses.
+//!   - It provides ordering between all the preceding memory accesses and all the following memory
+//!     accesses.
+//!   - All the orderings are the same strength as a full memory barrier (i.e. `smp_mb()`).
+//! - [`Relaxed`] provides no ordering except the dependency orderings. Dependency orderings are
+//!   described in "DEPENDENCY RELATIONS" in [`LKMM`]'s [`explanation`].
+//!
+//! [`LKMM`]: srctree/tools/memory-model/
+//! [`explanation`]: srctree/tools/memory-model/Documentation/explanation.txt
+
+/// The annotation type for relaxed memory ordering; for the description of relaxed memory
+/// ordering, see [module-level documentation].
+///
+/// [module-level documentation]: crate::sync::atomic::ordering
+pub struct Relaxed;
+
+/// The annotation type for acquire memory ordering; for the description of acquire memory
+/// ordering, see [module-level documentation].
+///
+/// [module-level documentation]: crate::sync::atomic::ordering
+pub struct Acquire;
+
+/// The annotation type for release memory ordering; for the description of release memory
+/// ordering, see [module-level documentation].
+///
+/// [module-level documentation]: crate::sync::atomic::ordering
+pub struct Release;
+
+/// The annotation type for fully-ordered memory ordering; for the description of fully-ordered
+/// memory ordering, see [module-level documentation].
+///
+/// [module-level documentation]: crate::sync::atomic::ordering
+pub struct Full;
+
+/// Describes the exact memory ordering.
+#[doc(hidden)]
+pub enum OrderingType {
+    /// Relaxed ordering.
+    Relaxed,
+    /// Acquire ordering.
+    Acquire,
+    /// Release ordering.
+    Release,
+    /// Fully-ordered.
+    Full,
+}
+
+mod internal {
+    /// Sealed trait, can only be implemented inside the atomic mod.
+    pub trait Sealed {}
+
+    impl Sealed for super::Relaxed {}
+    impl Sealed for super::Acquire {}
+    impl Sealed for super::Release {}
+    impl Sealed for super::Full {}
+}
+
+/// The trait bound for annotating operations that support any ordering.
+pub trait Any: internal::Sealed {
+    /// Describes the exact memory ordering.
+    const TYPE: OrderingType;
+}
+
+impl Any for Relaxed {
+    const TYPE: OrderingType = OrderingType::Relaxed;
+}
+
+impl Any for Acquire {
+    const TYPE: OrderingType = OrderingType::Acquire;
+}
+
+impl Any for Release {
+    const TYPE: OrderingType = OrderingType::Release;
+}
+
+impl Any for Full {
+    const TYPE: OrderingType = OrderingType::Full;
+}
+
+/// The trait bound for operations that only support acquire or relaxed ordering.
+pub trait AcquireOrRelaxed: Any {}
+
+impl AcquireOrRelaxed for Acquire {}
+impl AcquireOrRelaxed for Relaxed {}
+
+/// The trait bound for operations that only support release or relaxed ordering.
+pub trait ReleaseOrRelaxed: Any {}
+
+impl ReleaseOrRelaxed for Release {}
+impl ReleaseOrRelaxed for Relaxed {}
+
+/// The trait bound for operations that only support relaxed ordering.
+pub trait RelaxedOnly: AcquireOrRelaxed + ReleaseOrRelaxed + Any {}
+
+impl RelaxedOnly for Relaxed {}
-- 
2.39.5 (Apple Git-154)

From nobody Tue Oct 7 05:43:05 2025
From: Boqun Feng
To: linux-kernel@vger.kernel.org, rust-for-linux@vger.kernel.org,
 lkmm@lists.linux.dev, linux-arch@vger.kernel.org
Subject: [PATCH v7 4/9] rust: sync: atomic: Add generic atomics
Date: Sun, 13 Jul 2025 22:36:51 -0700
Message-Id: <20250714053656.66712-5-boqun.feng@gmail.com>
In-Reply-To: <20250714053656.66712-1-boqun.feng@gmail.com>
References: <20250714053656.66712-1-boqun.feng@gmail.com>

To allow using LKMM atomics from Rust code, a generic `Atomic<T>` is
added. Currently `T` needs to be Send + Copy, because these are the
straightforward usages and all basic types support this.
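The ordering-annotation design from the previous patch, combined with the generic atomic type introduced here, can be sketched in standalone form. This sketch uses `std::sync::atomic` as a stand-in backend and illustrative names; it models the compile-time restriction, not the kernel implementation:

```rust
// Standalone model: the ordering is a zero-sized type passed by value,
// a sealed trait bounds which orderings an operation accepts, and an
// associated const lets one generic method dispatch to the right backend.
use std::sync::atomic::AtomicI32;

mod ordering {
    use std::sync::atomic::Ordering;

    pub trait Sealed {}

    pub struct Relaxed;
    pub struct Acquire;

    impl Sealed for Relaxed {}
    impl Sealed for Acquire {}

    /// Orderings a load() accepts: acquire or relaxed only.
    pub trait AcquireOrRelaxed: Sealed {
        /// Picked up by the generic method at compile time.
        const ORDER: Ordering;
    }

    impl AcquireOrRelaxed for Relaxed {
        const ORDER: Ordering = Ordering::Relaxed;
    }
    impl AcquireOrRelaxed for Acquire {
        const ORDER: Ordering = Ordering::Acquire;
    }
}

struct Atomic(AtomicI32);

impl Atomic {
    // One method covers all allowed ordering variants; passing a
    // release-only annotation would simply fail to type-check.
    fn load<O: ordering::AcquireOrRelaxed>(&self, _o: O) -> i32 {
        self.0.load(O::ORDER)
    }
}

fn main() {
    let x = Atomic(AtomicI32::new(42));
    assert_eq!(x.load(ordering::Acquire), 42);
    assert_eq!(x.load(ordering::Relaxed), 42);
}
```

Because the annotation is a value of a zero-sized type, it costs nothing at runtime while making the permitted orderings visible in the method signature.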
Implement `AllowAtomic` for `i32` and `i64`, and so far only basic operations load() and store() are introduced. Reviewed-by: Alice Ryhl Signed-off-by: Boqun Feng --- rust/kernel/sync/atomic.rs | 14 ++ rust/kernel/sync/atomic/generic.rs | 285 +++++++++++++++++++++++++++++ 2 files changed, 299 insertions(+) create mode 100644 rust/kernel/sync/atomic/generic.rs diff --git a/rust/kernel/sync/atomic.rs b/rust/kernel/sync/atomic.rs index e80ac049f36b..c5193c1c90fe 100644 --- a/rust/kernel/sync/atomic.rs +++ b/rust/kernel/sync/atomic.rs @@ -16,7 +16,21 @@ //! //! [`LKMM`]: srctree/tools/memory-model/ =20 +pub mod generic; pub mod ops; pub mod ordering; =20 +pub use generic::Atomic; pub use ordering::{Acquire, Full, Relaxed, Release}; + +// SAFETY: `i32` has the same size and alignment with itself, and is round= -trip transmutable to +// itself. +unsafe impl generic::AllowAtomic for i32 { + type Repr =3D i32; +} + +// SAFETY: `i64` has the same size and alignment with itself, and is round= -trip transmutable to +// itself. +unsafe impl generic::AllowAtomic for i64 { + type Repr =3D i64; +} diff --git a/rust/kernel/sync/atomic/generic.rs b/rust/kernel/sync/atomic/g= eneric.rs new file mode 100644 index 000000000000..b3e07328d857 --- /dev/null +++ b/rust/kernel/sync/atomic/generic.rs @@ -0,0 +1,285 @@ +// SPDX-License-Identifier: GPL-2.0 + +//! Generic atomic primitives. + +use super::ops::{AtomicHasBasicOps, AtomicImpl}; +use super::{ordering, ordering::OrderingType}; +use crate::build_error; +use core::cell::UnsafeCell; + +/// A memory location which can be safely modified from multiple execution= contexts. +/// +/// This has the same size, alignment and bit validity as the underlying t= ype `T`. +/// +/// The atomic operations are implemented in a way that is fully compatibl= e with the [Linux Kernel +/// Memory (Consistency) Model][LKMM], hence they should be modeled as the= corresponding +/// [`LKMM`][LKMM] atomic primitives. 
With the help of [`Atomic::from_ptr(= )`] and +/// [`Atomic::as_ptr()`], this provides a way to interact with [C-side ato= mic operations] +/// (including those without the `atomic` prefix, e.g. `READ_ONCE()`, `WRI= TE_ONCE()`, +/// `smp_load_acquire()` and `smp_store_release()`). +/// +/// [LKMM]: srctree/tools/memory-model/ +/// [C-side atomic operations]: srctree/Documentation/atomic_t.txt +#[repr(transparent)] +pub struct Atomic(UnsafeCell); + +// SAFETY: `Atomic` is safe to share among execution contexts because a= ll accesses are atomic. +unsafe impl Sync for Atomic {} + +/// Types that support basic atomic operations. +/// +/// # Round-trip transmutability +/// +/// `T` is round-trip transmutable to `U` if and only if both of these pro= perties hold: +/// +/// - Any valid bit pattern for `T` is also a valid bit pattern for `U`. +/// - Transmuting (e.g. using [`transmute()`]) a value of type `T` to `U` = and then to `T` again +/// yields a value that is in all aspects equivalent to the original val= ue. +/// +/// # Safety +/// +/// - [`Self`] must have the same size and alignment as [`Self::Repr`]. +/// - [`Self`] must be [round-trip transmutable] to [`Self::Repr`]. +/// +/// Note that this is more relaxed than requiring the bi-directional trans= mutability (i.e. +/// [`transmute()`] is always sound between `U` to `T`) because of the sup= port for atomic variables +/// over unit-only enums, see [Examples]. +/// +/// # Limitations +/// +/// Because C primitives are used to implement the atomic operations, and = a C function requires a +/// valid object of a type to operate on (i.e. no `MaybeUninit<_>`), hence= at the Rust <-> C +/// surface, only types with no uninitialized bits can be passed. As a res= ult, types like `(u8, +/// u16)` (a tuple with a `MaybeUninit` hole in it) are currently not supp= orted. 
Note that +/// technically these types can be supported if some APIs are removed for = them and the inner +/// implementation is tweaked, but the justification of support such a typ= e is not strong enough at +/// the moment. This should be resolved if there is an implementation for = `MaybeUninit` as +/// `AtomicImpl`. +/// +/// # Examples +/// +/// A unit-only enum that implements [`AllowAtomic`]: +/// +/// ``` +/// use kernel::sync::atomic::{generic::AllowAtomic, Atomic, Relaxed}; +/// +/// #[derive(Clone, Copy, PartialEq, Eq)] +/// #[repr(i32)] +/// enum State { +/// Uninit =3D 0, +/// Working =3D 1, +/// Done =3D 2, +/// }; +/// +/// // SAFETY: `State` and `i32` has the same size and alignment, and it's= round-trip +/// // transmutable to `i32`. +/// unsafe impl AllowAtomic for State { +/// type Repr =3D i32; +/// } +/// +/// let s =3D Atomic::new(State::Uninit); +/// +/// assert_eq!(State::Uninit, s.load(Relaxed)); +/// ``` +/// [`transmute()`]: core::mem::transmute +/// [round-trip transmutable]: AllowAtomic#round-trip-transmutability +/// [Examples]: AllowAtomic#examples +pub unsafe trait AllowAtomic: Sized + Send + Copy { + /// The backing atomic implementation type. + type Repr: AtomicImpl; +} + +#[inline(always)] +const fn into_repr(v: T) -> T::Repr { + // SAFETY: Per the safety requirement of `AllowAtomic`, the transmute = operation is sound. + unsafe { core::mem::transmute_copy(&v) } +} + +/// # Safety +/// +/// `r` must be a valid bit pattern of `T`. +#[inline(always)] +const unsafe fn from_repr(r: T::Repr) -> T { + // SAFETY: Per the safety requirement of the function, the transmute o= peration is sound. + unsafe { core::mem::transmute_copy(&r) } +} + +impl Atomic { + /// Creates a new atomic `T`. + pub const fn new(v: T) -> Self { + Self(UnsafeCell::new(v)) + } + + /// Creates a reference to an atomic `T` from a pointer of `T`. + /// + /// # Safety + /// + /// - `ptr` is aligned to `align_of::()`. 
+ /// - `ptr` is valid for reads and writes for `'a`. + /// - For the duration of `'a`, other accesses to `*ptr` must not caus= e data races (defined + /// by [`LKMM`]) against atomic operations on the returned reference= . Note that if all other + /// accesses are atomic, then this safety requirement is trivially f= ulfilled. + /// + /// [`LKMM`]: srctree/tools/memory-model + /// + /// # Examples + /// + /// Using [`Atomic::from_ptr()`] combined with [`Atomic::load()`] or [= `Atomic::store()`] can + /// achieve the same functionality as `READ_ONCE()`/`smp_load_acquire(= )` or + /// `WRITE_ONCE()`/`smp_store_release()` in C side: + /// + /// ``` + /// # use kernel::types::Opaque; + /// use kernel::sync::atomic::{Atomic, Relaxed, Release}; + /// + /// // Assume there is a C struct `foo`. + /// mod cbindings { + /// #[repr(C)] + /// pub(crate) struct foo { + /// pub(crate) a: i32, + /// pub(crate) b: i32 + /// } + /// } + /// + /// let tmp =3D Opaque::new(cbindings::foo { a: 1, b: 2 }); + /// + /// // struct foo *foo_ptr =3D ..; + /// let foo_ptr =3D tmp.get(); + /// + /// // SAFETY: `foo_ptr` is valid, and `.a` is in bounds. + /// let foo_a_ptr =3D unsafe { &raw mut (*foo_ptr).a }; + /// + /// // a =3D READ_ONCE(foo_ptr->a); + /// // + /// // SAFETY: `foo_a_ptr` is valid for read, and all other accesses o= n it is atomic, so no + /// // data race. + /// let a =3D unsafe { Atomic::from_ptr(foo_a_ptr) }.load(Relaxed); + /// # assert_eq!(a, 1); + /// + /// // smp_store_release(&foo_ptr->a, 2); + /// // + /// // SAFETY: `foo_a_ptr` is valid for writes, and all other accesses= on it is atomic, so + /// // no data race. + /// unsafe { Atomic::from_ptr(foo_a_ptr) }.store(2, Release); + /// ``` + /// + /// However, this should be only used when communicating with C side o= r manipulating a C + /// struct. + pub unsafe fn from_ptr<'a>(ptr: *mut T) -> &'a Self + where + T: Sync, + { + // CAST: `T` is transparent to `Atomic`. 
+ // SAFETY: Per function safety requirement, `ptr` is a valid point= er and the object will + // live long enough. It's safe to return a `&Atomic` because fu= nction safety requirement + // guarantees other accesses won't cause data races. + unsafe { &*ptr.cast::() } + } + + /// Returns a pointer to the underlying atomic `T`. + /// + /// Note that use of the return pointer must not cause data races defi= ned by [`LKMM`]. + /// + /// # Guarantees + /// + /// The returned pointer is properly aligned (i.e. aligned to [`align_= of::()`]) + /// + /// [`LKMM`]: srctree/tools/memory-model + /// [`align_of::()`]: core::mem::align_of + pub const fn as_ptr(&self) -> *mut T { + // GUARANTEE: `self.0` has the same alignment of `T`. + self.0.get() + } + + /// Returns a mutable reference to the underlying atomic `T`. + /// + /// This is safe because the mutable reference of the atomic `T` guara= ntees the exclusive + /// access. + pub fn get_mut(&mut self) -> &mut T { + // SAFETY: `self.as_ptr()` is a valid pointer to `T`. `&mut self` = guarantees the exclusive + // access, so it's safe to reborrow mutably. + unsafe { &mut *self.as_ptr() } + } +} + +impl Atomic +where + T::Repr: AtomicHasBasicOps, +{ + /// Loads the value from the atomic `T`. + /// + /// # Examples + /// + /// ``` + /// use kernel::sync::atomic::{Atomic, Relaxed}; + /// + /// let x =3D Atomic::new(42i32); + /// + /// assert_eq!(42, x.load(Relaxed)); + /// + /// let x =3D Atomic::new(42i64); + /// + /// assert_eq!(42, x.load(Relaxed)); + /// ``` + #[doc(alias("atomic_read", "atomic64_read"))] + #[inline(always)] + pub fn load(&self, _: Ordering) = -> T { + // CAST: Per the safety requirement of `AllowAtomic`, a valid poin= ter of `T` is a valid + // pointer of `T::Repr` for reads and valid for writes of values t= ransmutable to `T`. 
+ let a =3D self.as_ptr().cast::(); + + // SAFETY: + // - `a` is aligned to `align_of::()` because of the safe= ty requirement of + // `AllowAtomic` and the guarantee of `Atomic::as_ptr()`. + // - `a` is a valid pointer per the CAST justification above. + let v =3D unsafe { + match Ordering::TYPE { + OrderingType::Relaxed =3D> T::Repr::atomic_read(a), + OrderingType::Acquire =3D> T::Repr::atomic_read_acquire(a), + _ =3D> build_error!("Wrong ordering"), + } + }; + + // SAFETY: `v` comes from reading `a` which was derived from `self= .as_ptr()` which points + // at a valid `T`. + unsafe { from_repr(v) } + } + + /// Stores a value to the atomic `T`. + /// + /// # Examples + /// + /// ``` + /// use kernel::sync::atomic::{Atomic, Relaxed}; + /// + /// let x =3D Atomic::new(42i32); + /// + /// assert_eq!(42, x.load(Relaxed)); + /// + /// x.store(43, Relaxed); + /// + /// assert_eq!(43, x.load(Relaxed)); + /// ``` + #[doc(alias("atomic_set", "atomic64_set"))] + #[inline(always)] + pub fn store(&self, v: T, _: Ord= ering) { + let v =3D into_repr(v); + // CAST: Per the safety requirement of `AllowAtomic`, a valid poin= ter of `T` is a valid + // pointer of `T::Repr` for reads and valid for writes of values t= ransmutable to `T`. + let a =3D self.as_ptr().cast::(); + + // `*self` remains valid after `atomic_set*()` because `v` is tran= smutable to `T`. + // + // SAFETY: + // - `a` is aligned to `align_of::()` because of the safe= ty requirement of + // `AllowAtomic` and the guarantee of `Atomic::as_ptr()`. + // - `a` is a valid pointer per the CAST justification above. 
+ unsafe { + match Ordering::TYPE { + OrderingType::Relaxed => T::Repr::atomic_set(a, v), + OrderingType::Release => T::Repr::atomic_set_release(a, v), + _ => build_error!("Wrong ordering"), + } + }; + } +} -- 2.39.5 (Apple Git-154) From nobody Tue Oct 7 05:43:05 2025
From: Boqun Feng
Subject: [PATCH v7 5/9] rust: sync: atomic: Add atomic {cmp,}xchg operations
Date: Sun, 13 Jul 2025 22:36:52 -0700
Message-Id: <20250714053656.66712-6-boqun.feng@gmail.com>
In-Reply-To: <20250714053656.66712-1-boqun.feng@gmail.com>

xchg() and cmpxchg() are basic atomic operations. Provide them based on the C APIs.
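As a rough userspace analogy (std's `AtomicI32`, not the kernel `Atomic<T>` added by this series), `xchg()` corresponds to std's `swap()` and `cmpxchg()` to `compare_exchange()`, including the `Ok(old)`/`Err(current)` result convention:

```rust
use std::sync::atomic::{AtomicI32, Ordering};

fn main() {
    let x = AtomicI32::new(42);

    // swap(): store the new value, return the old one (xchg() analogue).
    assert_eq!(42, x.swap(52, Ordering::AcqRel));

    // compare_exchange(): Ok(old) when the current value matched `old`
    // and the store happened...
    assert_eq!(Ok(52), x.compare_exchange(52, 64, Ordering::AcqRel, Ordering::Relaxed));

    // ...and Err(current) when it did not match; nothing is stored.
    assert_eq!(Err(64), x.compare_exchange(52, 99, Ordering::AcqRel, Ordering::Relaxed));
}
```

The kernel API differs in that a single typed ordering parameter covers the operation, while std takes separate success/failure orderings; this sketch only illustrates the result convention.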
Note that cmpxchg() uses a function signature similar to compare_exchange() in the Rust std library: it returns a `Result`, where `Ok(old)` means the operation succeeded and `Err(old)` means it failed.

Reviewed-by: Alice Ryhl
Signed-off-by: Boqun Feng
Reviewed-by: Benno Lossin
---
 rust/kernel/sync/atomic/generic.rs | 181 ++++++++++++++++++++++++++++-
 1 file changed, 180 insertions(+), 1 deletion(-)

diff --git a/rust/kernel/sync/atomic/generic.rs b/rust/kernel/sync/atomic/generic.rs
index b3e07328d857..4e45d594d8ef 100644
--- a/rust/kernel/sync/atomic/generic.rs
+++ b/rust/kernel/sync/atomic/generic.rs
@@ -2,7 +2,7 @@
 
 //! Generic atomic primitives.
 
-use super::ops::{AtomicHasBasicOps, AtomicImpl};
+use super::ops::{AtomicHasBasicOps, AtomicHasXchgOps, AtomicImpl};
 use super::{ordering, ordering::OrderingType};
 use crate::build_error;
 use core::cell::UnsafeCell;
@@ -283,3 +283,182 @@ pub fn store(&self, v: T, _: Ordering) {
         };
     }
 }
+
+impl<T: AllowAtomic> Atomic<T>
+where
+    T::Repr: AtomicHasXchgOps,
+{
+    /// Atomic exchange.
+    ///
+    /// Atomically updates `*self` to `v` and returns the old value of `*self`.
+    ///
+    /// # Examples
+    ///
+    /// ```
+    /// use kernel::sync::atomic::{Atomic, Acquire, Relaxed};
+    ///
+    /// let x = Atomic::new(42);
+    ///
+    /// assert_eq!(42, x.xchg(52, Acquire));
+    /// assert_eq!(52, x.load(Relaxed));
+    /// ```
+    #[doc(alias("atomic_xchg", "atomic64_xchg", "swap"))]
+    #[inline(always)]
+    pub fn xchg(&self, v: T, _: Ordering) -> T {
+        let v = into_repr(v);
+        // CAST: Per the safety requirement of `AllowAtomic`, a valid pointer of `T` is a valid
+        // pointer of `T::Repr` for reads and valid for writes of values transmutable to `T`.
+        let a = self.as_ptr().cast::<T::Repr>();
+
+        // `*self` remains valid after `atomic_xchg*()` because `v` is transmutable to `T`.
+        //
+        // SAFETY:
+        // - `a` is aligned to `align_of::<T::Repr>()` because of the safety requirement of
+        //   `AllowAtomic` and the guarantee of `Atomic::as_ptr()`.
+        // - `a` is a valid pointer per the CAST justification above.
+        let ret = unsafe {
+            match Ordering::TYPE {
+                OrderingType::Full => T::Repr::atomic_xchg(a, v),
+                OrderingType::Acquire => T::Repr::atomic_xchg_acquire(a, v),
+                OrderingType::Release => T::Repr::atomic_xchg_release(a, v),
+                OrderingType::Relaxed => T::Repr::atomic_xchg_relaxed(a, v),
+            }
+        };
+
+        // SAFETY: `ret` comes from reading `a` which was derived from `self.as_ptr()` which
+        // points at a valid `T`.
+        unsafe { from_repr(ret) }
+    }
+
+    /// Atomic compare and exchange.
+    ///
+    /// If `*self` == `old`, atomically updates `*self` to `new`. Otherwise, `*self` is not
+    /// modified.
+    ///
+    /// Compare: The comparison is done via a byte-level comparison between `*self` and `old`.
+    ///
+    /// Ordering: On success, provides the ordering indicated by the `Ordering` type parameter;
+    /// a failed cmpxchg doesn't provide any ordering, and the load part of a failed cmpxchg is
+    /// a [`Relaxed`] load.
+    ///
+    /// Returns `Ok(value)` if cmpxchg succeeds, and `value` is guaranteed to be equal to `old`,
+    /// otherwise returns `Err(value)`, and `value` is the current value of `*self`.
+    ///
+    /// # Examples
+    ///
+    /// ```
+    /// use kernel::sync::atomic::{Atomic, Full, Relaxed};
+    ///
+    /// let x = Atomic::new(42);
+    ///
+    /// // Checks whether cmpxchg succeeded.
+    /// let success = x.cmpxchg(52, 64, Relaxed).is_ok();
+    /// # assert!(!success);
+    ///
+    /// // Checks whether cmpxchg failed.
+    /// let failure = x.cmpxchg(52, 64, Relaxed).is_err();
+    /// # assert!(failure);
+    ///
+    /// // Uses the old value on failure, e.g. to retry the cmpxchg.
+    /// match x.cmpxchg(52, 64, Relaxed) {
+    ///     Ok(_) => { },
+    ///     Err(old) => {
+    ///         // do something with `old`.
+    ///         # assert_eq!(old, 42);
+    ///     }
+    /// }
+    ///
+    /// // Uses the latest value regardless of success, same as atomic_cmpxchg() in C.
+    /// let latest = x.cmpxchg(42, 64, Full).unwrap_or_else(|old| old);
+    /// # assert_eq!(42, latest);
+    /// assert_eq!(64, x.load(Relaxed));
+    /// ```
+    ///
+    /// [`Relaxed`]: super::ordering::Relaxed
+    #[doc(alias(
+        "atomic_cmpxchg",
+        "atomic64_cmpxchg",
+        "atomic_try_cmpxchg",
+        "atomic64_try_cmpxchg",
+        "compare_exchange"
+    ))]
+    #[inline(always)]
+    pub fn cmpxchg(
+        &self,
+        mut old: T,
+        new: T,
+        o: Ordering,
+    ) -> Result<T, T> {
+        // Note on code generation:
+        //
+        // try_cmpxchg() is used to implement cmpxchg(), and if the helper functions are inlined,
+        // the compiler is able to figure out that branch is not needed if the users don't care
+        // about whether the operation succeeds or not. One exception is on x86, due to commit
+        // 44fe84459faf ("locking/atomic: Fix atomic_try_cmpxchg() semantics"), the
+        // atomic_try_cmpxchg() on x86 has a branch even if the caller doesn't care about the
+        // success of cmpxchg and only wants to use the old value. For example, for code like:
+        //
+        //     let latest = x.cmpxchg(42, 64, Full).unwrap_or_else(|old| old);
+        //
+        // It will still generate code:
+        //
+        //     movl     $0x40, %ecx
+        //     movl     $0x34, %eax
+        //     lock
+        //     cmpxchgl %ecx, 0x4(%rsp)
+        //     jne      1f
+        //     2:
+        //     ...
+        //     1:  movl %eax, %ecx
+        //         jmp 2b
+        //
+        // This might be "fixed" by introducing a try_cmpxchg_exclusive() that knows the "*old"
+        // location in the C function is always safe to write.
+        if self.try_cmpxchg(&mut old, new, o) {
+            Ok(old)
+        } else {
+            Err(old)
+        }
+    }
+
+    /// Atomic compare and exchange, returning whether the operation succeeded.
+    ///
+    /// If `*self` == `old`, atomically updates `*self` to `new`. Otherwise, `*self` is not
+    /// modified and `*old` is updated to the current value of `*self`.
+    ///
+    /// The "Compare" and "Ordering" parts are the same as [`Atomic::cmpxchg()`].
+    ///
+    /// Returns `true` if the cmpxchg succeeds, otherwise `false`.
+    #[inline(always)]
+    fn try_cmpxchg(&self, old: &mut T, new: T, _: Ordering) -> bool {
+        let mut old_tmp = into_repr(*old);
+        let oldp = &raw mut old_tmp;
+        let new = into_repr(new);
+        // CAST: Per the safety requirement of `AllowAtomic`, a valid pointer of `T` is a valid
+        // pointer of `T::Repr` for reads and valid for writes of values transmutable to `T`.
+        let a = self.0.get().cast::<T::Repr>();
+
+        // `*self` remains valid after `atomic_try_cmpxchg*()` because `new` is transmutable to
+        // `T`.
+        //
+        // SAFETY:
+        // - `a` is aligned to `align_of::<T::Repr>()` because of the safety requirement of
+        //   `AllowAtomic` and the guarantee of `Atomic::as_ptr()`.
+        // - `a` is a valid pointer per the CAST justification above.
+        // - `oldp` is a valid and properly aligned pointer of `T::Repr`.
+        let ret = unsafe {
+            match Ordering::TYPE {
+                OrderingType::Full => T::Repr::atomic_try_cmpxchg(a, oldp, new),
+                OrderingType::Acquire => T::Repr::atomic_try_cmpxchg_acquire(a, oldp, new),
+                OrderingType::Release => T::Repr::atomic_try_cmpxchg_release(a, oldp, new),
+                OrderingType::Relaxed => T::Repr::atomic_try_cmpxchg_relaxed(a, oldp, new),
+            }
+        };
+
+        // SAFETY: `old_tmp` comes from reading `a` which was derived from `self.as_ptr()` which
+        // points at a valid `T`.
+        *old = unsafe { from_repr(old_tmp) };
+
+        ret
+    }
+}
-- 
2.39.5 (Apple Git-154)

From nobody Tue Oct 7 05:43:05 2025
linux-kernel@vger.kernel.org, rust-for-linux@vger.kernel.org, lkmm@lists.linux.dev, linux-arch@vger.kernel.org
Cc: "Miguel Ojeda", "Alex Gaynor", "Boqun Feng", "Gary Guo", "Björn Roy Baron", "Benno Lossin", "Andreas Hindborg", "Alice Ryhl", "Trevor Gross", "Danilo Krummrich", "Will Deacon", "Peter Zijlstra", "Mark Rutland", "Wedson Almeida Filho", "Viresh Kumar", "Lyude Paul", "Ingo Molnar", "Mitchell Levy", "Paul E. McKenney", "Greg Kroah-Hartman", "Linus Torvalds", "Thomas Gleixner", Alan Stern
Subject: [PATCH v7 6/9] rust: sync: atomic: Add the framework of arithmetic operations
Date: Sun, 13 Jul 2025 22:36:53 -0700
Message-Id: <20250714053656.66712-7-boqun.feng@gmail.com>
In-Reply-To: <20250714053656.66712-1-boqun.feng@gmail.com>
References: <20250714053656.66712-1-boqun.feng@gmail.com>

One important set of atomic operations is the arithmetic operations, i.e. add(), sub(), fetch_add(), add_return(), etc. However, it may not make sense for all the types that implement `AllowAtomic` to have arithmetic operations; for example, a `Foo(u32)` may not have a meaningful add() or sub(). Moreover, subword types (`u8` and `u16`) currently don't have atomic arithmetic operations even on the C side, and might not get them in Rust either (because they are usually suboptimal on a few architectures). Therefore the plan is to add a few subtraits of `AllowAtomic` describing which types have, and can do, atomic arithmetic operations. One trait, `AllowAtomicAdd`, is added here, and only add() and fetch_add() are added for now. The rest will be added in the future.
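For illustration only, the subtrait-gating idea above can be sketched in plain Rust over `std` atomics. The trait names `HasI32Repr` and `AllowAdd` here are hypothetical stand-ins (not the kernel API): arithmetic is only available for types that opt in, and `rhs_into_delta()` maps the user-facing operand to the representation's delta type.

```rust
use std::sync::atomic::{AtomicI32, Ordering};

// Hypothetical stand-in for `AllowAtomic` with `Repr = i32`: the type is stored
// as an `i32`, and every `i32` bit pattern converts back losslessly.
trait HasI32Repr: Copy {
    fn from_repr(r: i32) -> Self;
}

// Hypothetical stand-in for `AllowAtomicAdd<Rhs>`: only types that opt in via
// this subtrait get an atomic add.
trait AllowAdd<Rhs = Self>: HasI32Repr {
    fn rhs_into_delta(rhs: Rhs) -> i32;
}

impl HasI32Repr for u32 {
    fn from_repr(r: i32) -> Self {
        r as u32
    }
}

impl AllowAdd for u32 {
    fn rhs_into_delta(rhs: u32) -> i32 {
        rhs as i32
    }
}

// Generic fetch_add, available only when `T: AllowAdd<Rhs>`.
fn fetch_add<T: AllowAdd<Rhs>, Rhs>(a: &AtomicI32, v: Rhs) -> T {
    // `AtomicI32::fetch_add` wraps on overflow, matching the wrapping-add contract.
    T::from_repr(a.fetch_add(T::rhs_into_delta(v), Ordering::Relaxed))
}
```

A type without an `AllowAdd` impl simply cannot call `fetch_add`, which is the compile-time gating the patch is after.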
Reviewed-by: Alice Ryhl
Signed-off-by: Boqun Feng
---
 rust/kernel/sync/atomic.rs         |  14 ++++
 rust/kernel/sync/atomic/generic.rs | 111 ++++++++++++++++++++++++++++-
 2 files changed, 124 insertions(+), 1 deletion(-)

diff --git a/rust/kernel/sync/atomic.rs b/rust/kernel/sync/atomic.rs
index c5193c1c90fe..54f5b4618337 100644
--- a/rust/kernel/sync/atomic.rs
+++ b/rust/kernel/sync/atomic.rs
@@ -29,8 +29,22 @@ unsafe impl generic::AllowAtomic for i32 {
     type Repr = i32;
 }
 
+// SAFETY: The wrapping add result of two `i32`s is a valid `i32`.
+unsafe impl generic::AllowAtomicAdd<i32> for i32 {
+    fn rhs_into_delta(rhs: i32) -> i32 {
+        rhs
+    }
+}
+
 // SAFETY: `i64` has the same size and alignment as itself, and is round-trip transmutable to
 // itself.
 unsafe impl generic::AllowAtomic for i64 {
     type Repr = i64;
 }
+
+// SAFETY: The wrapping add result of two `i64`s is a valid `i64`.
+unsafe impl generic::AllowAtomicAdd<i64> for i64 {
+    fn rhs_into_delta(rhs: i64) -> i64 {
+        rhs
+    }
+}
diff --git a/rust/kernel/sync/atomic/generic.rs b/rust/kernel/sync/atomic/generic.rs
index 4e45d594d8ef..9e2394017202 100644
--- a/rust/kernel/sync/atomic/generic.rs
+++ b/rust/kernel/sync/atomic/generic.rs
@@ -2,7 +2,7 @@
 
 //! Generic atomic primitives.
 
-use super::ops::{AtomicHasBasicOps, AtomicHasXchgOps, AtomicImpl};
+use super::ops::{AtomicHasArithmeticOps, AtomicHasBasicOps, AtomicHasXchgOps, AtomicImpl};
 use super::{ordering, ordering::OrderingType};
 use crate::build_error;
 use core::cell::UnsafeCell;
@@ -104,6 +104,18 @@ const fn into_repr<T: AllowAtomic>(v: T) -> T::Repr {
     unsafe { core::mem::transmute_copy(&r) }
 }
 
+/// Types that support atomic add operations.
+///
+/// # Safety
+///
+/// Wrapping adding any value of type `Self::Repr::Delta` obtained by [`Self::rhs_into_delta()`] to
+/// any value of type `Self::Repr` obtained through transmuting a value of type `Self` must
+/// yield a value with a bit pattern also valid for `Self`.
+pub unsafe trait AllowAtomicAdd<Rhs = Self>: AllowAtomic {
+    /// Converts `Rhs` into the `Delta` type of the atomic implementation.
+    fn rhs_into_delta(rhs: Rhs) -> <Self::Repr as AtomicImpl>::Delta;
+}
+
 impl<T: AllowAtomic> Atomic<T> {
     /// Creates a new atomic `T`.
     pub const fn new(v: T) -> Self {
@@ -462,3 +474,100 @@ fn try_cmpxchg<Ordering: ordering::Any>(&self, old: &mut T, new: T, _: Ordering)
         ret
     }
 }
+
+impl<T: AllowAtomic> Atomic<T>
+where
+    T::Repr: AtomicHasArithmeticOps,
+{
+    /// Atomic add.
+    ///
+    /// Atomically updates `*self` to `(*self).wrapping_add(v)`.
+    ///
+    /// # Examples
+    ///
+    /// ```
+    /// use kernel::sync::atomic::{Atomic, Relaxed};
+    ///
+    /// let x = Atomic::new(42);
+    ///
+    /// assert_eq!(42, x.load(Relaxed));
+    ///
+    /// x.add(12, Relaxed);
+    ///
+    /// assert_eq!(54, x.load(Relaxed));
+    /// ```
+    #[inline(always)]
+    pub fn add<Rhs, Ordering: ordering::RelaxedOnly>(&self, v: Rhs, _: Ordering)
+    where
+        T: AllowAtomicAdd<Rhs>,
+    {
+        let v = T::rhs_into_delta(v);
+        // CAST: Per the safety requirement of `AllowAtomic`, a valid pointer of `T` is a valid
+        // pointer of `T::Repr` for reads and valid for writes of values transmutable to `T`.
+        let a = self.as_ptr().cast::<T::Repr>();
+
+        // `*self` remains valid after `atomic_add()` because of the safety requirement of
+        // `AllowAtomicAdd`.
+        //
+        // SAFETY:
+        // - `a` is aligned to `align_of::<T::Repr>()` because of the safety requirement of
+        //   `AllowAtomic` and the guarantee of `Atomic::as_ptr()`.
+        // - `a` is a valid pointer per the CAST justification above.
+        unsafe {
+            T::Repr::atomic_add(a, v);
+        }
+    }
+
+    /// Atomic fetch and add.
+    ///
+    /// Atomically updates `*self` to `(*self).wrapping_add(v)`, and returns the value of `*self`
+    /// before the update.
+    ///
+    /// # Examples
+    ///
+    /// ```
+    /// use kernel::sync::atomic::{Atomic, Acquire, Full, Relaxed};
+    ///
+    /// let x = Atomic::new(42);
+    ///
+    /// assert_eq!(42, x.load(Relaxed));
+    ///
+    /// assert_eq!(54, { x.fetch_add(12, Acquire); x.load(Relaxed) });
+    ///
+    /// let x = Atomic::new(42);
+    ///
+    /// assert_eq!(42, x.load(Relaxed));
+    ///
+    /// assert_eq!(54, { x.fetch_add(12, Full); x.load(Relaxed) });
+    /// ```
+    #[inline(always)]
+    pub fn fetch_add<Rhs, Ordering: ordering::Any>(&self, v: Rhs, _: Ordering) -> T
+    where
+        T: AllowAtomicAdd<Rhs>,
+    {
+        let v = T::rhs_into_delta(v);
+        // CAST: Per the safety requirement of `AllowAtomic`, a valid pointer of `T` is a valid
+        // pointer of `T::Repr` for reads and valid for writes of values transmutable to `T`.
+        let a = self.as_ptr().cast::<T::Repr>();
+
+        // `*self` remains valid after `atomic_fetch_add*()` because of the safety requirement of
+        // `AllowAtomicAdd`.
+        //
+        // SAFETY:
+        // - `a` is aligned to `align_of::<T::Repr>()` because of the safety requirement of
+        //   `AllowAtomic` and the guarantee of `Atomic::as_ptr()`.
+        // - `a` is a valid pointer per the CAST justification above.
+        let ret = unsafe {
+            match Ordering::TYPE {
+                OrderingType::Full => T::Repr::atomic_fetch_add(a, v),
+                OrderingType::Acquire => T::Repr::atomic_fetch_add_acquire(a, v),
+                OrderingType::Release => T::Repr::atomic_fetch_add_release(a, v),
+                OrderingType::Relaxed => T::Repr::atomic_fetch_add_relaxed(a, v),
+            }
+        };
+
+        // SAFETY: `ret` comes from reading `a` which was derived from `self.as_ptr()` which points
+        // at a valid `T`.
+        unsafe { from_repr(ret) }
+    }
+}
-- 
2.39.5 (Apple Git-154)

From nobody Tue Oct 7 05:43:05 2025
From: Boqun Feng
To: linux-kernel@vger.kernel.org, rust-for-linux@vger.kernel.org, lkmm@lists.linux.dev, linux-arch@vger.kernel.org
Cc: "Miguel Ojeda", "Alex Gaynor", "Boqun Feng", "Gary Guo", "Björn Roy Baron", "Benno Lossin", "Andreas Hindborg", "Alice Ryhl", "Trevor Gross", "Danilo Krummrich", "Will Deacon", "Peter Zijlstra", "Mark Rutland", "Wedson Almeida Filho", "Viresh Kumar", "Lyude Paul", "Ingo Molnar", "Mitchell Levy", "Paul E. McKenney", "Greg Kroah-Hartman", "Linus Torvalds", "Thomas Gleixner", Alan Stern
Subject: [PATCH v7 7/9] rust: sync: atomic: Add Atomic<u32> and Atomic<u64>
Date: Sun, 13 Jul 2025 22:36:54 -0700
Message-Id: <20250714053656.66712-8-boqun.feng@gmail.com>
In-Reply-To: <20250714053656.66712-1-boqun.feng@gmail.com>
References: <20250714053656.66712-1-boqun.feng@gmail.com>

Add generic atomic support for basic unsigned types that have an `AtomicImpl` with the same size and alignment. Unit tests are added, including tests for `Atomic<u32>` and `Atomic<u64>`.
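The safety comments in this patch lean on round-trip transmutability: adding through the signed representation must land on a bit pattern that is still a valid `u32`/`u64`. A standalone plain-Rust sanity sketch (with `as` casts standing in for the transmutes, and a hypothetical helper name):

```rust
// Model `Atomic<u64>` backed by `Repr = i64`: convert the operand (the
// `rhs_into_delta` step), do the wrapping add on the signed repr, and convert
// back. Every `i64` bit pattern is a valid `u64`, so the round trip is total.
fn u64_add_via_i64_repr(x: u64, delta: u64) -> u64 {
    let repr = x as i64;            // transmute-like: same size, same bits
    let d = delta as i64;           // `rhs_into_delta` for u64
    repr.wrapping_add(d) as u64     // back to u64
}
```

Because two's-complement wrapping addition is bit-identical for signed and unsigned operands, this matches `u64::wrapping_add` exactly.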
Reviewed-by: Alice Ryhl
Reviewed-by: Andreas Hindborg
Reviewed-by: Benno Lossin
Signed-off-by: Boqun Feng
---
 rust/kernel/sync/atomic.rs | 95 ++++++++++++++++++++++++++++++++++++++
 1 file changed, 95 insertions(+)

diff --git a/rust/kernel/sync/atomic.rs b/rust/kernel/sync/atomic.rs
index 54f5b4618337..eb4a47d7e2f3 100644
--- a/rust/kernel/sync/atomic.rs
+++ b/rust/kernel/sync/atomic.rs
@@ -48,3 +48,98 @@ fn rhs_into_delta(rhs: i64) -> i64 {
         rhs
     }
 }
+
+// SAFETY: `u32` and `i32` have the same size and alignment, and `u32` is round-trip transmutable
+// to `i32`.
+unsafe impl generic::AllowAtomic for u32 {
+    type Repr = i32;
+}
+
+// SAFETY: The wrapping add result of two `i32`s is a valid `u32`.
+unsafe impl generic::AllowAtomicAdd<u32> for u32 {
+    fn rhs_into_delta(rhs: u32) -> i32 {
+        rhs as i32
+    }
+}
+
+// SAFETY: `u64` and `i64` have the same size and alignment, and `u64` is round-trip transmutable
+// to `i64`.
+unsafe impl generic::AllowAtomic for u64 {
+    type Repr = i64;
+}
+
+// SAFETY: The wrapping add result of two `i64`s is a valid `u64`.
+unsafe impl generic::AllowAtomicAdd<u64> for u64 {
+    fn rhs_into_delta(rhs: u64) -> i64 {
+        rhs as i64
+    }
+}
+
+use crate::macros::kunit_tests;
+
+#[kunit_tests(rust_atomics)]
+mod tests {
+    use super::*;
+
+    // Call `$fn($val)` with each `$type` of `$val`.
+    macro_rules! for_each_type {
+        ($val:literal in [$($type:ty),*] $fn:expr) => {
+            $({
+                let v: $type = $val;
+
+                $fn(v);
+            })*
+        }
+    }
+
+    #[test]
+    fn atomic_basic_tests() {
+        for_each_type!(42 in [i32, i64, u32, u64] |v| {
+            let x = Atomic::new(v);
+
+            assert_eq!(v, x.load(Relaxed));
+        });
+    }
+
+    #[test]
+    fn atomic_xchg_tests() {
+        for_each_type!(42 in [i32, i64, u32, u64] |v| {
+            let x = Atomic::new(v);
+
+            let old = v;
+            let new = v + 1;
+
+            assert_eq!(old, x.xchg(new, Full));
+            assert_eq!(new, x.load(Relaxed));
+        });
+    }
+
+    #[test]
+    fn atomic_cmpxchg_tests() {
+        for_each_type!(42 in [i32, i64, u32, u64] |v| {
+            let x = Atomic::new(v);
+
+            let old = v;
+            let new = v + 1;
+
+            assert_eq!(Err(old), x.cmpxchg(new, new, Full));
+            assert_eq!(old, x.load(Relaxed));
+            assert_eq!(Ok(old), x.cmpxchg(old, new, Relaxed));
+            assert_eq!(new, x.load(Relaxed));
+        });
+    }
+
+    #[test]
+    fn atomic_arithmetic_tests() {
+        for_each_type!(42 in [i32, i64, u32, u64] |v| {
+            let x = Atomic::new(v);
+
+            assert_eq!(v, x.fetch_add(12, Full));
+            assert_eq!(v + 12, x.load(Relaxed));
+
+            x.add(13, Relaxed);
+
+            assert_eq!(v + 25, x.load(Relaxed));
+        });
+    }
+}
-- 
2.39.5 (Apple Git-154)

From nobody Tue Oct 7 05:43:05 2025
From: Boqun Feng
To:
linux-kernel@vger.kernel.org, rust-for-linux@vger.kernel.org, lkmm@lists.linux.dev, linux-arch@vger.kernel.org
Cc: "Miguel Ojeda", "Alex Gaynor", "Boqun Feng", "Gary Guo", "Björn Roy Baron", "Benno Lossin", "Andreas Hindborg", "Alice Ryhl", "Trevor Gross", "Danilo Krummrich", "Will Deacon", "Peter Zijlstra", "Mark Rutland", "Wedson Almeida Filho", "Viresh Kumar", "Lyude Paul", "Ingo Molnar", "Mitchell Levy", "Paul E. McKenney", "Greg Kroah-Hartman", "Linus Torvalds", "Thomas Gleixner", Alan Stern
Subject: [PATCH v7 8/9] rust: sync: Add memory barriers
Date: Sun, 13 Jul 2025 22:36:55 -0700
Message-Id: <20250714053656.66712-9-boqun.feng@gmail.com>
In-Reply-To: <20250714053656.66712-1-boqun.feng@gmail.com>
References: <20250714053656.66712-1-boqun.feng@gmail.com>

Memory barriers are building blocks for concurrent code, hence provide a minimal set of them. The compiler barrier, barrier(), is implemented in inline asm instead of using core::sync::atomic::compiler_fence(), because the memory models are different: the kernel's atomics are implemented in inline asm, therefore the compiler barrier should be implemented in inline asm as well. Also, barrier() is currently only public to the kernel crate until there is a reasonable driver usage.
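The "empty asm block as compiler barrier" idea can be illustrated standalone (a sketch, not the kernel's barrier(); `asm!` requires a target with inline-asm support, e.g. x86_64 or aarch64):

```rust
use std::sync::atomic::AtomicUsize;
use std::sync::atomic::Ordering::Relaxed;

// An empty `asm!` block: without any `options(...)`, rustc must assume the asm
// may read or write any memory, so it cannot reorder memory accesses across it.
#[inline(always)]
fn compiler_barrier() {
    // SAFETY: the asm block is empty and has no effects.
    unsafe { core::arch::asm!("") };
}

// A store/store pair separated by the barrier: the compiler may not sink the
// `value` store below the `flag` store. Note this constrains only the
// compiler; no CPU fence instruction is emitted.
fn publish(flag: &AtomicUsize, value: &AtomicUsize) {
    value.store(42, Relaxed);
    compiler_barrier();
    flag.store(1, Relaxed);
}
```

This mirrors why the commit message keeps barrier() in inline asm: the ordering guarantee comes from the compiler's conservative assumptions about asm blocks, the same assumptions under which the kernel's asm-based atomics operate.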
Reviewed-by: Alice Ryhl
Signed-off-by: Boqun Feng
---
 rust/helpers/barrier.c      | 18 +++++++++++
 rust/helpers/helpers.c      |  1 +
 rust/kernel/sync.rs         |  1 +
 rust/kernel/sync/barrier.rs | 61 +++++++++++++++++++++++++++++++++++++
 4 files changed, 81 insertions(+)
 create mode 100644 rust/helpers/barrier.c
 create mode 100644 rust/kernel/sync/barrier.rs

diff --git a/rust/helpers/barrier.c b/rust/helpers/barrier.c
new file mode 100644
index 000000000000..cdf28ce8e511
--- /dev/null
+++ b/rust/helpers/barrier.c
@@ -0,0 +1,18 @@
+// SPDX-License-Identifier: GPL-2.0
+
+#include <asm/barrier.h>
+
+void rust_helper_smp_mb(void)
+{
+	smp_mb();
+}
+
+void rust_helper_smp_wmb(void)
+{
+	smp_wmb();
+}
+
+void rust_helper_smp_rmb(void)
+{
+	smp_rmb();
+}
diff --git a/rust/helpers/helpers.c b/rust/helpers/helpers.c
index 83e89f6a68fb..8ddfc8f84e87 100644
--- a/rust/helpers/helpers.c
+++ b/rust/helpers/helpers.c
@@ -9,6 +9,7 @@
 
 #include "atomic.c"
 #include "auxiliary.c"
+#include "barrier.c"
 #include "blk.c"
 #include "bug.c"
 #include "build_assert.c"
diff --git a/rust/kernel/sync.rs b/rust/kernel/sync.rs
index b620027e0641..c7c0e552bafe 100644
--- a/rust/kernel/sync.rs
+++ b/rust/kernel/sync.rs
@@ -11,6 +11,7 @@
 
 mod arc;
 pub mod atomic;
+pub mod barrier;
 mod condvar;
 pub mod lock;
 mod locked_by;
diff --git a/rust/kernel/sync/barrier.rs b/rust/kernel/sync/barrier.rs
new file mode 100644
index 000000000000..8f2d435fcd94
--- /dev/null
+++ b/rust/kernel/sync/barrier.rs
@@ -0,0 +1,61 @@
+// SPDX-License-Identifier: GPL-2.0
+
+//! Memory barriers.
+//!
+//! These primitives have the same semantics as their C counterparts, and the precise definitions
+//! of the semantics can be found at [`LKMM`].
+//!
+//! [`LKMM`]: srctree/tools/memory-model/
+
+/// A compiler barrier.
+///
+/// A barrier that prevents the compiler from reordering memory accesses across the barrier.
+#[inline(always)]
+pub(crate) fn barrier() {
+    // By default, Rust inline asms are treated as being able to access any memory or flags, hence
+    // it suffices as a compiler barrier.
+    //
+    // SAFETY: An empty asm block.
+    unsafe { core::arch::asm!("") };
+}
+
+/// A full memory barrier.
+///
+/// A barrier that prevents compiler and CPU from reordering memory accesses across the barrier.
+#[inline(always)]
+pub fn smp_mb() {
+    if cfg!(CONFIG_SMP) {
+        // SAFETY: `smp_mb()` is safe to call.
+        unsafe { bindings::smp_mb() };
+    } else {
+        barrier();
+    }
+}
+
+/// A write-write memory barrier.
+///
+/// A barrier that prevents compiler and CPU from reordering memory write accesses across the
+/// barrier.
+#[inline(always)]
+pub fn smp_wmb() {
+    if cfg!(CONFIG_SMP) {
+        // SAFETY: `smp_wmb()` is safe to call.
+        unsafe { bindings::smp_wmb() };
+    } else {
+        barrier();
+    }
+}
+
+/// A read-read memory barrier.
+///
+/// A barrier that prevents compiler and CPU from reordering memory read accesses across the
+/// barrier.
+#[inline(always)]
+pub fn smp_rmb() {
+    if cfg!(CONFIG_SMP) {
+        // SAFETY: `smp_rmb()` is safe to call.
+        unsafe { bindings::smp_rmb() };
+    } else {
+        barrier();
+    }
+}
-- 
2.39.5 (Apple Git-154)

From nobody Tue Oct 7 05:43:05 2025
From: Boqun Feng <boqun.feng@gmail.com>
To: linux-kernel@vger.kernel.org, rust-for-linux@vger.kernel.org, lkmm@lists.linux.dev, linux-arch@vger.kernel.org
Cc: Miguel Ojeda, Alex Gaynor, Boqun Feng, Gary Guo, Björn Roy Baron, Benno Lossin, Andreas Hindborg, Alice Ryhl, Trevor Gross, Danilo Krummrich, Will Deacon, Peter Zijlstra, Mark Rutland, Wedson Almeida Filho, Viresh Kumar, Lyude Paul, Ingo Molnar, Mitchell Levy, "Paul E. McKenney", Greg Kroah-Hartman, Linus Torvalds, Thomas Gleixner, Alan Stern
Subject: [PATCH v7 9/9] rust: sync: atomic: Add Atomic<{usize,isize}>
Date: Sun, 13 Jul 2025 22:36:56 -0700
Message-Id: <20250714053656.66712-10-boqun.feng@gmail.com>
In-Reply-To: <20250714053656.66712-1-boqun.feng@gmail.com>
References: <20250714053656.66712-1-boqun.feng@gmail.com>
MIME-Version: 1.0
Content-Type: text/plain; charset="utf-8"

Add generic atomic support for `usize` and `isize`. Note that instead of
mapping directly to `atomic_long_t`, the representation type
(`AllowAtomic::Repr`) is selected based on CONFIG_64BIT. This reduces the
need to create `atomic_long_*` helpers, which can reduce kernel binary
size when inline helpers are not available.
To do so, an internal type `isize_atomic_repr` is defined: it is `i32` on
32-bit kernels and `i64` on 64-bit kernels.

Reviewed-by: Alice Ryhl
Reviewed-by: Andreas Hindborg
Reviewed-by: Benno Lossin
Signed-off-by: Boqun Feng <boqun.feng@gmail.com>
---
 rust/kernel/sync/atomic.rs | 50 +++++++++++++++++++++++++++++++++++---
 1 file changed, 46 insertions(+), 4 deletions(-)

diff --git a/rust/kernel/sync/atomic.rs b/rust/kernel/sync/atomic.rs
index eb4a47d7e2f3..3c1bb0c4d396 100644
--- a/rust/kernel/sync/atomic.rs
+++ b/rust/kernel/sync/atomic.rs
@@ -49,6 +49,35 @@ fn rhs_into_delta(rhs: i64) -> i64 {
     }
 }
 
+// Defines an internal type that always maps to the integer type which has the same size and
+// alignment as `isize` and `usize`; `isize` and `usize` are always bi-directionally transmutable
+// to `isize_atomic_repr`, which also always implements `AtomicImpl`.
+#[allow(non_camel_case_types)]
+#[cfg(not(CONFIG_64BIT))]
+type isize_atomic_repr = i32;
+#[allow(non_camel_case_types)]
+#[cfg(CONFIG_64BIT)]
+type isize_atomic_repr = i64;
+
+// Ensure size and alignment requirements are checked.
+crate::static_assert!(core::mem::size_of::<isize>() == core::mem::size_of::<isize_atomic_repr>());
+crate::static_assert!(core::mem::align_of::<isize>() == core::mem::align_of::<isize_atomic_repr>());
+crate::static_assert!(core::mem::size_of::<usize>() == core::mem::size_of::<isize_atomic_repr>());
+crate::static_assert!(core::mem::align_of::<usize>() == core::mem::align_of::<isize_atomic_repr>());
+
+// SAFETY: `isize` has the same size and alignment as `isize_atomic_repr`, and is round-trip
+// transmutable to `isize_atomic_repr`.
+unsafe impl generic::AllowAtomic for isize {
+    type Repr = isize_atomic_repr;
+}
+
+// SAFETY: The wrapping add result of two `isize_atomic_repr`s is a valid `isize`.
+unsafe impl generic::AllowAtomicAdd for isize {
+    fn rhs_into_delta(rhs: isize) -> isize_atomic_repr {
+        rhs as isize_atomic_repr
+    }
+}
+
 // SAFETY: `u32` and `i32` has the same size and alignment, and `u32` is round-trip transmutable to
 // `i32`.
 unsafe impl generic::AllowAtomic for u32 {
@@ -75,6 +104,19 @@ fn rhs_into_delta(rhs: u64) -> i64 {
     }
 }
 
+// SAFETY: `usize` has the same size and alignment with `isize_atomic_repr`, and is round-trip
+// transmutable to `isize_atomic_repr`.
+unsafe impl generic::AllowAtomic for usize {
+    type Repr = isize_atomic_repr;
+}
+
+// SAFETY: The wrapping add result of two `isize_atomic_repr`s is a valid `usize`.
+unsafe impl generic::AllowAtomicAdd for usize {
+    fn rhs_into_delta(rhs: usize) -> isize_atomic_repr {
+        rhs as isize_atomic_repr
+    }
+}
+
 use crate::macros::kunit_tests;
 
 #[kunit_tests(rust_atomics)]
@@ -94,7 +136,7 @@ macro_rules! for_each_type {
 
     #[test]
     fn atomic_basic_tests() {
-        for_each_type!(42 in [i32, i64, u32, u64] |v| {
+        for_each_type!(42 in [i32, i64, u32, u64, isize, usize] |v| {
             let x = Atomic::new(v);
 
             assert_eq!(v, x.load(Relaxed));
@@ -103,7 +145,7 @@ fn atomic_basic_tests() {
 
     #[test]
     fn atomic_xchg_tests() {
-        for_each_type!(42 in [i32, i64, u32, u64] |v| {
+        for_each_type!(42 in [i32, i64, u32, u64, isize, usize] |v| {
             let x = Atomic::new(v);
 
             let old = v;
@@ -116,7 +158,7 @@ fn atomic_xchg_tests() {
 
     #[test]
     fn atomic_cmpxchg_tests() {
-        for_each_type!(42 in [i32, i64, u32, u64] |v| {
+        for_each_type!(42 in [i32, i64, u32, u64, isize, usize] |v| {
             let x = Atomic::new(v);
 
             let old = v;
@@ -131,7 +173,7 @@ fn atomic_cmpxchg_tests() {
 
     #[test]
     fn atomic_arithmetic_tests() {
-        for_each_type!(42 in [i32, i64, u32, u64] |v| {
+        for_each_type!(42 in [i32, i64, u32, u64, isize, usize] |v| {
             let x = Atomic::new(v);
 
             assert_eq!(v, x.fetch_add(12, Full));
-- 
2.39.5 (Apple Git-154)
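[Not part of the patch: for readers outside the kernel tree, the cfg-selected
representation pattern the patch uses can be sketched in plain Rust, with
`target_pointer_width` standing in for CONFIG_64BIT; the name `IsizeRepr` and
the `const _` asserts are illustrative stand-ins for the patch's
`isize_atomic_repr` and `crate::static_assert!`.]

```rust
// Sketch of the pattern: pick a fixed-width integer representation for
// `isize`/`usize` at compile time, then verify the layout assumptions
// that make the transmute-based atomic mapping sound.

#[cfg(target_pointer_width = "32")]
type IsizeRepr = i32; // 32-bit targets: same width as atomic_t

#[cfg(target_pointer_width = "64")]
type IsizeRepr = i64; // 64-bit targets: same width as atomic64_t

// Compile-time layout checks, mirroring the patch's `static_assert!`s.
const _: () = assert!(core::mem::size_of::<isize>() == core::mem::size_of::<IsizeRepr>());
const _: () = assert!(core::mem::align_of::<isize>() == core::mem::align_of::<IsizeRepr>());
const _: () = assert!(core::mem::size_of::<usize>() == core::mem::size_of::<IsizeRepr>());
const _: () = assert!(core::mem::align_of::<usize>() == core::mem::align_of::<IsizeRepr>());

fn main() {
    // Round-trip "transmutability" in practice: isize -> repr -> isize
    // preserves the value, which is what `AllowAtomic` relies on.
    let v: isize = -42;
    let repr = v as IsizeRepr;
    assert_eq!(repr as isize, v);

    // The repr width tracks the pointer width of the target.
    println!("IsizeRepr is {} bytes", core::mem::size_of::<IsizeRepr>());
}
```

On a 64-bit target this selects `i64`; on a 32-bit target, `i32`. Because the
checks are `const`, a target where the assumption fails cannot compile, which
is the same guarantee the patch gets from `static_assert!`.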