From nobody Mon Oct 6 15:15:53 2025
From: Boqun Feng <boqun.feng@gmail.com>
To: linux-kernel@vger.kernel.org, rust-for-linux@vger.kernel.org,
	lkmm@lists.linux.dev, linux-arch@vger.kernel.org
Cc: Miguel Ojeda, Alex Gaynor, Boqun Feng, Gary Guo, Björn Roy Baron,
	Benno Lossin, Andreas Hindborg, Alice Ryhl, Trevor Gross,
	Danilo Krummrich, Will Deacon, Peter Zijlstra, Mark Rutland,
	Wedson Almeida Filho, Viresh Kumar, Lyude Paul, Ingo Molnar,
	Mitchell Levy, Paul E.
McKenney" , "Greg Kroah-Hartman" , "Linus Torvalds" , "Thomas Gleixner" , "Alan Stern" Subject: [PATCH v8 1/9] rust: Introduce atomic API helpers Date: Fri, 18 Jul 2025 20:08:19 -0700 Message-Id: <20250719030827.61357-2-boqun.feng@gmail.com> X-Mailer: git-send-email 2.39.5 (Apple Git-154) In-Reply-To: <20250719030827.61357-1-boqun.feng@gmail.com> References: <20250719030827.61357-1-boqun.feng@gmail.com> Precedence: bulk X-Mailing-List: linux-kernel@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 Content-Transfer-Encoding: quoted-printable Content-Type: text/plain; charset="utf-8" In order to support LKMM atomics in Rust, add rust_helper_* for atomic APIs. These helpers ensure the implementation of LKMM atomics in Rust is the same as in C. This could save the maintenance burden of having two similar atomic implementations in asm. Originally-by: Mark Rutland Reviewed-by: Alice Ryhl Signed-off-by: Boqun Feng --- rust/helpers/atomic.c | 1040 +++++++++++++++++++++ rust/helpers/helpers.c | 1 + scripts/atomic/gen-atomics.sh | 1 + scripts/atomic/gen-rust-atomic-helpers.sh | 67 ++ 4 files changed, 1109 insertions(+) create mode 100644 rust/helpers/atomic.c create mode 100755 scripts/atomic/gen-rust-atomic-helpers.sh diff --git a/rust/helpers/atomic.c b/rust/helpers/atomic.c new file mode 100644 index 000000000000..cf06b7ef9a1c --- /dev/null +++ b/rust/helpers/atomic.c @@ -0,0 +1,1040 @@ +// SPDX-License-Identifier: GPL-2.0 + +// Generated by scripts/atomic/gen-rust-atomic-helpers.sh +// DO NOT MODIFY THIS FILE DIRECTLY + +/* + * This file provides helpers for the various atomic functions for Rust. + */ +#ifndef _RUST_ATOMIC_API_H +#define _RUST_ATOMIC_API_H + +#include + +// TODO: Remove this after INLINE_HELPERS support is added. 
+#ifndef __rust_helper +#define __rust_helper +#endif + +__rust_helper int +rust_helper_atomic_read(const atomic_t *v) +{ + return atomic_read(v); +} + +__rust_helper int +rust_helper_atomic_read_acquire(const atomic_t *v) +{ + return atomic_read_acquire(v); +} + +__rust_helper void +rust_helper_atomic_set(atomic_t *v, int i) +{ + atomic_set(v, i); +} + +__rust_helper void +rust_helper_atomic_set_release(atomic_t *v, int i) +{ + atomic_set_release(v, i); +} + +__rust_helper void +rust_helper_atomic_add(int i, atomic_t *v) +{ + atomic_add(i, v); +} + +__rust_helper int +rust_helper_atomic_add_return(int i, atomic_t *v) +{ + return atomic_add_return(i, v); +} + +__rust_helper int +rust_helper_atomic_add_return_acquire(int i, atomic_t *v) +{ + return atomic_add_return_acquire(i, v); +} + +__rust_helper int +rust_helper_atomic_add_return_release(int i, atomic_t *v) +{ + return atomic_add_return_release(i, v); +} + +__rust_helper int +rust_helper_atomic_add_return_relaxed(int i, atomic_t *v) +{ + return atomic_add_return_relaxed(i, v); +} + +__rust_helper int +rust_helper_atomic_fetch_add(int i, atomic_t *v) +{ + return atomic_fetch_add(i, v); +} + +__rust_helper int +rust_helper_atomic_fetch_add_acquire(int i, atomic_t *v) +{ + return atomic_fetch_add_acquire(i, v); +} + +__rust_helper int +rust_helper_atomic_fetch_add_release(int i, atomic_t *v) +{ + return atomic_fetch_add_release(i, v); +} + +__rust_helper int +rust_helper_atomic_fetch_add_relaxed(int i, atomic_t *v) +{ + return atomic_fetch_add_relaxed(i, v); +} + +__rust_helper void +rust_helper_atomic_sub(int i, atomic_t *v) +{ + atomic_sub(i, v); +} + +__rust_helper int +rust_helper_atomic_sub_return(int i, atomic_t *v) +{ + return atomic_sub_return(i, v); +} + +__rust_helper int +rust_helper_atomic_sub_return_acquire(int i, atomic_t *v) +{ + return atomic_sub_return_acquire(i, v); +} + +__rust_helper int +rust_helper_atomic_sub_return_release(int i, atomic_t *v) +{ + return atomic_sub_return_release(i, v); +} + +__rust_helper int +rust_helper_atomic_sub_return_relaxed(int i, atomic_t *v) +{ + return atomic_sub_return_relaxed(i, v); +} + +__rust_helper int +rust_helper_atomic_fetch_sub(int i, atomic_t *v) +{ + return atomic_fetch_sub(i, v); +} + +__rust_helper int +rust_helper_atomic_fetch_sub_acquire(int i, atomic_t *v) +{ + return atomic_fetch_sub_acquire(i, v); +} + +__rust_helper int +rust_helper_atomic_fetch_sub_release(int i, atomic_t *v) +{ + return atomic_fetch_sub_release(i, v); +} + +__rust_helper int +rust_helper_atomic_fetch_sub_relaxed(int i, atomic_t *v) +{ + return atomic_fetch_sub_relaxed(i, v); +} + +__rust_helper void +rust_helper_atomic_inc(atomic_t *v) +{ + atomic_inc(v); +} + +__rust_helper int +rust_helper_atomic_inc_return(atomic_t *v) +{ + return atomic_inc_return(v); +} + +__rust_helper int +rust_helper_atomic_inc_return_acquire(atomic_t *v) +{ + return atomic_inc_return_acquire(v); +} + +__rust_helper int +rust_helper_atomic_inc_return_release(atomic_t *v) +{ + return atomic_inc_return_release(v); +} + +__rust_helper int +rust_helper_atomic_inc_return_relaxed(atomic_t *v) +{ + return atomic_inc_return_relaxed(v); +} + +__rust_helper int +rust_helper_atomic_fetch_inc(atomic_t *v) +{ + return atomic_fetch_inc(v); +} + +__rust_helper int +rust_helper_atomic_fetch_inc_acquire(atomic_t *v) +{ + return atomic_fetch_inc_acquire(v); +} + +__rust_helper int +rust_helper_atomic_fetch_inc_release(atomic_t *v) +{ + return atomic_fetch_inc_release(v); +} + +__rust_helper int +rust_helper_atomic_fetch_inc_relaxed(atomic_t 
*v) +{ + return atomic_fetch_inc_relaxed(v); +} + +__rust_helper void +rust_helper_atomic_dec(atomic_t *v) +{ + atomic_dec(v); +} + +__rust_helper int +rust_helper_atomic_dec_return(atomic_t *v) +{ + return atomic_dec_return(v); +} + +__rust_helper int +rust_helper_atomic_dec_return_acquire(atomic_t *v) +{ + return atomic_dec_return_acquire(v); +} + +__rust_helper int +rust_helper_atomic_dec_return_release(atomic_t *v) +{ + return atomic_dec_return_release(v); +} + +__rust_helper int +rust_helper_atomic_dec_return_relaxed(atomic_t *v) +{ + return atomic_dec_return_relaxed(v); +} + +__rust_helper int +rust_helper_atomic_fetch_dec(atomic_t *v) +{ + return atomic_fetch_dec(v); +} + +__rust_helper int +rust_helper_atomic_fetch_dec_acquire(atomic_t *v) +{ + return atomic_fetch_dec_acquire(v); +} + +__rust_helper int +rust_helper_atomic_fetch_dec_release(atomic_t *v) +{ + return atomic_fetch_dec_release(v); +} + +__rust_helper int +rust_helper_atomic_fetch_dec_relaxed(atomic_t *v) +{ + return atomic_fetch_dec_relaxed(v); +} + +__rust_helper void +rust_helper_atomic_and(int i, atomic_t *v) +{ + atomic_and(i, v); +} + +__rust_helper int +rust_helper_atomic_fetch_and(int i, atomic_t *v) +{ + return atomic_fetch_and(i, v); +} + +__rust_helper int +rust_helper_atomic_fetch_and_acquire(int i, atomic_t *v) +{ + return atomic_fetch_and_acquire(i, v); +} + +__rust_helper int +rust_helper_atomic_fetch_and_release(int i, atomic_t *v) +{ + return atomic_fetch_and_release(i, v); +} + +__rust_helper int +rust_helper_atomic_fetch_and_relaxed(int i, atomic_t *v) +{ + return atomic_fetch_and_relaxed(i, v); +} + +__rust_helper void +rust_helper_atomic_andnot(int i, atomic_t *v) +{ + atomic_andnot(i, v); +} + +__rust_helper int +rust_helper_atomic_fetch_andnot(int i, atomic_t *v) +{ + return atomic_fetch_andnot(i, v); +} + +__rust_helper int +rust_helper_atomic_fetch_andnot_acquire(int i, atomic_t *v) +{ + return atomic_fetch_andnot_acquire(i, v); +} + +__rust_helper int +rust_helper_atomic_fetch_andnot_release(int i, atomic_t *v) +{ + return atomic_fetch_andnot_release(i, v); +} + +__rust_helper int +rust_helper_atomic_fetch_andnot_relaxed(int i, atomic_t *v) +{ + return atomic_fetch_andnot_relaxed(i, v); +} + +__rust_helper void +rust_helper_atomic_or(int i, atomic_t *v) +{ + atomic_or(i, v); +} + +__rust_helper int +rust_helper_atomic_fetch_or(int i, atomic_t *v) +{ + return atomic_fetch_or(i, v); +} + +__rust_helper int +rust_helper_atomic_fetch_or_acquire(int i, atomic_t *v) +{ + return atomic_fetch_or_acquire(i, v); +} + +__rust_helper int +rust_helper_atomic_fetch_or_release(int i, atomic_t *v) +{ + return atomic_fetch_or_release(i, v); +} + +__rust_helper int +rust_helper_atomic_fetch_or_relaxed(int i, atomic_t *v) +{ + return atomic_fetch_or_relaxed(i, v); +} + +__rust_helper void +rust_helper_atomic_xor(int i, atomic_t *v) +{ + atomic_xor(i, v); +} + +__rust_helper int +rust_helper_atomic_fetch_xor(int i, atomic_t *v) +{ + return atomic_fetch_xor(i, v); +} + +__rust_helper int +rust_helper_atomic_fetch_xor_acquire(int i, atomic_t *v) +{ + return atomic_fetch_xor_acquire(i, v); +} + +__rust_helper int +rust_helper_atomic_fetch_xor_release(int i, atomic_t *v) +{ + return atomic_fetch_xor_release(i, v); +} + +__rust_helper int +rust_helper_atomic_fetch_xor_relaxed(int i, atomic_t *v) +{ + return atomic_fetch_xor_relaxed(i, v); +} + +__rust_helper int +rust_helper_atomic_xchg(atomic_t *v, int new) +{ + return atomic_xchg(v, new); +} + +__rust_helper int +rust_helper_atomic_xchg_acquire(atomic_t *v, int new) 
+{ + return atomic_xchg_acquire(v, new); +} + +__rust_helper int +rust_helper_atomic_xchg_release(atomic_t *v, int new) +{ + return atomic_xchg_release(v, new); +} + +__rust_helper int +rust_helper_atomic_xchg_relaxed(atomic_t *v, int new) +{ + return atomic_xchg_relaxed(v, new); +} + +__rust_helper int +rust_helper_atomic_cmpxchg(atomic_t *v, int old, int new) +{ + return atomic_cmpxchg(v, old, new); +} + +__rust_helper int +rust_helper_atomic_cmpxchg_acquire(atomic_t *v, int old, int new) +{ + return atomic_cmpxchg_acquire(v, old, new); +} + +__rust_helper int +rust_helper_atomic_cmpxchg_release(atomic_t *v, int old, int new) +{ + return atomic_cmpxchg_release(v, old, new); +} + +__rust_helper int +rust_helper_atomic_cmpxchg_relaxed(atomic_t *v, int old, int new) +{ + return atomic_cmpxchg_relaxed(v, old, new); +} + +__rust_helper bool +rust_helper_atomic_try_cmpxchg(atomic_t *v, int *old, int new) +{ + return atomic_try_cmpxchg(v, old, new); +} + +__rust_helper bool +rust_helper_atomic_try_cmpxchg_acquire(atomic_t *v, int *old, int new) +{ + return atomic_try_cmpxchg_acquire(v, old, new); +} + +__rust_helper bool +rust_helper_atomic_try_cmpxchg_release(atomic_t *v, int *old, int new) +{ + return atomic_try_cmpxchg_release(v, old, new); +} + +__rust_helper bool +rust_helper_atomic_try_cmpxchg_relaxed(atomic_t *v, int *old, int new) +{ + return atomic_try_cmpxchg_relaxed(v, old, new); +} + +__rust_helper bool +rust_helper_atomic_sub_and_test(int i, atomic_t *v) +{ + return atomic_sub_and_test(i, v); +} + +__rust_helper bool +rust_helper_atomic_dec_and_test(atomic_t *v) +{ + return atomic_dec_and_test(v); +} + +__rust_helper bool +rust_helper_atomic_inc_and_test(atomic_t *v) +{ + return atomic_inc_and_test(v); +} + +__rust_helper bool +rust_helper_atomic_add_negative(int i, atomic_t *v) +{ + return atomic_add_negative(i, v); +} + +__rust_helper bool +rust_helper_atomic_add_negative_acquire(int i, atomic_t *v) +{ + return atomic_add_negative_acquire(i, v); +} + +__rust_helper bool +rust_helper_atomic_add_negative_release(int i, atomic_t *v) +{ + return atomic_add_negative_release(i, v); +} + +__rust_helper bool +rust_helper_atomic_add_negative_relaxed(int i, atomic_t *v) +{ + return atomic_add_negative_relaxed(i, v); +} + +__rust_helper int +rust_helper_atomic_fetch_add_unless(atomic_t *v, int a, int u) +{ + return atomic_fetch_add_unless(v, a, u); +} + +__rust_helper bool +rust_helper_atomic_add_unless(atomic_t *v, int a, int u) +{ + return atomic_add_unless(v, a, u); +} + +__rust_helper bool +rust_helper_atomic_inc_not_zero(atomic_t *v) +{ + return atomic_inc_not_zero(v); +} + +__rust_helper bool +rust_helper_atomic_inc_unless_negative(atomic_t *v) +{ + return atomic_inc_unless_negative(v); +} + +__rust_helper bool +rust_helper_atomic_dec_unless_positive(atomic_t *v) +{ + return atomic_dec_unless_positive(v); +} + +__rust_helper int +rust_helper_atomic_dec_if_positive(atomic_t *v) +{ + return atomic_dec_if_positive(v); +} + +__rust_helper s64 +rust_helper_atomic64_read(const atomic64_t *v) +{ + return atomic64_read(v); +} + +__rust_helper s64 +rust_helper_atomic64_read_acquire(const atomic64_t *v) +{ + return atomic64_read_acquire(v); +} + +__rust_helper void +rust_helper_atomic64_set(atomic64_t *v, s64 i) +{ + atomic64_set(v, i); +} + +__rust_helper void +rust_helper_atomic64_set_release(atomic64_t *v, s64 i) +{ + atomic64_set_release(v, i); +} + +__rust_helper void +rust_helper_atomic64_add(s64 i, atomic64_t *v) +{ + atomic64_add(i, v); +} + +__rust_helper s64 
+rust_helper_atomic64_add_return(s64 i, atomic64_t *v) +{ + return atomic64_add_return(i, v); +} + +__rust_helper s64 +rust_helper_atomic64_add_return_acquire(s64 i, atomic64_t *v) +{ + return atomic64_add_return_acquire(i, v); +} + +__rust_helper s64 +rust_helper_atomic64_add_return_release(s64 i, atomic64_t *v) +{ + return atomic64_add_return_release(i, v); +} + +__rust_helper s64 +rust_helper_atomic64_add_return_relaxed(s64 i, atomic64_t *v) +{ + return atomic64_add_return_relaxed(i, v); +} + +__rust_helper s64 +rust_helper_atomic64_fetch_add(s64 i, atomic64_t *v) +{ + return atomic64_fetch_add(i, v); +} + +__rust_helper s64 +rust_helper_atomic64_fetch_add_acquire(s64 i, atomic64_t *v) +{ + return atomic64_fetch_add_acquire(i, v); +} + +__rust_helper s64 +rust_helper_atomic64_fetch_add_release(s64 i, atomic64_t *v) +{ + return atomic64_fetch_add_release(i, v); +} + +__rust_helper s64 +rust_helper_atomic64_fetch_add_relaxed(s64 i, atomic64_t *v) +{ + return atomic64_fetch_add_relaxed(i, v); +} + +__rust_helper void +rust_helper_atomic64_sub(s64 i, atomic64_t *v) +{ + atomic64_sub(i, v); +} + +__rust_helper s64 +rust_helper_atomic64_sub_return(s64 i, atomic64_t *v) +{ + return atomic64_sub_return(i, v); +} + +__rust_helper s64 +rust_helper_atomic64_sub_return_acquire(s64 i, atomic64_t *v) +{ + return atomic64_sub_return_acquire(i, v); +} + +__rust_helper s64 +rust_helper_atomic64_sub_return_release(s64 i, atomic64_t *v) +{ + return atomic64_sub_return_release(i, v); +} + +__rust_helper s64 +rust_helper_atomic64_sub_return_relaxed(s64 i, atomic64_t *v) +{ + return atomic64_sub_return_relaxed(i, v); +} + +__rust_helper s64 +rust_helper_atomic64_fetch_sub(s64 i, atomic64_t *v) +{ + return atomic64_fetch_sub(i, v); +} + +__rust_helper s64 +rust_helper_atomic64_fetch_sub_acquire(s64 i, atomic64_t *v) +{ + return atomic64_fetch_sub_acquire(i, v); +} + +__rust_helper s64 +rust_helper_atomic64_fetch_sub_release(s64 i, atomic64_t *v) +{ + return atomic64_fetch_sub_release(i, v); +} + +__rust_helper s64 +rust_helper_atomic64_fetch_sub_relaxed(s64 i, atomic64_t *v) +{ + return atomic64_fetch_sub_relaxed(i, v); +} + +__rust_helper void +rust_helper_atomic64_inc(atomic64_t *v) +{ + atomic64_inc(v); +} + +__rust_helper s64 +rust_helper_atomic64_inc_return(atomic64_t *v) +{ + return atomic64_inc_return(v); +} + +__rust_helper s64 +rust_helper_atomic64_inc_return_acquire(atomic64_t *v) +{ + return atomic64_inc_return_acquire(v); +} + +__rust_helper s64 +rust_helper_atomic64_inc_return_release(atomic64_t *v) +{ + return atomic64_inc_return_release(v); +} + +__rust_helper s64 +rust_helper_atomic64_inc_return_relaxed(atomic64_t *v) +{ + return atomic64_inc_return_relaxed(v); +} + +__rust_helper s64 +rust_helper_atomic64_fetch_inc(atomic64_t *v) +{ + return atomic64_fetch_inc(v); +} + +__rust_helper s64 +rust_helper_atomic64_fetch_inc_acquire(atomic64_t *v) +{ + return atomic64_fetch_inc_acquire(v); +} + +__rust_helper s64 +rust_helper_atomic64_fetch_inc_release(atomic64_t *v) +{ + return atomic64_fetch_inc_release(v); +} + +__rust_helper s64 +rust_helper_atomic64_fetch_inc_relaxed(atomic64_t *v) +{ + return atomic64_fetch_inc_relaxed(v); +} + +__rust_helper void +rust_helper_atomic64_dec(atomic64_t *v) +{ + atomic64_dec(v); +} + +__rust_helper s64 +rust_helper_atomic64_dec_return(atomic64_t *v) +{ + return atomic64_dec_return(v); +} + +__rust_helper s64 +rust_helper_atomic64_dec_return_acquire(atomic64_t *v) +{ + return atomic64_dec_return_acquire(v); +} + +__rust_helper s64 
+rust_helper_atomic64_dec_return_release(atomic64_t *v) +{ + return atomic64_dec_return_release(v); +} + +__rust_helper s64 +rust_helper_atomic64_dec_return_relaxed(atomic64_t *v) +{ + return atomic64_dec_return_relaxed(v); +} + +__rust_helper s64 +rust_helper_atomic64_fetch_dec(atomic64_t *v) +{ + return atomic64_fetch_dec(v); +} + +__rust_helper s64 +rust_helper_atomic64_fetch_dec_acquire(atomic64_t *v) +{ + return atomic64_fetch_dec_acquire(v); +} + +__rust_helper s64 +rust_helper_atomic64_fetch_dec_release(atomic64_t *v) +{ + return atomic64_fetch_dec_release(v); +} + +__rust_helper s64 +rust_helper_atomic64_fetch_dec_relaxed(atomic64_t *v) +{ + return atomic64_fetch_dec_relaxed(v); +} + +__rust_helper void +rust_helper_atomic64_and(s64 i, atomic64_t *v) +{ + atomic64_and(i, v); +} + +__rust_helper s64 +rust_helper_atomic64_fetch_and(s64 i, atomic64_t *v) +{ + return atomic64_fetch_and(i, v); +} + +__rust_helper s64 +rust_helper_atomic64_fetch_and_acquire(s64 i, atomic64_t *v) +{ + return atomic64_fetch_and_acquire(i, v); +} + +__rust_helper s64 +rust_helper_atomic64_fetch_and_release(s64 i, atomic64_t *v) +{ + return atomic64_fetch_and_release(i, v); +} + +__rust_helper s64 +rust_helper_atomic64_fetch_and_relaxed(s64 i, atomic64_t *v) +{ + return atomic64_fetch_and_relaxed(i, v); +} + +__rust_helper void +rust_helper_atomic64_andnot(s64 i, atomic64_t *v) +{ + atomic64_andnot(i, v); +} + +__rust_helper s64 +rust_helper_atomic64_fetch_andnot(s64 i, atomic64_t *v) +{ + return atomic64_fetch_andnot(i, v); +} + +__rust_helper s64 +rust_helper_atomic64_fetch_andnot_acquire(s64 i, atomic64_t *v) +{ + return atomic64_fetch_andnot_acquire(i, v); +} + +__rust_helper s64 +rust_helper_atomic64_fetch_andnot_release(s64 i, atomic64_t *v) +{ + return atomic64_fetch_andnot_release(i, v); +} + +__rust_helper s64 +rust_helper_atomic64_fetch_andnot_relaxed(s64 i, atomic64_t *v) +{ + return atomic64_fetch_andnot_relaxed(i, v); +} + +__rust_helper void +rust_helper_atomic64_or(s64 i, atomic64_t *v) +{ + atomic64_or(i, v); +} + +__rust_helper s64 +rust_helper_atomic64_fetch_or(s64 i, atomic64_t *v) +{ + return atomic64_fetch_or(i, v); +} + +__rust_helper s64 +rust_helper_atomic64_fetch_or_acquire(s64 i, atomic64_t *v) +{ + return atomic64_fetch_or_acquire(i, v); +} + +__rust_helper s64 +rust_helper_atomic64_fetch_or_release(s64 i, atomic64_t *v) +{ + return atomic64_fetch_or_release(i, v); +} + +__rust_helper s64 +rust_helper_atomic64_fetch_or_relaxed(s64 i, atomic64_t *v) +{ + return atomic64_fetch_or_relaxed(i, v); +} + +__rust_helper void +rust_helper_atomic64_xor(s64 i, atomic64_t *v) +{ + atomic64_xor(i, v); +} + +__rust_helper s64 +rust_helper_atomic64_fetch_xor(s64 i, atomic64_t *v) +{ + return atomic64_fetch_xor(i, v); +} + +__rust_helper s64 +rust_helper_atomic64_fetch_xor_acquire(s64 i, atomic64_t *v) +{ + return atomic64_fetch_xor_acquire(i, v); +} + +__rust_helper s64 +rust_helper_atomic64_fetch_xor_release(s64 i, atomic64_t *v) +{ + return atomic64_fetch_xor_release(i, v); +} + +__rust_helper s64 +rust_helper_atomic64_fetch_xor_relaxed(s64 i, atomic64_t *v) +{ + return atomic64_fetch_xor_relaxed(i, v); +} + +__rust_helper s64 +rust_helper_atomic64_xchg(atomic64_t *v, s64 new) +{ + return atomic64_xchg(v, new); +} + +__rust_helper s64 +rust_helper_atomic64_xchg_acquire(atomic64_t *v, s64 new) +{ + return atomic64_xchg_acquire(v, new); +} + +__rust_helper s64 +rust_helper_atomic64_xchg_release(atomic64_t *v, s64 new) +{ + return atomic64_xchg_release(v, new); +} + +__rust_helper s64 
+rust_helper_atomic64_xchg_relaxed(atomic64_t *v, s64 new) +{ + return atomic64_xchg_relaxed(v, new); +} + +__rust_helper s64 +rust_helper_atomic64_cmpxchg(atomic64_t *v, s64 old, s64 new) +{ + return atomic64_cmpxchg(v, old, new); +} + +__rust_helper s64 +rust_helper_atomic64_cmpxchg_acquire(atomic64_t *v, s64 old, s64 new) +{ + return atomic64_cmpxchg_acquire(v, old, new); +} + +__rust_helper s64 +rust_helper_atomic64_cmpxchg_release(atomic64_t *v, s64 old, s64 new) +{ + return atomic64_cmpxchg_release(v, old, new); +} + +__rust_helper s64 +rust_helper_atomic64_cmpxchg_relaxed(atomic64_t *v, s64 old, s64 new) +{ + return atomic64_cmpxchg_relaxed(v, old, new); +} + +__rust_helper bool +rust_helper_atomic64_try_cmpxchg(atomic64_t *v, s64 *old, s64 new) +{ + return atomic64_try_cmpxchg(v, old, new); +} + +__rust_helper bool +rust_helper_atomic64_try_cmpxchg_acquire(atomic64_t *v, s64 *old, s64 new) +{ + return atomic64_try_cmpxchg_acquire(v, old, new); +} + +__rust_helper bool +rust_helper_atomic64_try_cmpxchg_release(atomic64_t *v, s64 *old, s64 new) +{ + return atomic64_try_cmpxchg_release(v, old, new); +} + +__rust_helper bool +rust_helper_atomic64_try_cmpxchg_relaxed(atomic64_t *v, s64 *old, s64 new) +{ + return atomic64_try_cmpxchg_relaxed(v, old, new); +} + +__rust_helper bool +rust_helper_atomic64_sub_and_test(s64 i, atomic64_t *v) +{ + return atomic64_sub_and_test(i, v); +} + +__rust_helper bool +rust_helper_atomic64_dec_and_test(atomic64_t *v) +{ + return atomic64_dec_and_test(v); +} + +__rust_helper bool +rust_helper_atomic64_inc_and_test(atomic64_t *v) +{ + return atomic64_inc_and_test(v); +} + +__rust_helper bool +rust_helper_atomic64_add_negative(s64 i, atomic64_t *v) +{ + return atomic64_add_negative(i, v); +} + +__rust_helper bool +rust_helper_atomic64_add_negative_acquire(s64 i, atomic64_t *v) +{ + return atomic64_add_negative_acquire(i, v); +} + +__rust_helper bool +rust_helper_atomic64_add_negative_release(s64 i, atomic64_t *v) +{ + return atomic64_add_negative_release(i, v); +} + +__rust_helper bool +rust_helper_atomic64_add_negative_relaxed(s64 i, atomic64_t *v) +{ + return atomic64_add_negative_relaxed(i, v); +} + +__rust_helper s64 +rust_helper_atomic64_fetch_add_unless(atomic64_t *v, s64 a, s64 u) +{ + return atomic64_fetch_add_unless(v, a, u); +} + +__rust_helper bool +rust_helper_atomic64_add_unless(atomic64_t *v, s64 a, s64 u) +{ + return atomic64_add_unless(v, a, u); +} + +__rust_helper bool +rust_helper_atomic64_inc_not_zero(atomic64_t *v) +{ + return atomic64_inc_not_zero(v); +} + +__rust_helper bool +rust_helper_atomic64_inc_unless_negative(atomic64_t *v) +{ + return atomic64_inc_unless_negative(v); +} + +__rust_helper bool +rust_helper_atomic64_dec_unless_positive(atomic64_t *v) +{ + return atomic64_dec_unless_positive(v); +} + +__rust_helper s64 +rust_helper_atomic64_dec_if_positive(atomic64_t *v) +{ + return atomic64_dec_if_positive(v); +} + +#endif /* _RUST_ATOMIC_API_H */ +// 615a0e0c98b5973a47fe4fa65e92935051ca00ed diff --git a/rust/helpers/helpers.c b/rust/helpers/helpers.c index 16fa9bca5949..83e89f6a68fb 100644 --- a/rust/helpers/helpers.c +++ b/rust/helpers/helpers.c @@ -7,6 +7,7 @@ * Sorted alphabetically. 
 */
 
+#include "atomic.c"
 #include "auxiliary.c"
 #include "blk.c"
 #include "bug.c"

diff --git a/scripts/atomic/gen-atomics.sh b/scripts/atomic/gen-atomics.sh
index 5b98a8307693..02508d0d6fe4 100755
--- a/scripts/atomic/gen-atomics.sh
+++ b/scripts/atomic/gen-atomics.sh
@@ -11,6 +11,7 @@ cat <<EOF |
 gen-atomic-instrumented.sh      linux/atomic/atomic-instrumented.h
 gen-atomic-long.sh              linux/atomic/atomic-long.h
 gen-atomic-fallback.sh          linux/atomic/atomic-arch-fallback.h
+gen-rust-atomic-helpers.sh      ../rust/helpers/atomic.c
 EOF
 while read script header args; do
 	/bin/sh ${ATOMICDIR}/${script} ${ATOMICTBL} ${args} > ${LINUXDIR}/include/${header}

diff --git a/scripts/atomic/gen-rust-atomic-helpers.sh b/scripts/atomic/gen-rust-atomic-helpers.sh
new file mode 100755
index 000000000000..45b1e100ed7c
--- /dev/null
+++ b/scripts/atomic/gen-rust-atomic-helpers.sh
@@ -0,0 +1,67 @@
+#!/bin/sh
+# SPDX-License-Identifier: GPL-2.0
+
+ATOMICDIR=$(dirname $0)
+
+. ${ATOMICDIR}/atomic-tbl.sh
+
+#gen_proto_order_variant(meta, pfx, name, sfx, order, atomic, int, arg...)
+gen_proto_order_variant()
+{
+	local meta="$1"; shift
+	local pfx="$1"; shift
+	local name="$1"; shift
+	local sfx="$1"; shift
+	local order="$1"; shift
+	local atomic="$1"; shift
+	local int="$1"; shift
+
+	local atomicname="${atomic}_${pfx}${name}${sfx}${order}"
+
+	local ret="$(gen_ret_type "${meta}" "${int}")"
+	local params="$(gen_params "${int}" "${atomic}" "$@")"
+	local args="$(gen_args "$@")"
+	local retstmt="$(gen_ret_stmt "${meta}")"
+
+cat <<EOF
+__rust_helper ${ret}
+rust_helper_${atomicname}(${params})
+{
+	${retstmt}${atomicname}(${args});
+}
+
+EOF
+}
+
+cat << EOF
+// SPDX-License-Identifier: GPL-2.0
+
+// Generated by $0
+// DO NOT MODIFY THIS FILE DIRECTLY
+
+/*
+ * This file provides helpers for the various atomic functions for Rust.
+ */
+#ifndef _RUST_ATOMIC_API_H
+#define _RUST_ATOMIC_API_H
+
+#include <linux/atomic.h>
+
+// TODO: Remove this after INLINE_HELPERS support is added.
+#ifndef __rust_helper
+#define __rust_helper
+#endif
+
+EOF
+
+grep '^[a-z]' "$1" | while read name meta args; do
+	gen_proto "${meta}" "${name}" "atomic" "int" ${args}
+done
+
+grep '^[a-z]' "$1" | while read name meta args; do
+	gen_proto "${meta}" "${name}" "atomic64" "s64" ${args}
+done
+
+cat <<EOF
+#endif /* _RUST_ATOMIC_API_H */
+EOF
-- 
2.39.5 (Apple Git-154)
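[A note on how these helpers surface on the Rust side: after the bindings
are generated, the rust_helper_ prefix is dropped, so each wrapper ends up
callable as bindings::atomic_*() / bindings::atomic64_*(), which is what
the mapping framework in the next patch builds on. A minimal sketch,
assuming a valid `v: *mut bindings::atomic_t` obtained elsewhere:

	// SAFETY: `v` points to a live atomic_t. The helper forwards to
	// the C atomic_add_return(), a fully-ordered RMW in LKMM.
	let new = unsafe { bindings::atomic_add_return(1, v) };
]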
McKenney" , "Greg Kroah-Hartman" , "Linus Torvalds" , "Thomas Gleixner" , "Alan Stern" Subject: [PATCH v8 2/9] rust: sync: Add basic atomic operation mapping framework Date: Fri, 18 Jul 2025 20:08:20 -0700 Message-Id: <20250719030827.61357-3-boqun.feng@gmail.com> X-Mailer: git-send-email 2.39.5 (Apple Git-154) In-Reply-To: <20250719030827.61357-1-boqun.feng@gmail.com> References: <20250719030827.61357-1-boqun.feng@gmail.com> Precedence: bulk X-Mailing-List: linux-kernel@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 Content-Transfer-Encoding: quoted-printable Content-Type: text/plain; charset="utf-8" Preparation for generic atomic implementation. To unify the implementation of a generic method over `i32` and `i64`, the C side atomic methods need to be grouped so that in a generic method, they can be referred as ::, otherwise their parameters and return value are different between `i32` and `i64`, which would require using `transmute()` to unify the type into a `T`. Introduce `AtomicImpl` to represent a basic type in Rust that has the direct mapping to an atomic implementation from C. Use a sealed trait to restrict `AtomicImpl` to only support `i32` and `i64` for now. Further, different methods are put into different `*Ops` trait groups, and this is for the future when smaller types like `i8`/`i16` are supported but only with a limited set of API (e.g. only set(), load(), xchg() and cmpxchg(), no add() or sub() etc). While the atomic mod is introduced, documentation is also added for memory models and data races. Also bump my role to the maintainer of ATOMIC INFRASTRUCTURE to reflect my responsibility on the Rust atomic mod. Reviewed-by: Alice Ryhl Reviewed-by: Benno Lossin Signed-off-by: Boqun Feng --- MAINTAINERS | 4 +- rust/kernel/sync.rs | 1 + rust/kernel/sync/atomic.rs | 22 +++ rust/kernel/sync/atomic/internal.rs | 265 ++++++++++++++++++++++++++++ 4 files changed, 291 insertions(+), 1 deletion(-) create mode 100644 rust/kernel/sync/atomic.rs create mode 100644 rust/kernel/sync/atomic/internal.rs diff --git a/MAINTAINERS b/MAINTAINERS index 0c1d245bf7b8..5eef524975ca 100644 --- a/MAINTAINERS +++ b/MAINTAINERS @@ -3894,7 +3894,7 @@ F: drivers/input/touchscreen/atmel_mxt_ts.c ATOMIC INFRASTRUCTURE M: Will Deacon M: Peter Zijlstra -R: Boqun Feng +M: Boqun Feng R: Mark Rutland L: linux-kernel@vger.kernel.org S: Maintained @@ -3903,6 +3903,8 @@ F: arch/*/include/asm/atomic*.h F: include/*/atomic*.h F: include/linux/refcount.h F: scripts/atomic/ +F: rust/kernel/sync/atomic.rs +F: rust/kernel/sync/atomic/ =20 ATTO EXPRESSSAS SAS/SATA RAID SCSI DRIVER M: Bradley Grove diff --git a/rust/kernel/sync.rs b/rust/kernel/sync.rs index 36a719015583..b620027e0641 100644 --- a/rust/kernel/sync.rs +++ b/rust/kernel/sync.rs @@ -10,6 +10,7 @@ use pin_init; =20 mod arc; +pub mod atomic; mod condvar; pub mod lock; mod locked_by; diff --git a/rust/kernel/sync/atomic.rs b/rust/kernel/sync/atomic.rs new file mode 100644 index 000000000000..b9f2f4780073 --- /dev/null +++ b/rust/kernel/sync/atomic.rs @@ -0,0 +1,22 @@ +// SPDX-License-Identifier: GPL-2.0 + +//! Atomic primitives. +//! +//! These primitives have the same semantics as their C counterparts: and = the precise definitions of +//! semantics can be found at [`LKMM`]. Note that Linux Kernel Memory (Con= sistency) Model is the +//! only model for Rust code in kernel, and Rust's own atomics should be a= voided. +//! +//! # Data races +//! +//! [`LKMM`] atomics have different rules regarding data races: +//! +//! 
+//! - A normal write from the C side is treated as an atomic write if
+//!   CONFIG_KCSAN_ASSUME_PLAIN_WRITES_ATOMIC=y.
+//! - Mixed-size atomic accesses don't cause data races.
+//!
+//! [`LKMM`]: srctree/tools/memory-model/
+
+#[allow(dead_code, unreachable_pub)]
+mod internal;
+
+pub use internal::AtomicImpl;

diff --git a/rust/kernel/sync/atomic/internal.rs b/rust/kernel/sync/atomic/internal.rs
new file mode 100644
index 000000000000..0d442ef83747
--- /dev/null
+++ b/rust/kernel/sync/atomic/internal.rs
@@ -0,0 +1,265 @@
+// SPDX-License-Identifier: GPL-2.0
+
+//! Atomic internal implementations.
+//!
+//! Provides 1:1 mapping to the C atomic operations.
+
+use crate::bindings;
+use crate::macros::paste;
+use core::cell::UnsafeCell;
+
+mod private {
+    /// Sealed trait marker to disable customized impls on atomic implementation traits.
+    pub trait Sealed {}
+}
+
+// `i32` and `i64` are the only supported atomic implementations.
+impl private::Sealed for i32 {}
+impl private::Sealed for i64 {}
+
+/// A marker trait for types that implement atomic operations with C side primitives.
+///
+/// This trait is sealed, and only types that have a direct mapping to the C side atomics should
+/// impl this:
+///
+/// - `i32` maps to `atomic_t`.
+/// - `i64` maps to `atomic64_t`.
+pub trait AtomicImpl: Sized + Send + Copy + private::Sealed {
+    /// The type of the delta in arithmetic or logical operations.
+    ///
+    /// For example, in `atomic_add(ptr, v)`, it's the type of `v`. Usually it's the same type as
+    /// [`Self`], but it may be different for the atomic pointer type.
+    type Delta;
+}
+
+// `atomic_t` implements atomic operations on `i32`.
+impl AtomicImpl for i32 {
+    type Delta = Self;
+}
+
+// `atomic64_t` implements atomic operations on `i64`.
+impl AtomicImpl for i64 {
+    type Delta = Self;
+}
+
+/// Atomic representation.
+#[repr(transparent)]
+pub struct AtomicRepr<T: AtomicImpl>(UnsafeCell<T>);
+
+impl<T: AtomicImpl> AtomicRepr<T> {
+    /// Creates a new atomic representation `T`.
+    pub const fn new(v: T) -> Self {
+        Self(UnsafeCell::new(v))
+    }
+
+    /// Returns a pointer to the underlying `T`.
+    ///
+    /// # Guarantees
+    ///
+    /// The returned pointer is valid and properly aligned (i.e. aligned to [`align_of::<T>()`]).
+    pub const fn as_ptr(&self) -> *mut T {
+        // GUARANTEE: `self.0` is an `UnsafeCell<T>`, therefore the pointer returned by `.get()`
+        // must be valid and properly aligned.
+        self.0.get()
+    }
+}
+
+// This macro generates the function signature with given argument list and return type.
+macro_rules! declare_atomic_method {
+    (
+        $(#[doc=$doc:expr])*
+        $func:ident($($arg:ident : $arg_type:ty),*) $(-> $ret:ty)?
+    ) => {
+        paste!(
+            $(#[doc = $doc])*
+            fn [< atomic_ $func >]($($arg: $arg_type,)*) $(-> $ret)?;
+        );
+    };
+    (
+        $(#[doc=$doc:expr])*
+        $func:ident [$variant:ident $($rest:ident)*]($($arg_sig:tt)*) $(-> $ret:ty)?
+    ) => {
+        paste!(
+            declare_atomic_method!(
+                $(#[doc = $doc])*
+                [< $func _ $variant >]($($arg_sig)*) $(-> $ret)?
+            );
+        );
+
+        declare_atomic_method!(
+            $(#[doc = $doc])*
+            $func [$($rest)*]($($arg_sig)*) $(-> $ret)?
+        );
+    };
+    (
+        $(#[doc=$doc:expr])*
+        $func:ident []($($arg_sig:tt)*) $(-> $ret:ty)?
+    ) => {
+        declare_atomic_method!(
+            $(#[doc = $doc])*
+            $func($($arg_sig)*) $(-> $ret)?
+        );
+    }
+}
+
+// This macro generates the function implementation with given argument list and return type, and
+// it will replace the "call(...)" expression with "$ctype _ $func" to call the real C function.
+macro_rules!
 impl_atomic_method {
+    (
+        ($ctype:ident) $func:ident($($arg:ident: $arg_type:ty),*) $(-> $ret:ty)? {
+            $unsafe:tt { call($($c_arg:expr),*) }
+        }
+    ) => {
+        paste!(
+            #[inline(always)]
+            fn [< atomic_ $func >]($($arg: $arg_type,)*) $(-> $ret)? {
+                // TODO: Ideally we want to use the SAFETY comments written at the macro
+                // invocation (e.g. in `declare_and_impl_atomic_methods!()`); however, since
+                // SAFETY comments are just comments, and they are not passed to macros as
+                // tokens, we cannot use them here. One potential improvement is that if we
+                // support using attributes as an alternative for SAFETY comments, then we can
+                // use that for macro generating code.
+                //
+                // SAFETY: specified on macro invocation.
+                $unsafe { bindings::[< $ctype _ $func >]($($c_arg,)*) }
+            }
+        );
+    };
+    (
+        ($ctype:ident) $func:ident[$variant:ident $($rest:ident)*]($($arg_sig:tt)*) $(-> $ret:ty)? {
+            $unsafe:tt { call($($arg:tt)*) }
+        }
+    ) => {
+        paste!(
+            impl_atomic_method!(
+                ($ctype) [< $func _ $variant >]($($arg_sig)*) $( -> $ret)? {
+                    $unsafe { call($($arg)*) }
+                }
+            );
+        );
+        impl_atomic_method!(
+            ($ctype) $func [$($rest)*]($($arg_sig)*) $( -> $ret)? {
+                $unsafe { call($($arg)*) }
+            }
+        );
+    };
+    (
+        ($ctype:ident) $func:ident[]($($arg_sig:tt)*) $( -> $ret:ty)? {
+            $unsafe:tt { call($($arg:tt)*) }
+        }
+    ) => {
+        impl_atomic_method!(
+            ($ctype) $func($($arg_sig)*) $(-> $ret)? {
+                $unsafe { call($($arg)*) }
+            }
+        );
+    }
+}
+
+// Declares the $ops trait with methods and implements the trait for `i32` and `i64`.
+macro_rules! declare_and_impl_atomic_methods {
+    ($(#[$attr:meta])* $pub:vis trait $ops:ident {
+        $(
+            $(#[doc=$doc:expr])*
+            fn $func:ident [$($variant:ident),*]($($arg_sig:tt)*) $( -> $ret:ty)? {
+                $unsafe:tt { bindings::#call($($arg:tt)*) }
+            }
+        )*
+    }) => {
+        $(#[$attr])*
+        $pub trait $ops: AtomicImpl {
+            $(
+                declare_atomic_method!(
+                    $(#[doc=$doc])*
+                    $func[$($variant)*]($($arg_sig)*) $(-> $ret)?
+                );
+            )*
+        }
+
+        impl $ops for i32 {
+            $(
+                impl_atomic_method!(
+                    (atomic) $func[$($variant)*]($($arg_sig)*) $(-> $ret)? {
+                        $unsafe { call($($arg)*) }
+                    }
+                );
+            )*
+        }
+
+        impl $ops for i64 {
+            $(
+                impl_atomic_method!(
+                    (atomic64) $func[$($variant)*]($($arg_sig)*) $(-> $ret)? {
+                        $unsafe { call($($arg)*) }
+                    }
+                );
+            )*
+        }
+    }
+}
+
+declare_and_impl_atomic_methods!(
+    /// Basic atomic operations
+    pub trait AtomicBasicOps {
+        /// Atomic read (load).
+        fn read[acquire](a: &AtomicRepr<Self>) -> Self {
+            // SAFETY: `a.as_ptr()` is valid and properly aligned.
+            unsafe { bindings::#call(a.as_ptr().cast()) }
+        }
+
+        /// Atomic set (store).
+        fn set[release](a: &AtomicRepr<Self>, v: Self) {
+            // SAFETY: `a.as_ptr()` is valid and properly aligned.
+            unsafe { bindings::#call(a.as_ptr().cast(), v) }
+        }
+    }
+);
+
+declare_and_impl_atomic_methods!(
+    /// Exchange and compare-and-exchange atomic operations
+    pub trait AtomicExchangeOps {
+        /// Atomic exchange.
+        ///
+        /// Atomically updates `*a` to `v` and returns the old value.
+        fn xchg[acquire, release, relaxed](a: &AtomicRepr<Self>, v: Self) -> Self {
+            // SAFETY: `a.as_ptr()` is valid and properly aligned.
+            unsafe { bindings::#call(a.as_ptr().cast(), v) }
+        }
+
+        /// Atomic compare and exchange.
+        ///
+        /// If `*a` == `*old`, atomically updates `*a` to `new`. Otherwise, `*a` is not
+        /// modified, and `*old` is updated to the current value of `*a`.
+        ///
+        /// Returns `true` if the update of `*a` occurred, `false` otherwise.
+        fn try_cmpxchg[acquire, release, relaxed](
+            a: &AtomicRepr<Self>, old: &mut Self, new: Self
+        ) -> bool {
+            // SAFETY: `a.as_ptr()` is valid and properly aligned. `core::ptr::from_mut(old)`
+            // is valid and properly aligned.
+            unsafe { bindings::#call(a.as_ptr().cast(), core::ptr::from_mut(old), new) }
+        }
+    }
+);
+
+declare_and_impl_atomic_methods!(
+    /// Atomic arithmetic operations
+    pub trait AtomicArithmeticOps {
+        /// Atomic add (wrapping).
+        ///
+        /// Atomically updates `*a` to `(*a).wrapping_add(v)`.
+        fn add[](a: &AtomicRepr<Self>, v: Self::Delta) {
+            // SAFETY: `a.as_ptr()` is valid and properly aligned.
+            unsafe { bindings::#call(v, a.as_ptr().cast()) }
+        }
+
+        /// Atomic fetch and add (wrapping).
+        ///
+        /// Atomically updates `*a` to `(*a).wrapping_add(v)`, and returns the value of `*a`
+        /// before the update.
+        fn fetch_add[acquire, release, relaxed](a: &AtomicRepr<Self>, v: Self::Delta) -> Self {
+            // SAFETY: `a.as_ptr()` is valid and properly aligned.
+            unsafe { bindings::#call(v, a.as_ptr().cast()) }
+        }
+    }
+);
-- 
2.39.5 (Apple Git-154)
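[To see what the trait grouping buys: with `AtomicBasicOps` implemented
for both `i32` and `i64`, generic code can reach the right C
implementation purely through the trait, with no `transmute()`. A minimal
sketch (`peek` is a hypothetical helper, not part of the patch):

	fn peek<T: AtomicBasicOps>(a: &AtomicRepr<T>) -> T {
	    // Resolves to rust_helper_atomic_read() for i32 and to
	    // rust_helper_atomic64_read() for i64.
	    T::atomic_read(a)
	}
]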
From nobody Mon Oct 6 15:15:53 2025
From: Boqun Feng <boqun.feng@gmail.com>
To: linux-kernel@vger.kernel.org, rust-for-linux@vger.kernel.org,
	lkmm@lists.linux.dev, linux-arch@vger.kernel.org
Cc: Miguel Ojeda, Alex Gaynor, Boqun Feng, Gary Guo, Björn Roy Baron,
	Benno Lossin, Andreas Hindborg, Alice Ryhl, Trevor Gross,
	Danilo Krummrich, Will Deacon, Peter Zijlstra, Mark Rutland,
	Wedson Almeida Filho, Viresh Kumar, Lyude Paul, Ingo Molnar,
	Mitchell Levy, Paul E. McKenney, Greg Kroah-Hartman, Linus Torvalds,
	Thomas Gleixner, Alan Stern
Subject: [PATCH v8 3/9] rust: sync: atomic: Add ordering annotation types
Date: Fri, 18 Jul 2025 20:08:21 -0700
Message-Id: <20250719030827.61357-4-boqun.feng@gmail.com>
X-Mailer: git-send-email 2.39.5 (Apple Git-154)
In-Reply-To: <20250719030827.61357-1-boqun.feng@gmail.com>
References: <20250719030827.61357-1-boqun.feng@gmail.com>

Preparation for atomic primitives. Instead of a suffix like _acquire, a
method parameter along with the corresponding generic parameter will be
used to specify the ordering of an atomic operation. For example, atomic
load() can be defined as:

	impl<T: ...> Atomic<T> {
	    pub fn load<O: AcquireOrRelaxed>(&self, _o: O) -> T { ... }
	}

and acquire users would do:

	let r = x.load(Acquire);

relaxed users:

	let r = x.load(Relaxed);

while doing the following:

	let r = x.load(Release);

will cause a compiler error, because `Release` does not implement the
required ordering bound.
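As a usage illustration (a hypothetical pattern, not part of this patch;
store() with an ordering parameter arrives later in the series), the
annotation style keeps the ordering visible at every call site:

	// One side publishes data, then sets the flag with Release...
	flag.store(1, Release);

	// ...the other side observes the flag with Acquire, which orders
	// the flag load before all later reads of the published data.
	if flag.load(Acquire) == 1 { /* data is visible here */ }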
Compared to suffixes, it's easier to tell what ordering variants an
operation has, and it also makes it easier to unify the implementation of
all ordering variants in one method via generics. The `TYPE` associated
const is for generic functions to pick up the particular implementation
specified by an ordering annotation.

Reviewed-by: Alice Ryhl
Signed-off-by: Boqun Feng
Reviewed-by: Benno Lossin
---
 rust/kernel/sync/atomic.rs          |   2 +
 rust/kernel/sync/atomic/ordering.rs | 104 ++++++++++++++++++++++++++++
 2 files changed, 106 insertions(+)
 create mode 100644 rust/kernel/sync/atomic/ordering.rs

diff --git a/rust/kernel/sync/atomic.rs b/rust/kernel/sync/atomic.rs
index b9f2f4780073..2302e6d51fe2 100644
--- a/rust/kernel/sync/atomic.rs
+++ b/rust/kernel/sync/atomic.rs
@@ -18,5 +18,7 @@
 
 #[allow(dead_code, unreachable_pub)]
 mod internal;
+pub mod ordering;
 
 pub use internal::AtomicImpl;
+pub use ordering::{Acquire, Full, Relaxed, Release};

diff --git a/rust/kernel/sync/atomic/ordering.rs b/rust/kernel/sync/atomic/ordering.rs
new file mode 100644
index 000000000000..3f103aa8db99
--- /dev/null
+++ b/rust/kernel/sync/atomic/ordering.rs
@@ -0,0 +1,104 @@
+// SPDX-License-Identifier: GPL-2.0
+
+//! Memory orderings.
+//!
+//! The semantics of these orderings follows the [`LKMM`] definitions and rules.
+//!
+//! - [`Acquire`] provides ordering between the load part of the annotated operation and all the
+//!   following memory accesses, and if there is a store part, the store part has the [`Relaxed`]
+//!   ordering.
+//! - [`Release`] provides ordering between all the preceding memory accesses and the store part of
+//!   the annotated operation, and if there is a load part, the load part has the [`Relaxed`]
+//!   ordering.
+//! - [`Full`] means "fully-ordered", that is:
+//!   - It provides ordering between all the preceding memory accesses and the annotated operation.
+//!   - It provides ordering between the annotated operation and all the following memory accesses.
+//!   - It provides ordering between all the preceding memory accesses and all the following memory
+//!     accesses.
+//!   - All the orderings are the same strength as a full memory barrier (i.e. `smp_mb()`).
+//! - [`Relaxed`] provides no ordering except the dependency orderings. Dependency orderings are
+//!   described in "DEPENDENCY RELATIONS" in [`LKMM`]'s [`explanation`].
+//!
+//! [`LKMM`]: srctree/tools/memory-model/
+//! [`explanation`]: srctree/tools/memory-model/Documentation/explanation.txt
+
+/// The annotation type for relaxed memory ordering; for the description of relaxed memory
+/// ordering, see [module-level documentation].
+///
+/// [module-level documentation]: crate::sync::atomic::ordering
+pub struct Relaxed;
+
+/// The annotation type for acquire memory ordering; for the description of acquire memory
+/// ordering, see [module-level documentation].
+///
+/// [module-level documentation]: crate::sync::atomic::ordering
+pub struct Acquire;
+
+/// The annotation type for release memory ordering; for the description of release memory
+/// ordering, see [module-level documentation].
+///
+/// [module-level documentation]: crate::sync::atomic::ordering
+pub struct Release;
+
+/// The annotation type for fully-ordered memory ordering; for the description of fully-ordered
+/// memory ordering, see [module-level documentation].
+///
+/// [module-level documentation]: crate::sync::atomic::ordering
+pub struct Full;
+
+/// Describes the exact memory ordering.
+#[doc(hidden)]
+pub enum OrderingType {
+    /// Relaxed ordering.
+    Relaxed,
+    /// Acquire ordering.
+    Acquire,
+    /// Release ordering.
+    Release,
+    /// Fully-ordered.
+    Full,
+}
+
+mod internal {
+    /// Sealed trait, can only be implemented inside the atomic mod.
+    pub trait Sealed {}
+
+    impl Sealed for super::Relaxed {}
+    impl Sealed for super::Acquire {}
+    impl Sealed for super::Release {}
+    impl Sealed for super::Full {}
+}
+
+/// The trait bound for annotating operations that support any ordering.
+pub trait Ordering: internal::Sealed {
+    /// Describes the exact memory ordering.
+    const TYPE: OrderingType;
+}
+
+impl Ordering for Relaxed {
+    const TYPE: OrderingType = OrderingType::Relaxed;
+}
+
+impl Ordering for Acquire {
+    const TYPE: OrderingType = OrderingType::Acquire;
+}
+
+impl Ordering for Release {
+    const TYPE: OrderingType = OrderingType::Release;
+}
+
+impl Ordering for Full {
+    const TYPE: OrderingType = OrderingType::Full;
+}
+
+/// The trait bound for operations that only support acquire or relaxed ordering.
+pub trait AcquireOrRelaxed: Ordering {}
+
+impl AcquireOrRelaxed for Acquire {}
+impl AcquireOrRelaxed for Relaxed {}
+
+/// The trait bound for operations that only support release or relaxed ordering.
+pub trait ReleaseOrRelaxed: Ordering {}
+
+impl ReleaseOrRelaxed for Release {}
+impl ReleaseOrRelaxed for Relaxed {}
-- 
2.39.5 (Apple Git-154)
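[The `TYPE` const is what lets a generic body branch to the right
implementation at compile time; roughly (a sketch, not part of this
patch; the real dispatch lands with the `Atomic<T>` operations later in
the series):

	fn ordering_name<O: Ordering>(_o: O) -> &'static str {
	    // `O::TYPE` is a const, so this match is resolved at
	    // compile time.
	    match O::TYPE {
	        OrderingType::Relaxed => "relaxed",
	        OrderingType::Acquire => "acquire",
	        OrderingType::Release => "release",
	        OrderingType::Full => "full",
	    }
	}
]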
From nobody Mon Oct 6 15:15:53 2025
From: Boqun Feng <boqun.feng@gmail.com>
To: linux-kernel@vger.kernel.org, rust-for-linux@vger.kernel.org,
	lkmm@lists.linux.dev, linux-arch@vger.kernel.org
Subject: [PATCH v8 4/9] rust: sync: atomic: Add generic atomics
Date: Fri, 18 Jul 2025 20:08:22 -0700
Message-Id: <20250719030827.61357-5-boqun.feng@gmail.com>
In-Reply-To: <20250719030827.61357-1-boqun.feng@gmail.com>
References: <20250719030827.61357-1-boqun.feng@gmail.com>

To support using LKMM atomics from Rust code, add a generic
`Atomic<T>`. For now `T` needs to be `Send + Copy`, because these are
the straightforward use cases and all basic types support them.

Implement `AtomicType` for `i32` and `i64`; so far only the basic
operations load() and store() are introduced, as shown in the sketch
below.
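A usage sketch, mirroring the doctests in the diff below (assumes this
series is applied):

    use kernel::sync::atomic::{Atomic, Acquire, Relaxed};

    let x = Atomic::new(42i32);

    // A relaxed load; `load()` only accepts acquire or relaxed
    // orderings, enforced at compile time via `AcquireOrRelaxed`.
    assert_eq!(42, x.load(Relaxed));

    // A relaxed store; `store()` only accepts release or relaxed
    // orderings, enforced at compile time via `ReleaseOrRelaxed`.
    x.store(43, Relaxed);

    assert_eq!(43, x.load(Acquire));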
Reviewed-by: Alice Ryhl
Signed-off-by: Boqun Feng
Reviewed-by: Benno Lossin
---
 rust/kernel/sync/atomic.rs           | 274 +++++++++++++++++++++++++++
 rust/kernel/sync/atomic/predefine.rs |  15 ++
 2 files changed, 289 insertions(+)
 create mode 100644 rust/kernel/sync/atomic/predefine.rs

diff --git a/rust/kernel/sync/atomic.rs b/rust/kernel/sync/atomic.rs
index 2302e6d51fe2..14097ebc5f85 100644
--- a/rust/kernel/sync/atomic.rs
+++ b/rust/kernel/sync/atomic.rs
@@ -19,6 +19,280 @@
 #[allow(dead_code, unreachable_pub)]
 mod internal;
 pub mod ordering;
+mod predefine;
 
 pub use internal::AtomicImpl;
 pub use ordering::{Acquire, Full, Relaxed, Release};
+
+use crate::build_error;
+use internal::{AtomicBasicOps, AtomicRepr};
+use ordering::OrderingType;
+
+/// A memory location which can be safely modified from multiple execution contexts.
+///
+/// This has the same size, alignment and bit validity as the underlying type `T`, and it disables
+/// niche optimization for the same reason as [`UnsafeCell`].
+///
+/// The atomic operations are implemented in a way that is fully compatible with the [Linux Kernel
+/// Memory (Consistency) Model][LKMM], hence they should be modeled as the corresponding
+/// [`LKMM`][LKMM] atomic primitives. With the help of [`Atomic::from_ptr()`] and
+/// [`Atomic::as_ptr()`], this provides a way to interact with [C-side atomic operations]
+/// (including those without the `atomic` prefix, e.g. `READ_ONCE()`, `WRITE_ONCE()`,
+/// `smp_load_acquire()` and `smp_store_release()`).
+///
+/// # Invariants
+///
+/// `self.0` is a valid `T`.
+///
+/// [`UnsafeCell`]: core::cell::UnsafeCell
+/// [LKMM]: srctree/tools/memory-model/
+/// [C-side atomic operations]: srctree/Documentation/atomic_t.txt
+#[repr(transparent)]
+pub struct Atomic<T: AtomicType>(AtomicRepr<T::Repr>);
+
+// SAFETY: `Atomic<T>` is safe to share among execution contexts because all accesses are atomic.
+unsafe impl<T: AtomicType> Sync for Atomic<T> {}
+
+/// Types that support basic atomic operations.
+///
+/// # Round-trip transmutability
+///
+/// `T` is round-trip transmutable to `U` if and only if both of these properties hold:
+///
+/// - Any valid bit pattern for `T` is also a valid bit pattern for `U`.
+/// - Transmuting (e.g. using [`transmute()`]) a value of type `T` to `U` and then to `T` again
+///   yields a value that is in all aspects equivalent to the original value.
+///
+/// # Safety
+///
+/// - [`Self`] must have the same size and alignment as [`Self::Repr`].
+/// - [`Self`] must be [round-trip transmutable] to [`Self::Repr`].
+///
+/// Note that this is more relaxed than requiring bi-directional transmutability (i.e.
+/// [`transmute()`] is always sound between `U` and `T`) because of the support for atomic
+/// variables over unit-only enums, see [Examples].
+///
+/// # Limitations
+///
+/// Because C primitives are used to implement the atomic operations, and a C function requires a
+/// valid object of a type to operate on (i.e. no `MaybeUninit<_>`), at the Rust <-> C surface only
+/// types with all bits initialized can be passed. As a result, types like `(u8, u16)` (whose
+/// padding bytes are uninitialized) are currently not supported. Note that technically these types
+/// can be supported if some APIs are removed for them and the inner implementation is tweaked, but
+/// the justification for supporting such types is not strong enough at the moment. This should be
+/// resolved if there is an implementation for `MaybeUninit<i64>` as `AtomicImpl`.
+///
+/// # Examples
+///
+/// A unit-only enum that implements [`AtomicType`]:
+///
+/// ```
+/// use kernel::sync::atomic::{AtomicType, Atomic, Relaxed};
+///
+/// #[derive(Clone, Copy, PartialEq, Eq)]
+/// #[repr(i32)]
+/// enum State {
+///     Uninit = 0,
+///     Working = 1,
+///     Done = 2,
+/// }
+///
+/// // SAFETY: `State` and `i32` have the same size and alignment, and `State` is round-trip
+/// // transmutable to `i32`.
+/// unsafe impl AtomicType for State {
+///     type Repr = i32;
+/// }
+///
+/// let s = Atomic::new(State::Uninit);
+///
+/// assert_eq!(State::Uninit, s.load(Relaxed));
+/// ```
+///
+/// [`transmute()`]: core::mem::transmute
+/// [round-trip transmutable]: AtomicType#round-trip-transmutability
+/// [Examples]: AtomicType#examples
+pub unsafe trait AtomicType: Sized + Send + Copy {
+    /// The backing atomic implementation type.
+    type Repr: AtomicImpl;
+}
+
+#[inline(always)]
+const fn into_repr<T: AtomicType>(v: T) -> T::Repr {
+    // SAFETY: Per the safety requirement of `AtomicType`, `T` is round-trip transmutable to
+    // `T::Repr`, therefore the transmute operation is sound.
+    unsafe { core::mem::transmute_copy(&v) }
+}
+
+/// # Safety
+///
+/// `r` must be a valid bit pattern of `T`.
+#[inline(always)]
+const unsafe fn from_repr<T: AtomicType>(r: T::Repr) -> T {
+    // SAFETY: Per the safety requirement of the function, the transmute operation is sound.
+    unsafe { core::mem::transmute_copy(&r) }
+}
+
+impl<T: AtomicType> Atomic<T> {
+    /// Creates a new atomic `T`.
+    pub const fn new(v: T) -> Self {
+        // INVARIANT: Per the safety requirement of `AtomicType`, `into_repr(v)` is a valid `T`.
+        Self(AtomicRepr::new(into_repr(v)))
+    }
+
+    /// Creates a reference to an atomic `T` from a pointer of `T`.
+    ///
+    /// This is usually used when communicating with the C side or manipulating a C struct, see
+    /// examples below.
+    ///
+    /// # Safety
+    ///
+    /// - `ptr` is aligned to `align_of::<T>()`.
+    /// - `ptr` is valid for reads and writes for `'a`.
+    /// - For the duration of `'a`, other accesses to `*ptr` must not cause data races (defined
+    ///   by [`LKMM`]) against atomic operations on the returned reference. Note that if all other
+    ///   accesses are atomic, then this safety requirement is trivially fulfilled.
+    ///
+    /// [`LKMM`]: srctree/tools/memory-model
+    ///
+    /// # Examples
+    ///
+    /// Using [`Atomic::from_ptr()`] combined with [`Atomic::load()`] or [`Atomic::store()`] can
+    /// achieve the same functionality as `READ_ONCE()`/`smp_load_acquire()` or
+    /// `WRITE_ONCE()`/`smp_store_release()` on the C side:
+    ///
+    /// ```
+    /// # use kernel::types::Opaque;
+    /// use kernel::sync::atomic::{Atomic, Relaxed, Release};
+    ///
+    /// // Assume there is a C struct `foo`.
+    /// mod cbindings {
+    ///     #[repr(C)]
+    ///     pub(crate) struct foo {
+    ///         pub(crate) a: i32,
+    ///         pub(crate) b: i32
+    ///     }
+    /// }
+    ///
+    /// let tmp = Opaque::new(cbindings::foo { a: 1, b: 2 });
+    ///
+    /// // struct foo *foo_ptr = ..;
+    /// let foo_ptr = tmp.get();
+    ///
+    /// // SAFETY: `foo_ptr` is valid, and `.a` is in bounds.
+    /// let foo_a_ptr = unsafe { &raw mut (*foo_ptr).a };
+    ///
+    /// // a = READ_ONCE(foo_ptr->a);
+    /// //
+    /// // SAFETY: `foo_a_ptr` is valid for reads, and all other accesses on it are atomic, so no
+    /// // data race.
+    /// let a = unsafe { Atomic::from_ptr(foo_a_ptr) }.load(Relaxed);
+    /// # assert_eq!(a, 1);
+    ///
+    /// // smp_store_release(&foo_ptr->a, 2);
+    /// //
+    /// // SAFETY: `foo_a_ptr` is valid for writes, and all other accesses on it are atomic, so
+    /// // no data race.
+    /// unsafe { Atomic::from_ptr(foo_a_ptr) }.store(2, Release);
+    /// ```
+    pub unsafe fn from_ptr<'a>(ptr: *mut T) -> &'a Self
+    where
+        T: Sync,
+    {
+        // CAST: `T` and `Atomic<T>` have the same size, alignment and bit validity.
+        // SAFETY: Per the function safety requirement, `ptr` is a valid pointer and the object
+        // will live long enough. It's safe to return an `&Atomic<T>` because the function safety
+        // requirement guarantees other accesses won't cause data races.
+        unsafe { &*ptr.cast::<Self>() }
+    }
+
+    /// Returns a pointer to the underlying atomic `T`.
+    ///
+    /// Note that use of the returned pointer must not cause data races as defined by [`LKMM`].
+    ///
+    /// # Guarantees
+    ///
+    /// The returned pointer is valid and properly aligned (i.e. aligned to [`align_of::<T>()`]).
+    ///
+    /// [`LKMM`]: srctree/tools/memory-model
+    /// [`align_of::<T>()`]: core::mem::align_of
+    pub const fn as_ptr(&self) -> *mut T {
+        // GUARANTEE: Per the function guarantee of `AtomicRepr::as_ptr()`, `self.0.as_ptr()` must
+        // be a valid and properly aligned pointer for `T::Repr`, and per the safety guarantee of
+        // `AtomicType`, it's a valid and properly aligned pointer of `T`.
+        self.0.as_ptr().cast()
+    }
+
+    /// Returns a mutable reference to the underlying atomic `T`.
+    ///
+    /// This is safe because the mutable reference of the atomic `T` guarantees exclusive access.
+    pub fn get_mut(&mut self) -> &mut T {
+        // CAST: `T` and `T::Repr` have the same size and alignment per the safety requirement of
+        // `AtomicType`, and per the type invariants `self.0` is a valid `T`, therefore the casting
+        // result is a valid pointer of `T`.
+        // SAFETY: The pointer is valid per the CAST comment above, and the mutable reference
+        // guarantees exclusive access.
+        unsafe { &mut *self.0.as_ptr().cast() }
+    }
+}
+
+impl<T: AtomicType> Atomic<T>
+where
+    T::Repr: AtomicBasicOps,
+{
+    /// Loads the value from the atomic `T`.
+    ///
+    /// # Examples
+    ///
+    /// ```
+    /// use kernel::sync::atomic::{Atomic, Relaxed};
+    ///
+    /// let x = Atomic::new(42i32);
+    ///
+    /// assert_eq!(42, x.load(Relaxed));
+    ///
+    /// let x = Atomic::new(42i64);
+    ///
+    /// assert_eq!(42, x.load(Relaxed));
+    /// ```
+    #[doc(alias("atomic_read", "atomic64_read"))]
+    #[inline(always)]
+    pub fn load<Ordering: ordering::AcquireOrRelaxed>(&self, _: Ordering) -> T {
+        let v = {
+            match Ordering::TYPE {
+                OrderingType::Relaxed => T::Repr::atomic_read(&self.0),
+                OrderingType::Acquire => T::Repr::atomic_read_acquire(&self.0),
+                _ => build_error!("Wrong ordering"),
+            }
+        };
+
+        // SAFETY: `v` comes from reading `self.0`, which is a valid `T` per the type invariants.
+        unsafe { from_repr(v) }
+    }
+
+    /// Stores a value to the atomic `T`.
+    ///
+    /// # Examples
+    ///
+    /// ```
+    /// use kernel::sync::atomic::{Atomic, Relaxed};
+    ///
+    /// let x = Atomic::new(42i32);
+    ///
+    /// assert_eq!(42, x.load(Relaxed));
+    ///
+    /// x.store(43, Relaxed);
+    ///
+    /// assert_eq!(43, x.load(Relaxed));
+    /// ```
+    #[doc(alias("atomic_set", "atomic64_set"))]
+    #[inline(always)]
+    pub fn store<Ordering: ordering::ReleaseOrRelaxed>(&self, v: T, _: Ordering) {
+        let v = into_repr(v);
+
+        // INVARIANT: `v` is a valid `T`, and is stored to `self.0` by `atomic_set*()`.
+        match Ordering::TYPE {
+            OrderingType::Relaxed => T::Repr::atomic_set(&self.0, v),
+            OrderingType::Release => T::Repr::atomic_set_release(&self.0, v),
+            _ => build_error!("Wrong ordering"),
+        }
+    }
+}
diff --git a/rust/kernel/sync/atomic/predefine.rs b/rust/kernel/sync/atomic/predefine.rs
new file mode 100644
index 000000000000..33356deee952
--- /dev/null
+++ b/rust/kernel/sync/atomic/predefine.rs
@@ -0,0 +1,15 @@
+// SPDX-License-Identifier: GPL-2.0
+
+//! Pre-defined atomic types
+
+// SAFETY: `i32` has the same size and alignment as itself, and is round-trip transmutable to
+// itself.
+unsafe impl super::AtomicType for i32 {
+    type Repr = i32;
+}
+
+// SAFETY: `i64` has the same size and alignment as itself, and is round-trip transmutable to
+// itself.
+unsafe impl super::AtomicType for i64 {
+    type Repr = i64;
+}
-- 
2.39.5 (Apple Git-154)
From nobody Mon Oct 6 15:15:53 2025
From: Boqun Feng <boqun.feng@gmail.com>
To: linux-kernel@vger.kernel.org, rust-for-linux@vger.kernel.org,
	lkmm@lists.linux.dev, linux-arch@vger.kernel.org
Subject: [PATCH v8 5/9] rust: sync: atomic: Add atomic {cmp,}xchg operations
Date: Fri, 18 Jul 2025 20:08:23 -0700
Message-Id: <20250719030827.61357-6-boqun.feng@gmail.com>
In-Reply-To: <20250719030827.61357-1-boqun.feng@gmail.com>
References: <20250719030827.61357-1-boqun.feng@gmail.com>

xchg() and cmpxchg() are basic operations on atomics. Provide these
based on the C APIs. Note that cmpxchg() uses a function signature
similar to compare_exchange() in the Rust std: it returns a `Result`,
where `Ok(old)` means the operation succeeded and `Err(old)` means the
operation failed; a typical retry loop is sketched below.
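For example, a sketch of the retry loop this signature supports
(illustration only; it mirrors the doctests in the diff below and
assumes this series is applied):

    use kernel::sync::atomic::{Atomic, Full, Relaxed};

    let x = Atomic::new(0i32);

    // Atomically increment `x` with a cmpxchg() retry loop.
    let mut old = x.load(Relaxed);
    loop {
        match x.cmpxchg(old, old + 1, Full) {
            // Succeeded: `x` went from `old` to `old + 1`.
            Ok(_) => break,
            // Failed: another update raced with this one; retry with
            // the currently observed value.
            Err(cur) => old = cur,
        }
    }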
Reviewed-by: Alice Ryhl
Reviewed-by: Benno Lossin
Signed-off-by: Boqun Feng
---
 rust/kernel/sync/atomic.rs | 168 ++++++++++++++++++++++++++++++++++++-
 1 file changed, 167 insertions(+), 1 deletion(-)

diff --git a/rust/kernel/sync/atomic.rs b/rust/kernel/sync/atomic.rs
index 14097ebc5f85..793134aeaac1 100644
--- a/rust/kernel/sync/atomic.rs
+++ b/rust/kernel/sync/atomic.rs
@@ -25,7 +25,7 @@
 pub use ordering::{Acquire, Full, Relaxed, Release};
 
 use crate::build_error;
-use internal::{AtomicBasicOps, AtomicRepr};
+use internal::{AtomicBasicOps, AtomicExchangeOps, AtomicRepr};
 use ordering::OrderingType;
 
 /// A memory location which can be safely modified from multiple execution contexts.
@@ -296,3 +296,169 @@ pub fn store<Ordering: ordering::ReleaseOrRelaxed>(&self, v: T, _: Ordering) {
         }
     }
 }
+
+impl<T: AtomicType> Atomic<T>
+where
+    T::Repr: AtomicExchangeOps,
+{
+    /// Atomic exchange.
+    ///
+    /// Atomically updates `*self` to `v` and returns the old value of `*self`.
+    ///
+    /// # Examples
+    ///
+    /// ```
+    /// use kernel::sync::atomic::{Atomic, Acquire, Relaxed};
+    ///
+    /// let x = Atomic::new(42);
+    ///
+    /// assert_eq!(42, x.xchg(52, Acquire));
+    /// assert_eq!(52, x.load(Relaxed));
+    /// ```
+    #[doc(alias("atomic_xchg", "atomic64_xchg", "swap"))]
+    #[inline(always)]
+    pub fn xchg<Ordering: ordering::Ordering>(&self, v: T, _: Ordering) -> T {
+        let v = into_repr(v);
+
+        // INVARIANT: `self.0` is a valid `T` after `atomic_xchg*()` because `v` is transmutable
+        // to `T`.
+        let ret = {
+            match Ordering::TYPE {
+                OrderingType::Full => T::Repr::atomic_xchg(&self.0, v),
+                OrderingType::Acquire => T::Repr::atomic_xchg_acquire(&self.0, v),
+                OrderingType::Release => T::Repr::atomic_xchg_release(&self.0, v),
+                OrderingType::Relaxed => T::Repr::atomic_xchg_relaxed(&self.0, v),
+            }
+        };
+
+        // SAFETY: `ret` comes from reading `*self`, which is a valid `T` per the type invariants.
+        unsafe { from_repr(ret) }
+    }
+
+    /// Atomic compare and exchange.
+    ///
+    /// If `*self` == `old`, atomically updates `*self` to `new`. Otherwise, `*self` is not
+    /// modified.
+    ///
+    /// Compare: The comparison is done via the byte-level comparison between `*self` and `old`.
+    ///
+    /// Ordering: When it succeeds, it provides the ordering indicated by the `Ordering` type
+    /// parameter; a failed cmpxchg doesn't provide any ordering, and the load part of a failed
+    /// cmpxchg is a [`Relaxed`] load.
+    ///
+    /// Returns `Ok(value)` if the cmpxchg succeeds, in which case `value` is guaranteed to be
+    /// equal to `old`; otherwise returns `Err(value)`, where `value` is the current value of
+    /// `*self`.
+    ///
+    /// # Examples
+    ///
+    /// ```
+    /// use kernel::sync::atomic::{Atomic, Full, Relaxed};
+    ///
+    /// let x = Atomic::new(42);
+    ///
+    /// // Checks whether cmpxchg succeeded.
+    /// let success = x.cmpxchg(52, 64, Relaxed).is_ok();
+    /// # assert!(!success);
+    ///
+    /// // Checks whether cmpxchg failed.
+    /// let failure = x.cmpxchg(52, 64, Relaxed).is_err();
+    /// # assert!(failure);
+    ///
+    /// // Uses the old value if failed, probably to re-try cmpxchg.
+    /// match x.cmpxchg(52, 64, Relaxed) {
+    ///     Ok(_) => { },
+    ///     Err(old) => {
+    ///         // do something with `old`.
+    ///         # assert_eq!(old, 42);
+    ///     }
+    /// }
+    ///
+    /// // Uses the latest value regardless, same as atomic_cmpxchg() in C.
+    /// let latest = x.cmpxchg(42, 64, Full).unwrap_or_else(|old| old);
+    /// # assert_eq!(42, latest);
+    /// assert_eq!(64, x.load(Relaxed));
+    /// ```
+    ///
+    /// [`Relaxed`]: ordering::Relaxed
+    #[doc(alias(
+        "atomic_cmpxchg",
+        "atomic64_cmpxchg",
+        "atomic_try_cmpxchg",
+        "atomic64_try_cmpxchg",
+        "compare_exchange"
+    ))]
+    #[inline(always)]
+    pub fn cmpxchg<Ordering: ordering::Ordering>(
+        &self,
+        mut old: T,
+        new: T,
+        o: Ordering,
+    ) -> Result<T, T> {
+        // Note on code generation:
+        //
+        // try_cmpxchg() is used to implement cmpxchg(), and if the helper functions are inlined,
+        // the compiler is able to figure out that the branch is not needed if the users don't
+        // care about whether the operation succeeds or not. One exception is on x86: due to
+        // commit 44fe84459faf ("locking/atomic: Fix atomic_try_cmpxchg() semantics"), the
+        // atomic_try_cmpxchg() on x86 has a branch even if the caller doesn't care about the
+        // success of cmpxchg and only wants to use the old value. For example, for code like:
+        //
+        //     let latest = x.cmpxchg(42, 64, Full).unwrap_or_else(|old| old);
+        //
+        // it will still generate code:
+        //
+        //     movl     $0x40, %ecx
+        //     movl     $0x34, %eax
+        //     lock
+        //     cmpxchgl %ecx, 0x4(%rsp)
+        //     jne      1f
+        //     2:
+        //     ...
+        //     1: movl %eax, %ecx
+        //        jmp  2b
+        //
+        // This might be "fixed" by introducing a try_cmpxchg_exclusive() that knows the "*old"
+        // location in the C function is always safe to write to.
+        if self.try_cmpxchg(&mut old, new, o) {
+            Ok(old)
+        } else {
+            Err(old)
+        }
+    }
+
+    /// Atomic compare and exchange, returning whether the operation succeeded.
+    ///
+    /// If `*self` == `old`, atomically updates `*self` to `new`. Otherwise, `*self` is not
+    /// modified and `*old` is updated to the current value of `*self`.
+    ///
+    /// The "Compare" and "Ordering" parts are the same as for [`Atomic::cmpxchg()`].
+    ///
+    /// Returns `true` if the cmpxchg succeeds, otherwise `false`.
+    #[inline(always)]
+    fn try_cmpxchg<Ordering: ordering::Ordering>(&self, old: &mut T, new: T, _: Ordering) -> bool {
+        let mut tmp = into_repr(*old);
+        let new = into_repr(new);
+
+        // INVARIANT: `self.0` is a valid `T` after `atomic_try_cmpxchg*()` because `new` is
+        // transmutable to `T`.
+        let ret = {
+            match Ordering::TYPE {
+                OrderingType::Full => T::Repr::atomic_try_cmpxchg(&self.0, &mut tmp, new),
+                OrderingType::Acquire => {
+                    T::Repr::atomic_try_cmpxchg_acquire(&self.0, &mut tmp, new)
+                }
+                OrderingType::Release => {
+                    T::Repr::atomic_try_cmpxchg_release(&self.0, &mut tmp, new)
+                }
+                OrderingType::Relaxed => {
+                    T::Repr::atomic_try_cmpxchg_relaxed(&self.0, &mut tmp, new)
+                }
+            }
+        };
+
+        // SAFETY: `tmp` comes from reading `*self`, which is a valid `T` per the type invariants.
+        *old = unsafe { from_repr(tmp) };
+
+        ret
+    }
+}
-- 
2.39.5 (Apple Git-154)
From nobody Mon Oct 6 15:15:53 2025
From: Boqun Feng <boqun.feng@gmail.com>
To: linux-kernel@vger.kernel.org, rust-for-linux@vger.kernel.org,
	lkmm@lists.linux.dev, linux-arch@vger.kernel.org
Subject: [PATCH v8 6/9] rust: sync: atomic: Add the framework of arithmetic
 operations
Date: Fri, 18 Jul 2025 20:08:24 -0700
Message-Id: <20250719030827.61357-7-boqun.feng@gmail.com>
In-Reply-To: <20250719030827.61357-1-boqun.feng@gmail.com>
References: <20250719030827.61357-1-boqun.feng@gmail.com>

One important set of atomic operations is the arithmetic operations,
i.e. add(), sub(), fetch_add(), add_return(), etc. However, it may not
make sense for all types that implement `AtomicType` to have arithmetic
operations: for example, a `Foo(u32)` may not have a reasonable add()
or sub(). Moreover, subword types (`u8` and `u16`) currently don't have
atomic arithmetic operations even on the C side, and might not get them
in Rust in the future either (because they are usually suboptimal on a
few architectures). Therefore the plan is to add a few subtraits of
`AtomicType` describing which types have and can do atomic arithmetic
operations.

One trait `AtomicAdd` is added, and only add() and fetch_add() are
added for now; the rest will be added in the future. A sketch of how a
type opts in follows below.
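A sketch of how a type opts in (illustration only, not part of this
patch; `Counter` is an invented example type, using the paths added by
this series):

    use kernel::sync::atomic::{AtomicAdd, AtomicType};

    #[derive(Clone, Copy)]
    #[repr(transparent)]
    struct Counter(i32);

    // SAFETY: `Counter` and `i32` have the same size and alignment,
    // and `Counter` is round-trip transmutable to `i32`.
    unsafe impl AtomicType for Counter {
        type Repr = i32;
    }

    // SAFETY: Wrapping-adding any `i32` delta to a valid `Counter` bit
    // pattern yields another valid `Counter` bit pattern.
    unsafe impl AtomicAdd<i32> for Counter {
        fn rhs_into_delta(rhs: i32) -> i32 {
            rhs
        }
    }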
Reviewed-by: Alice Ryhl
Signed-off-by: Boqun Feng
Reviewed-by: Benno Lossin
---
 rust/kernel/sync/atomic.rs           | 93 +++++++++++++++++++++++++++-
 rust/kernel/sync/atomic/predefine.rs | 14 +++++
 2 files changed, 105 insertions(+), 2 deletions(-)

diff --git a/rust/kernel/sync/atomic.rs b/rust/kernel/sync/atomic.rs
index 793134aeaac1..e3a30b6aaee4 100644
--- a/rust/kernel/sync/atomic.rs
+++ b/rust/kernel/sync/atomic.rs
@@ -16,7 +16,6 @@
 //!
 //! [`LKMM`]: srctree/tools/memory-model/
 
-#[allow(dead_code, unreachable_pub)]
 mod internal;
 pub mod ordering;
 mod predefine;
@@ -25,7 +24,7 @@
 pub use ordering::{Acquire, Full, Relaxed, Release};
 
 use crate::build_error;
-use internal::{AtomicBasicOps, AtomicExchangeOps, AtomicRepr};
+use internal::{AtomicArithmeticOps, AtomicBasicOps, AtomicExchangeOps, AtomicRepr};
 use ordering::OrderingType;
 
 /// A memory location which can be safely modified from multiple execution contexts.
@@ -115,6 +114,18 @@ pub unsafe trait AtomicType: Sized + Send + Copy {
     type Repr: AtomicImpl;
 }
 
+/// Types that support atomic add operations.
+///
+/// # Safety
+///
+/// `wrapping_add` any value of type `Self::Repr::Delta` obtained by [`Self::rhs_into_delta()`] to
+/// any value of type `Self::Repr` obtained through transmuting a value of type `Self` to
+/// `Self::Repr` must yield a value with a bit pattern also valid for `Self`.
+pub unsafe trait AtomicAdd<Rhs = Self>: AtomicType {
+    /// Converts `Rhs` into the `Delta` type of the atomic implementation.
+    fn rhs_into_delta(rhs: Rhs) -> <Self::Repr as AtomicImpl>::Delta;
+}
+
 #[inline(always)]
 const fn into_repr<T: AtomicType>(v: T) -> T::Repr {
     // SAFETY: Per the safety requirement of `AtomicType`, `T` is round-trip transmutable to
@@ -462,3 +473,81 @@ fn try_cmpxchg<Ordering: ordering::Ordering>(&self, old: &mut T, new: T, _: Ordering) -> bool {
         ret
     }
 }
+
+impl<T: AtomicType> Atomic<T>
+where
+    T::Repr: AtomicArithmeticOps,
+{
+    /// Atomic add.
+    ///
+    /// Atomically updates `*self` to `(*self).wrapping_add(v)`.
+    ///
+    /// # Examples
+    ///
+    /// ```
+    /// use kernel::sync::atomic::{Atomic, Relaxed};
+    ///
+    /// let x = Atomic::new(42);
+    ///
+    /// assert_eq!(42, x.load(Relaxed));
+    ///
+    /// x.add(12, Relaxed);
+    ///
+    /// assert_eq!(54, x.load(Relaxed));
+    /// ```
+    #[inline(always)]
+    pub fn add<Rhs>(&self, v: Rhs, _: ordering::Relaxed)
+    where
+        T: AtomicAdd<Rhs>,
+    {
+        let v = T::rhs_into_delta(v);
+
+        // INVARIANT: `self.0` is a valid `T` after `atomic_add()` due to the safety requirement
+        // of `AtomicAdd`.
+        T::Repr::atomic_add(&self.0, v);
+    }
+
+    /// Atomic fetch and add.
+    ///
+    /// Atomically updates `*self` to `(*self).wrapping_add(v)`, and returns the value of `*self`
+    /// before the update.
+    ///
+    /// # Examples
+    ///
+    /// ```
+    /// use kernel::sync::atomic::{Atomic, Acquire, Full, Relaxed};
+    ///
+    /// let x = Atomic::new(42);
+    ///
+    /// assert_eq!(42, x.load(Relaxed));
+    ///
+    /// assert_eq!(54, { x.fetch_add(12, Acquire); x.load(Relaxed) });
+    ///
+    /// let x = Atomic::new(42);
+    ///
+    /// assert_eq!(42, x.load(Relaxed));
+    ///
+    /// assert_eq!(54, { x.fetch_add(12, Full); x.load(Relaxed) });
+    /// ```
+    #[inline(always)]
+    pub fn fetch_add<Rhs, Ordering: ordering::Ordering>(&self, v: Rhs, _: Ordering) -> T
+    where
+        T: AtomicAdd<Rhs>,
+    {
+        let v = T::rhs_into_delta(v);
+
+        // INVARIANT: `self.0` is a valid `T` after `atomic_fetch_add*()` due to the safety
+        // requirement of `AtomicAdd`.
+        let ret = {
+            match Ordering::TYPE {
+                OrderingType::Full => T::Repr::atomic_fetch_add(&self.0, v),
+                OrderingType::Acquire => T::Repr::atomic_fetch_add_acquire(&self.0, v),
+                OrderingType::Release => T::Repr::atomic_fetch_add_release(&self.0, v),
+                OrderingType::Relaxed => T::Repr::atomic_fetch_add_relaxed(&self.0, v),
+            }
+        };
+
+        // SAFETY: `ret` comes from reading `self.0`, which is a valid `T` per the type invariants.
+        unsafe { from_repr(ret) }
+    }
+}
diff --git a/rust/kernel/sync/atomic/predefine.rs b/rust/kernel/sync/atomic/predefine.rs
index 33356deee952..a6e5883be7cb 100644
--- a/rust/kernel/sync/atomic/predefine.rs
+++ b/rust/kernel/sync/atomic/predefine.rs
@@ -8,8 +8,22 @@ unsafe impl super::AtomicType for i32 {
     type Repr = i32;
 }
 
+// SAFETY: The wrapping add result of two `i32`s is a valid `i32`.
+unsafe impl super::AtomicAdd for i32 {
+    fn rhs_into_delta(rhs: i32) -> i32 {
+        rhs
+    }
+}
+
 // SAFETY: `i64` has the same size and alignment as itself, and is round-trip transmutable to
 // itself.
 unsafe impl super::AtomicType for i64 {
     type Repr = i64;
 }
+
+// SAFETY: The wrapping add result of two `i64`s is a valid `i64`.
+unsafe impl super::AtomicAdd for i64 {
+    fn rhs_into_delta(rhs: i64) -> i64 {
+        rhs
+    }
+}
-- 
2.39.5 (Apple Git-154)
From nobody Mon Oct 6 15:15:53 2025
From: Boqun Feng <boqun.feng@gmail.com>
To: linux-kernel@vger.kernel.org, rust-for-linux@vger.kernel.org,
	lkmm@lists.linux.dev, linux-arch@vger.kernel.org
Subject: [PATCH v8 7/9] rust: sync: atomic: Add Atomic<u{32,64}>
Date: Fri, 18 Jul 2025 20:08:25 -0700
Message-Id: <20250719030827.61357-8-boqun.feng@gmail.com>
In-Reply-To: <20250719030827.61357-1-boqun.feng@gmail.com>
References: <20250719030827.61357-1-boqun.feng@gmail.com>

Add generic atomic support for the basic unsigned types that have an
`AtomicImpl` with the same size and alignment. Unit tests are added,
including ones for Atomic<u32> and Atomic<u64>. A short note on why the
signed representation is sound for unsigned values follows below.
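Why routing unsigned atomics through a signed representation is sound
for add: wrapping arithmetic is bit-pattern identical for signed and
unsigned integers of the same width. A plain-Rust sketch (illustration
only):

    fn main() {
        let a: u32 = 0xffff_fff0;
        let d: u32 = 0x20;

        // `as` casts between `u32` and `i32` just reinterpret bits,
        // which is what `rhs_into_delta()` relies on.
        let repr = (a as i32).wrapping_add(d as i32);

        // Wrapping add on the `i32` representation produces the same
        // bits as wrapping add on the `u32` value itself.
        assert_eq!(repr as u32, a.wrapping_add(d));
    }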
Reviewed-by: Alice Ryhl
Reviewed-by: Andreas Hindborg
Reviewed-by: Benno Lossin
Signed-off-by: Boqun Feng
---
 rust/kernel/sync/atomic/predefine.rs | 95 ++++++++++++++++++++++++++++
 1 file changed, 95 insertions(+)

diff --git a/rust/kernel/sync/atomic/predefine.rs b/rust/kernel/sync/atomic/predefine.rs
index a6e5883be7cb..d0875812f6ad 100644
--- a/rust/kernel/sync/atomic/predefine.rs
+++ b/rust/kernel/sync/atomic/predefine.rs
@@ -27,3 +27,98 @@ fn rhs_into_delta(rhs: i64) -> i64 {
         rhs
     }
 }
+
+// SAFETY: `u32` and `i32` have the same size and alignment, and `u32` is round-trip transmutable
+// to `i32`.
+unsafe impl super::AtomicType for u32 {
+    type Repr = i32;
+}
+
+// SAFETY: The wrapping add result of two `i32`s is a valid `u32`.
+unsafe impl super::AtomicAdd for u32 {
+    fn rhs_into_delta(rhs: u32) -> i32 {
+        rhs as i32
+    }
+}
+
+// SAFETY: `u64` and `i64` have the same size and alignment, and `u64` is round-trip transmutable
+// to `i64`.
+unsafe impl super::AtomicType for u64 {
+    type Repr = i64;
+}
+
+// SAFETY: The wrapping add result of two `i64`s is a valid `u64`.
+unsafe impl super::AtomicAdd for u64 {
+    fn rhs_into_delta(rhs: u64) -> i64 {
+        rhs as i64
+    }
+}
+
+use crate::macros::kunit_tests;
+
+#[kunit_tests(rust_atomics)]
+mod tests {
+    use super::super::*;
+
+    // Call $fn($val) with each $type of $val.
+    macro_rules! for_each_type {
+        ($val:literal in [$($type:ty),*] $fn:expr) => {
+            $({
+                let v: $type = $val;
+
+                $fn(v);
+            })*
+        }
+    }
+
+    #[test]
+    fn atomic_basic_tests() {
+        for_each_type!(42 in [i32, i64, u32, u64] |v| {
+            let x = Atomic::new(v);
+
+            assert_eq!(v, x.load(Relaxed));
+        });
+    }
+
+    #[test]
+    fn atomic_xchg_tests() {
+        for_each_type!(42 in [i32, i64, u32, u64] |v| {
+            let x = Atomic::new(v);
+
+            let old = v;
+            let new = v + 1;
+
+            assert_eq!(old, x.xchg(new, Full));
+            assert_eq!(new, x.load(Relaxed));
+        });
+    }
+
+    #[test]
+    fn atomic_cmpxchg_tests() {
+        for_each_type!(42 in [i32, i64, u32, u64] |v| {
+            let x = Atomic::new(v);
+
+            let old = v;
+            let new = v + 1;
+
+            assert_eq!(Err(old), x.cmpxchg(new, new, Full));
+            assert_eq!(old, x.load(Relaxed));
+            assert_eq!(Ok(old), x.cmpxchg(old, new, Relaxed));
+            assert_eq!(new, x.load(Relaxed));
+        });
+    }
+
+    #[test]
+    fn atomic_arithmetic_tests() {
+        for_each_type!(42 in [i32, i64, u32, u64] |v| {
+            let x = Atomic::new(v);
+
+            assert_eq!(v, x.fetch_add(12, Full));
+            assert_eq!(v + 12, x.load(Relaxed));
+
+            x.add(13, Relaxed);
+
+            assert_eq!(v + 25, x.load(Relaxed));
+        });
+    }
+}
-- 
2.39.5 (Apple Git-154)
5KL2yUwcy+ccCedfv0fNupAmioZJpYxiXeJAhmX+q4pPabnX6UkVQ/0B4X7nMHW47LDV kFMvhWdYnPVBDa9+uBTFyq4nLLa4PfgDuuHhujWVakGvSmXTGRQGJdBjZlQbAYgwuwDG Ot1Q== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20230601; t=1752894524; x=1753499324; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:feedback-id:x-gm-message-state :from:to:cc:subject:date:message-id:reply-to; bh=hRV31OgwoLIyVdat05xKwhcifuZryi7XwOha7Q+Eics=; b=JscYjmwIgySLOCU6gXqq7/W0NtseQQqkjDG4ppu9dMfD3cnASRT0c0kXqVo2taQO8b 1CRMcfEfqQ5uSrOXaQS3qVYcgxarU0ucv6ApQJdq/LyUcOqTFB/jOO0AqzAoMW9/0lZq g47P382borxnNLBgWfGBD0Nag7EZbwF5I4LoC66e3O3cwHIzNhfZbSXxEwbisQcqVv5G +Qg8PUL6fclV/II8yZdT6DMBVJiA9UkCDWtviCrNM292n7TEBPlkIbGdA9fbwZRlLet5 xb8GoqLYSZ3GxfulOwBo4PXuRY8bHneEvRs4DbIlXjPGdie1b9hhwGC+Piqux1GWj4FW QxxA== X-Forwarded-Encrypted: i=1; AJvYcCVZVQOMlyBtgan/tVszI7iuP6nCEK/kNWScAbRhQGPMQTAZgvhwxs1JU8tkMpDFG5uMOEwwgksC1YsbEgW6MSk=@vger.kernel.org, AJvYcCXGszCg7Vx7CkaaoBYqSb9NoXalH3lEX8VHkm9p/eZGRzLCDo6w56OP974LyvBDmUWnltGTLgMSp98X@vger.kernel.org X-Gm-Message-State: AOJu0YyBJ7OtWPAHsW6Ez+ubixkYXWdzsMfOolItdOtlJz1w6KuklEAI hbnBHgKGrvbzabmCN/qAQ9naEDFHQwEYu1GKhseoUsKA3551bPidcnG7 X-Gm-Gg: ASbGncsp7XetkDpGCvf+rjFxtFtzGXlB8y5EMtSA+kPBbc6TYhq8rZ8uRVEp2YFWMyo k7EAUOgQ0sWRJ4W+AO5XtUy8hjIKKY62i723mOJMSyuXUh4jnTGUdlfhppF6r0uzsDs9Ryee1ZI MFSCxENSNhZ+z47SXoc7jKGEL12tISdEpV1X97stNwmaWd9nngdWYHc5r87OKYdb9pv9jCl6VFi 8c/kj+8s20CYZUmqLj46YEKit1fzDV3kNLQJvMG1uVaHRSIOevqet8uLETu43KShY6boqkN0IcM 9gh+d52zV3wJoPTzm9wxp9CXrc5FPdlxNt9dCkwDZ1ET1IycRJIlLN/c0BtbH2kdEICQngLbG+d Phw+ToDfHaMSt3H3FDPKl1qubznTkQpDleq/sC1ZnI/xwrTO503d+kxG3c6do6IUGKbph22wkbM DcoOiqttvjMV9IlVPL8J5SVHc= X-Google-Smtp-Source: AGHT+IHyjjzUNmWjmhlbi3o/iAlT48Spt+HaBYBPfrzRKfoQhkBndr62EEj/V4AfCEqT9BforNYfrQ== X-Received: by 2002:a05:620a:2f4:b0:7d5:e374:e2bb with SMTP id af79cd13be357-7e35592ffa7mr694036385a.27.1752894524039; Fri, 18 Jul 2025 20:08:44 -0700 (PDT) Received: from fauth-a1-smtp.messagingengine.com (fauth-a1-smtp.messagingengine.com. 
From nobody Mon Oct 6 15:15:53 2025
From: Boqun Feng
To: linux-kernel@vger.kernel.org, rust-for-linux@vger.kernel.org, lkmm@lists.linux.dev, linux-arch@vger.kernel.org
Cc: "Miguel Ojeda", "Alex Gaynor", "Boqun Feng", "Gary Guo", "Björn Roy Baron", "Benno Lossin", "Andreas Hindborg", "Alice Ryhl", "Trevor Gross", "Danilo Krummrich", "Will Deacon", "Peter Zijlstra", "Mark Rutland", "Wedson Almeida Filho", "Viresh Kumar", "Lyude Paul", "Ingo Molnar", "Mitchell Levy", "Paul E. McKenney", "Greg Kroah-Hartman", "Linus Torvalds", "Thomas Gleixner", "Alan Stern"
Subject: [PATCH v8 8/9] rust: sync: atomic: Add Atomic<{usize,isize}>
Date: Fri, 18 Jul 2025 20:08:26 -0700
Message-Id: <20250719030827.61357-9-boqun.feng@gmail.com>
In-Reply-To: <20250719030827.61357-1-boqun.feng@gmail.com>
References: <20250719030827.61357-1-boqun.feng@gmail.com>
MIME-Version: 1.0
Content-Type: text/plain; charset="utf-8"

Add generic atomic support for `usize` and `isize`. Note that instead
of mapping directly to `atomic_long_t`, the representation type
(`AtomicType::Repr`) is selected based on CONFIG_64BIT. This reduces
the need to create `atomic_long_*` helpers, which can save kernel
binary size when inline helpers are not available.

To do so, an internal type `isize_atomic_repr` is defined: it is `i32`
on 32-bit kernels and `i64` on 64-bit kernels.
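(Editorial illustration, not part of the patch: assuming the `Atomic<T>`
API and the `Full`/`Relaxed` orderings introduced earlier in this series,
the new impls let `usize` and `isize` be used like any other atomic type.
A minimal sketch; the import path is assumed:)

    // Sketch only: `Atomic`, `Full` and `Relaxed` are the names used by
    // the kunit tests in this series; the exact paths are assumptions.
    use kernel::sync::atomic::{ordering::{Full, Relaxed}, Atomic};

    fn usize_counter_sketch() {
        let cnt = Atomic::new(0usize);

        // Because `<usize as AtomicType>::Repr` is `isize_atomic_repr`,
        // this lowers to atomic64_* operations on 64-bit kernels and to
        // 32-bit atomic_* operations on 32-bit kernels, with no
        // atomic_long_* helpers required.
        assert_eq!(0, cnt.fetch_add(1, Full));
        assert_eq!(1, cnt.load(Relaxed));
    }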
Reviewed-by: Alice Ryhl
Reviewed-by: Andreas Hindborg
Reviewed-by: Benno Lossin
Signed-off-by: Boqun Feng
---
 rust/kernel/sync/atomic/predefine.rs | 54 +++++++++++++++++++++++++---
 1 file changed, 50 insertions(+), 4 deletions(-)

diff --git a/rust/kernel/sync/atomic/predefine.rs b/rust/kernel/sync/atomic/predefine.rs
index d0875812f6ad..feba77372bb8 100644
--- a/rust/kernel/sync/atomic/predefine.rs
+++ b/rust/kernel/sync/atomic/predefine.rs
@@ -2,6 +2,9 @@

 //! Pre-defined atomic types

+use core::mem::{align_of, size_of};
+use crate::static_assert;
+
 // SAFETY: `i32` has the same size and alignment as itself, and is round-trip transmutable to
 // itself.
 unsafe impl super::AtomicType for i32 {
@@ -28,6 +31,36 @@ fn rhs_into_delta(rhs: i64) -> i64 {
     }
 }

+// Defines an internal type that always maps to the integer type which has the same size and
+// alignment as `isize` and `usize`, and `isize` and `usize` are always bi-directionally
+// transmutable to `isize_atomic_repr`, which also always implements `AtomicImpl`.
+#[allow(non_camel_case_types)]
+#[cfg(not(CONFIG_64BIT))]
+type isize_atomic_repr = i32;
+#[allow(non_camel_case_types)]
+#[cfg(CONFIG_64BIT)]
+type isize_atomic_repr = i64;
+
+// Ensure size and alignment requirements are checked.
+static_assert!(size_of::<isize>() == size_of::<isize_atomic_repr>());
+static_assert!(align_of::<isize>() == align_of::<isize_atomic_repr>());
+static_assert!(size_of::<usize>() == size_of::<isize_atomic_repr>());
+static_assert!(align_of::<usize>() == align_of::<isize_atomic_repr>());
+
+// SAFETY: `isize` has the same size and alignment as `isize_atomic_repr`, and is round-trip
+// transmutable to `isize_atomic_repr`.
+unsafe impl super::AtomicType for isize {
+    type Repr = isize_atomic_repr;
+}
+
+// SAFETY: The wrapping add result of two `isize_atomic_repr`s is a valid `isize`.
+unsafe impl super::AtomicAdd for isize {
+    fn rhs_into_delta(rhs: isize) -> isize_atomic_repr {
+        rhs as isize_atomic_repr
+    }
+}
+
 // SAFETY: `u32` and `i32` have the same size and alignment, and `u32` is round-trip transmutable to
 // `i32`.
 unsafe impl super::AtomicType for u32 {
@@ -54,6 +87,19 @@ fn rhs_into_delta(rhs: u64) -> i64 {
     }
 }

+// SAFETY: `usize` has the same size and alignment as `isize_atomic_repr`, and is round-trip
+// transmutable to `isize_atomic_repr`.
+unsafe impl super::AtomicType for usize {
+    type Repr = isize_atomic_repr;
+}
+
+// SAFETY: The wrapping add result of two `isize_atomic_repr`s is a valid `usize`.
+unsafe impl super::AtomicAdd for usize {
+    fn rhs_into_delta(rhs: usize) -> isize_atomic_repr {
+        rhs as isize_atomic_repr
+    }
+}
+
 use crate::macros::kunit_tests;

 #[kunit_tests(rust_atomics)]
@@ -73,7 +119,7 @@ macro_rules! for_each_type {

     #[test]
     fn atomic_basic_tests() {
-        for_each_type!(42 in [i32, i64, u32, u64] |v| {
+        for_each_type!(42 in [i32, i64, u32, u64, isize, usize] |v| {
             let x = Atomic::new(v);

             assert_eq!(v, x.load(Relaxed));
@@ -82,7 +128,7 @@ fn atomic_basic_tests() {

     #[test]
     fn atomic_xchg_tests() {
-        for_each_type!(42 in [i32, i64, u32, u64] |v| {
+        for_each_type!(42 in [i32, i64, u32, u64, isize, usize] |v| {
             let x = Atomic::new(v);

             let old = v;
@@ -95,7 +141,7 @@ fn atomic_xchg_tests() {

     #[test]
     fn atomic_cmpxchg_tests() {
-        for_each_type!(42 in [i32, i64, u32, u64] |v| {
+        for_each_type!(42 in [i32, i64, u32, u64, isize, usize] |v| {
             let x = Atomic::new(v);

             let old = v;
@@ -110,7 +156,7 @@ fn atomic_cmpxchg_tests() {

     #[test]
     fn atomic_arithmetic_tests() {
-        for_each_type!(42 in [i32, i64, u32, u64] |v| {
+        for_each_type!(42 in [i32, i64, u32, u64, isize, usize] |v| {
             let x = Atomic::new(v);

             assert_eq!(v, x.fetch_add(12, Full));
-- 
2.39.5 (Apple Git-154)
From nobody Mon Oct 6 15:15:53 2025
From: Boqun Feng
To: linux-kernel@vger.kernel.org, rust-for-linux@vger.kernel.org, lkmm@lists.linux.dev, linux-arch@vger.kernel.org
Cc: "Miguel Ojeda", "Alex Gaynor", "Boqun Feng", "Gary Guo", "Björn Roy Baron", "Benno Lossin", "Andreas Hindborg", "Alice Ryhl", "Trevor Gross", "Danilo Krummrich", "Will Deacon", "Peter Zijlstra", "Mark Rutland", "Wedson Almeida Filho", "Viresh Kumar", "Lyude Paul", "Ingo Molnar", "Mitchell Levy", "Paul E. McKenney", "Greg Kroah-Hartman", "Linus Torvalds", "Thomas Gleixner", "Alan Stern"
Subject: [PATCH v8 9/9] rust: sync: Add memory barriers
Date: Fri, 18 Jul 2025 20:08:27 -0700
Message-Id: <20250719030827.61357-10-boqun.feng@gmail.com>
In-Reply-To: <20250719030827.61357-1-boqun.feng@gmail.com>
References: <20250719030827.61357-1-boqun.feng@gmail.com>
MIME-Version: 1.0
Content-Type: text/plain; charset="utf-8"

Memory barriers are building blocks for concurrent code, hence provide
a minimal set of them.

The compiler barrier, barrier(), is implemented in inline asm instead
of core::sync::atomic::compiler_fence(), because the memory models
differ: the kernel's atomics are implemented in inline asm, so the
compiler barrier should be implemented in inline asm as well. Also,
barrier() is currently only public to the kernel crate, until there is
a reasonable driver use case.
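(Editorial illustration, not part of the patch: the classic
message-passing pattern that the smp_wmb()/smp_rmb() pair added below
supports. `data` and `flag` are hypothetical shared Atomic<i32> values
from earlier in this series, both initially 0:)

    // Writer side:
    data.store(42, Relaxed);
    smp_wmb(); // order the data store before the flag store
    flag.store(1, Relaxed);

    // Reader side:
    if flag.load(Relaxed) == 1 {
        smp_rmb(); // order the flag load before the data load
        // Guaranteed by the barrier pairing under the LKMM.
        assert_eq!(42, data.load(Relaxed));
    }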
Reviewed-by: Alice Ryhl
Signed-off-by: Boqun Feng
---
 rust/helpers/barrier.c      | 18 +++++++++++
 rust/helpers/helpers.c      |  1 +
 rust/kernel/sync.rs         |  1 +
 rust/kernel/sync/barrier.rs | 61 +++++++++++++++++++++++++++++++++++++
 4 files changed, 81 insertions(+)
 create mode 100644 rust/helpers/barrier.c
 create mode 100644 rust/kernel/sync/barrier.rs

diff --git a/rust/helpers/barrier.c b/rust/helpers/barrier.c
new file mode 100644
index 000000000000..cdf28ce8e511
--- /dev/null
+++ b/rust/helpers/barrier.c
@@ -0,0 +1,18 @@
+// SPDX-License-Identifier: GPL-2.0
+
+#include <asm/barrier.h>
+
+void rust_helper_smp_mb(void)
+{
+	smp_mb();
+}
+
+void rust_helper_smp_wmb(void)
+{
+	smp_wmb();
+}
+
+void rust_helper_smp_rmb(void)
+{
+	smp_rmb();
+}
diff --git a/rust/helpers/helpers.c b/rust/helpers/helpers.c
index 83e89f6a68fb..8ddfc8f84e87 100644
--- a/rust/helpers/helpers.c
+++ b/rust/helpers/helpers.c
@@ -9,6 +9,7 @@

 #include "atomic.c"
 #include "auxiliary.c"
+#include "barrier.c"
 #include "blk.c"
 #include "bug.c"
 #include "build_assert.c"
diff --git a/rust/kernel/sync.rs b/rust/kernel/sync.rs
index b620027e0641..c7c0e552bafe 100644
--- a/rust/kernel/sync.rs
+++ b/rust/kernel/sync.rs
@@ -11,6 +11,7 @@

 mod arc;
 pub mod atomic;
+pub mod barrier;
 mod condvar;
 pub mod lock;
 mod locked_by;
diff --git a/rust/kernel/sync/barrier.rs b/rust/kernel/sync/barrier.rs
new file mode 100644
index 000000000000..8f2d435fcd94
--- /dev/null
+++ b/rust/kernel/sync/barrier.rs
@@ -0,0 +1,61 @@
+// SPDX-License-Identifier: GPL-2.0
+
+//! Memory barriers.
+//!
+//! These primitives have the same semantics as their C counterparts; the precise definitions
+//! of the semantics can be found at [`LKMM`].
+//!
+//! [`LKMM`]: srctree/tools/memory-model/
+
+/// A compiler barrier.
+///
+/// A barrier that prevents the compiler from reordering memory accesses across the barrier.
+#[inline(always)]
+pub(crate) fn barrier() {
+    // By default, Rust inline asms are treated as being able to access any memory or flags,
+    // hence it suffices as a compiler barrier.
+    //
+    // SAFETY: An empty asm block.
+    unsafe { core::arch::asm!("") };
+}
+
+/// A full memory barrier.
+///
+/// A barrier that prevents the compiler and the CPU from reordering memory accesses across
+/// the barrier.
+#[inline(always)]
+pub fn smp_mb() {
+    if cfg!(CONFIG_SMP) {
+        // SAFETY: `smp_mb()` is safe to call.
+        unsafe { bindings::smp_mb() };
+    } else {
+        barrier();
+    }
+}
+
+/// A write-write memory barrier.
+///
+/// A barrier that prevents the compiler and the CPU from reordering memory write accesses
+/// across the barrier.
+#[inline(always)]
+pub fn smp_wmb() {
+    if cfg!(CONFIG_SMP) {
+        // SAFETY: `smp_wmb()` is safe to call.
+        unsafe { bindings::smp_wmb() };
+    } else {
+        barrier();
+    }
+}
+
+/// A read-read memory barrier.
+///
+/// A barrier that prevents the compiler and the CPU from reordering memory read accesses
+/// across the barrier.
+#[inline(always)]
+pub fn smp_rmb() {
+    if cfg!(CONFIG_SMP) {
+        // SAFETY: `smp_rmb()` is safe to call.
+        unsafe { bindings::smp_rmb() };
+    } else {
+        barrier();
+    }
+}
-- 
2.39.5 (Apple Git-154)
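(Editorial aside, not part of the patch: smp_mb() is the full barrier
needed by patterns like store buffering (SB), which the weaker
smp_wmb()/smp_rmb() cannot handle. A hypothetical sketch, where `x` and
`y` are shared Atomic<i32> values initialized to 0:)

    // CPU 0:
    x.store(1, Relaxed);
    smp_mb(); // order the x store before the y load
    let r0 = y.load(Relaxed);

    // CPU 1:
    y.store(1, Relaxed);
    smp_mb(); // order the y store before the x load
    let r1 = x.load(Relaxed);

    // With both smp_mb() calls, the LKMM forbids the outcome
    // r0 == 0 && r1 == 0; with only relaxed accesses it is allowed.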