From: Boqun Feng
To: linux-kernel@vger.kernel.org, rust-for-linux@vger.kernel.org, lkmm@lists.linux.dev, linux-arch@vger.kernel.org
Cc: "Miguel Ojeda" , "Alex Gaynor" , "Boqun Feng" , "Gary Guo" , "Björn Roy Baron" , "Benno Lossin" , "Andreas Hindborg" , "Alice Ryhl" , "Trevor Gross" , "Danilo Krummrich" , "Will Deacon" , "Peter Zijlstra" , "Mark Rutland" , "Wedson Almeida Filho" , "Viresh Kumar" , "Lyude Paul" , "Ingo Molnar" , "Mitchell Levy" , "Paul E. McKenney" , "Greg Kroah-Hartman" , "Linus Torvalds" , "Thomas Gleixner" , Alan Stern
Subject: [PATCH v6 1/9] rust: Introduce atomic API helpers
Date: Wed, 9 Jul 2025 23:00:44 -0700
Message-Id: <20250710060052.11955-2-boqun.feng@gmail.com>
In-Reply-To: <20250710060052.11955-1-boqun.feng@gmail.com>
References: <20250710060052.11955-1-boqun.feng@gmail.com>

In order to support LKMM atomics in Rust, add rust_helper_* wrappers for the atomic APIs. These helpers ensure that the implementation of LKMM atomics in Rust is the same as in C, which avoids the maintenance burden of keeping two similar atomic implementations in asm.
Originally-by: Mark Rutland
Reviewed-by: Alice Ryhl
Signed-off-by: Boqun Feng
---
 rust/helpers/atomic.c                     | 1040 +++++++++++++++++++++
 rust/helpers/helpers.c                    |    1 +
 scripts/atomic/gen-atomics.sh             |    1 +
 scripts/atomic/gen-rust-atomic-helpers.sh |   67 ++
 4 files changed, 1109 insertions(+)
 create mode 100644 rust/helpers/atomic.c
 create mode 100755 scripts/atomic/gen-rust-atomic-helpers.sh

diff --git a/rust/helpers/atomic.c b/rust/helpers/atomic.c
new file mode 100644
index 000000000000..cf06b7ef9a1c
--- /dev/null
+++ b/rust/helpers/atomic.c
@@ -0,0 +1,1040 @@
+// SPDX-License-Identifier: GPL-2.0
+
+// Generated by scripts/atomic/gen-rust-atomic-helpers.sh
+// DO NOT MODIFY THIS FILE DIRECTLY
+
+/*
+ * This file provides helpers for the various atomic functions for Rust.
+ */
+#ifndef _RUST_ATOMIC_API_H
+#define _RUST_ATOMIC_API_H
+
+#include <linux/atomic.h>
+
+// TODO: Remove this after INLINE_HELPERS support is added.
+#ifndef __rust_helper
+#define __rust_helper
+#endif
+
+__rust_helper int
+rust_helper_atomic_read(const atomic_t *v)
+{
+	return atomic_read(v);
+}
+
+__rust_helper int
+rust_helper_atomic_read_acquire(const atomic_t *v)
+{
+	return atomic_read_acquire(v);
+}
+
+__rust_helper void
+rust_helper_atomic_set(atomic_t *v, int i)
+{
+	atomic_set(v, i);
+}
+
+__rust_helper void
+rust_helper_atomic_set_release(atomic_t *v, int i)
+{
+	atomic_set_release(v, i);
+}
+
+__rust_helper void
+rust_helper_atomic_add(int i, atomic_t *v)
+{
+	atomic_add(i, v);
+}
+
+__rust_helper int
+rust_helper_atomic_add_return(int i, atomic_t *v)
+{
+	return atomic_add_return(i, v);
+}
+
+__rust_helper int
+rust_helper_atomic_add_return_acquire(int i, atomic_t *v)
+{
+	return atomic_add_return_acquire(i, v);
+}
+
+__rust_helper int
+rust_helper_atomic_add_return_release(int i, atomic_t *v)
+{
+	return atomic_add_return_release(i, v);
+}
+
+__rust_helper int
+rust_helper_atomic_add_return_relaxed(int i, atomic_t *v)
+{
+	return atomic_add_return_relaxed(i, v);
+}
+
+__rust_helper int
+rust_helper_atomic_fetch_add(int i, atomic_t *v) +{ + return atomic_fetch_add(i, v); +} + +__rust_helper int +rust_helper_atomic_fetch_add_acquire(int i, atomic_t *v) +{ + return atomic_fetch_add_acquire(i, v); +} + +__rust_helper int +rust_helper_atomic_fetch_add_release(int i, atomic_t *v) +{ + return atomic_fetch_add_release(i, v); +} + +__rust_helper int +rust_helper_atomic_fetch_add_relaxed(int i, atomic_t *v) +{ + return atomic_fetch_add_relaxed(i, v); +} + +__rust_helper void +rust_helper_atomic_sub(int i, atomic_t *v) +{ + atomic_sub(i, v); +} + +__rust_helper int +rust_helper_atomic_sub_return(int i, atomic_t *v) +{ + return atomic_sub_return(i, v); +} + +__rust_helper int +rust_helper_atomic_sub_return_acquire(int i, atomic_t *v) +{ + return atomic_sub_return_acquire(i, v); +} + +__rust_helper int +rust_helper_atomic_sub_return_release(int i, atomic_t *v) +{ + return atomic_sub_return_release(i, v); +} + +__rust_helper int +rust_helper_atomic_sub_return_relaxed(int i, atomic_t *v) +{ + return atomic_sub_return_relaxed(i, v); +} + +__rust_helper int +rust_helper_atomic_fetch_sub(int i, atomic_t *v) +{ + return atomic_fetch_sub(i, v); +} + +__rust_helper int +rust_helper_atomic_fetch_sub_acquire(int i, atomic_t *v) +{ + return atomic_fetch_sub_acquire(i, v); +} + +__rust_helper int +rust_helper_atomic_fetch_sub_release(int i, atomic_t *v) +{ + return atomic_fetch_sub_release(i, v); +} + +__rust_helper int +rust_helper_atomic_fetch_sub_relaxed(int i, atomic_t *v) +{ + return atomic_fetch_sub_relaxed(i, v); +} + +__rust_helper void +rust_helper_atomic_inc(atomic_t *v) +{ + atomic_inc(v); +} + +__rust_helper int +rust_helper_atomic_inc_return(atomic_t *v) +{ + return atomic_inc_return(v); +} + +__rust_helper int +rust_helper_atomic_inc_return_acquire(atomic_t *v) +{ + return atomic_inc_return_acquire(v); +} + +__rust_helper int +rust_helper_atomic_inc_return_release(atomic_t *v) +{ + return atomic_inc_return_release(v); +} + +__rust_helper int 
+rust_helper_atomic_inc_return_relaxed(atomic_t *v) +{ + return atomic_inc_return_relaxed(v); +} + +__rust_helper int +rust_helper_atomic_fetch_inc(atomic_t *v) +{ + return atomic_fetch_inc(v); +} + +__rust_helper int +rust_helper_atomic_fetch_inc_acquire(atomic_t *v) +{ + return atomic_fetch_inc_acquire(v); +} + +__rust_helper int +rust_helper_atomic_fetch_inc_release(atomic_t *v) +{ + return atomic_fetch_inc_release(v); +} + +__rust_helper int +rust_helper_atomic_fetch_inc_relaxed(atomic_t *v) +{ + return atomic_fetch_inc_relaxed(v); +} + +__rust_helper void +rust_helper_atomic_dec(atomic_t *v) +{ + atomic_dec(v); +} + +__rust_helper int +rust_helper_atomic_dec_return(atomic_t *v) +{ + return atomic_dec_return(v); +} + +__rust_helper int +rust_helper_atomic_dec_return_acquire(atomic_t *v) +{ + return atomic_dec_return_acquire(v); +} + +__rust_helper int +rust_helper_atomic_dec_return_release(atomic_t *v) +{ + return atomic_dec_return_release(v); +} + +__rust_helper int +rust_helper_atomic_dec_return_relaxed(atomic_t *v) +{ + return atomic_dec_return_relaxed(v); +} + +__rust_helper int +rust_helper_atomic_fetch_dec(atomic_t *v) +{ + return atomic_fetch_dec(v); +} + +__rust_helper int +rust_helper_atomic_fetch_dec_acquire(atomic_t *v) +{ + return atomic_fetch_dec_acquire(v); +} + +__rust_helper int +rust_helper_atomic_fetch_dec_release(atomic_t *v) +{ + return atomic_fetch_dec_release(v); +} + +__rust_helper int +rust_helper_atomic_fetch_dec_relaxed(atomic_t *v) +{ + return atomic_fetch_dec_relaxed(v); +} + +__rust_helper void +rust_helper_atomic_and(int i, atomic_t *v) +{ + atomic_and(i, v); +} + +__rust_helper int +rust_helper_atomic_fetch_and(int i, atomic_t *v) +{ + return atomic_fetch_and(i, v); +} + +__rust_helper int +rust_helper_atomic_fetch_and_acquire(int i, atomic_t *v) +{ + return atomic_fetch_and_acquire(i, v); +} + +__rust_helper int +rust_helper_atomic_fetch_and_release(int i, atomic_t *v) +{ + return atomic_fetch_and_release(i, v); +} + 
+__rust_helper int +rust_helper_atomic_fetch_and_relaxed(int i, atomic_t *v) +{ + return atomic_fetch_and_relaxed(i, v); +} + +__rust_helper void +rust_helper_atomic_andnot(int i, atomic_t *v) +{ + atomic_andnot(i, v); +} + +__rust_helper int +rust_helper_atomic_fetch_andnot(int i, atomic_t *v) +{ + return atomic_fetch_andnot(i, v); +} + +__rust_helper int +rust_helper_atomic_fetch_andnot_acquire(int i, atomic_t *v) +{ + return atomic_fetch_andnot_acquire(i, v); +} + +__rust_helper int +rust_helper_atomic_fetch_andnot_release(int i, atomic_t *v) +{ + return atomic_fetch_andnot_release(i, v); +} + +__rust_helper int +rust_helper_atomic_fetch_andnot_relaxed(int i, atomic_t *v) +{ + return atomic_fetch_andnot_relaxed(i, v); +} + +__rust_helper void +rust_helper_atomic_or(int i, atomic_t *v) +{ + atomic_or(i, v); +} + +__rust_helper int +rust_helper_atomic_fetch_or(int i, atomic_t *v) +{ + return atomic_fetch_or(i, v); +} + +__rust_helper int +rust_helper_atomic_fetch_or_acquire(int i, atomic_t *v) +{ + return atomic_fetch_or_acquire(i, v); +} + +__rust_helper int +rust_helper_atomic_fetch_or_release(int i, atomic_t *v) +{ + return atomic_fetch_or_release(i, v); +} + +__rust_helper int +rust_helper_atomic_fetch_or_relaxed(int i, atomic_t *v) +{ + return atomic_fetch_or_relaxed(i, v); +} + +__rust_helper void +rust_helper_atomic_xor(int i, atomic_t *v) +{ + atomic_xor(i, v); +} + +__rust_helper int +rust_helper_atomic_fetch_xor(int i, atomic_t *v) +{ + return atomic_fetch_xor(i, v); +} + +__rust_helper int +rust_helper_atomic_fetch_xor_acquire(int i, atomic_t *v) +{ + return atomic_fetch_xor_acquire(i, v); +} + +__rust_helper int +rust_helper_atomic_fetch_xor_release(int i, atomic_t *v) +{ + return atomic_fetch_xor_release(i, v); +} + +__rust_helper int +rust_helper_atomic_fetch_xor_relaxed(int i, atomic_t *v) +{ + return atomic_fetch_xor_relaxed(i, v); +} + +__rust_helper int +rust_helper_atomic_xchg(atomic_t *v, int new) +{ + return atomic_xchg(v, new); +} + 
+__rust_helper int +rust_helper_atomic_xchg_acquire(atomic_t *v, int new) +{ + return atomic_xchg_acquire(v, new); +} + +__rust_helper int +rust_helper_atomic_xchg_release(atomic_t *v, int new) +{ + return atomic_xchg_release(v, new); +} + +__rust_helper int +rust_helper_atomic_xchg_relaxed(atomic_t *v, int new) +{ + return atomic_xchg_relaxed(v, new); +} + +__rust_helper int +rust_helper_atomic_cmpxchg(atomic_t *v, int old, int new) +{ + return atomic_cmpxchg(v, old, new); +} + +__rust_helper int +rust_helper_atomic_cmpxchg_acquire(atomic_t *v, int old, int new) +{ + return atomic_cmpxchg_acquire(v, old, new); +} + +__rust_helper int +rust_helper_atomic_cmpxchg_release(atomic_t *v, int old, int new) +{ + return atomic_cmpxchg_release(v, old, new); +} + +__rust_helper int +rust_helper_atomic_cmpxchg_relaxed(atomic_t *v, int old, int new) +{ + return atomic_cmpxchg_relaxed(v, old, new); +} + +__rust_helper bool +rust_helper_atomic_try_cmpxchg(atomic_t *v, int *old, int new) +{ + return atomic_try_cmpxchg(v, old, new); +} + +__rust_helper bool +rust_helper_atomic_try_cmpxchg_acquire(atomic_t *v, int *old, int new) +{ + return atomic_try_cmpxchg_acquire(v, old, new); +} + +__rust_helper bool +rust_helper_atomic_try_cmpxchg_release(atomic_t *v, int *old, int new) +{ + return atomic_try_cmpxchg_release(v, old, new); +} + +__rust_helper bool +rust_helper_atomic_try_cmpxchg_relaxed(atomic_t *v, int *old, int new) +{ + return atomic_try_cmpxchg_relaxed(v, old, new); +} + +__rust_helper bool +rust_helper_atomic_sub_and_test(int i, atomic_t *v) +{ + return atomic_sub_and_test(i, v); +} + +__rust_helper bool +rust_helper_atomic_dec_and_test(atomic_t *v) +{ + return atomic_dec_and_test(v); +} + +__rust_helper bool +rust_helper_atomic_inc_and_test(atomic_t *v) +{ + return atomic_inc_and_test(v); +} + +__rust_helper bool +rust_helper_atomic_add_negative(int i, atomic_t *v) +{ + return atomic_add_negative(i, v); +} + +__rust_helper bool 
+rust_helper_atomic_add_negative_acquire(int i, atomic_t *v) +{ + return atomic_add_negative_acquire(i, v); +} + +__rust_helper bool +rust_helper_atomic_add_negative_release(int i, atomic_t *v) +{ + return atomic_add_negative_release(i, v); +} + +__rust_helper bool +rust_helper_atomic_add_negative_relaxed(int i, atomic_t *v) +{ + return atomic_add_negative_relaxed(i, v); +} + +__rust_helper int +rust_helper_atomic_fetch_add_unless(atomic_t *v, int a, int u) +{ + return atomic_fetch_add_unless(v, a, u); +} + +__rust_helper bool +rust_helper_atomic_add_unless(atomic_t *v, int a, int u) +{ + return atomic_add_unless(v, a, u); +} + +__rust_helper bool +rust_helper_atomic_inc_not_zero(atomic_t *v) +{ + return atomic_inc_not_zero(v); +} + +__rust_helper bool +rust_helper_atomic_inc_unless_negative(atomic_t *v) +{ + return atomic_inc_unless_negative(v); +} + +__rust_helper bool +rust_helper_atomic_dec_unless_positive(atomic_t *v) +{ + return atomic_dec_unless_positive(v); +} + +__rust_helper int +rust_helper_atomic_dec_if_positive(atomic_t *v) +{ + return atomic_dec_if_positive(v); +} + +__rust_helper s64 +rust_helper_atomic64_read(const atomic64_t *v) +{ + return atomic64_read(v); +} + +__rust_helper s64 +rust_helper_atomic64_read_acquire(const atomic64_t *v) +{ + return atomic64_read_acquire(v); +} + +__rust_helper void +rust_helper_atomic64_set(atomic64_t *v, s64 i) +{ + atomic64_set(v, i); +} + +__rust_helper void +rust_helper_atomic64_set_release(atomic64_t *v, s64 i) +{ + atomic64_set_release(v, i); +} + +__rust_helper void +rust_helper_atomic64_add(s64 i, atomic64_t *v) +{ + atomic64_add(i, v); +} + +__rust_helper s64 +rust_helper_atomic64_add_return(s64 i, atomic64_t *v) +{ + return atomic64_add_return(i, v); +} + +__rust_helper s64 +rust_helper_atomic64_add_return_acquire(s64 i, atomic64_t *v) +{ + return atomic64_add_return_acquire(i, v); +} + +__rust_helper s64 +rust_helper_atomic64_add_return_release(s64 i, atomic64_t *v) +{ + return 
atomic64_add_return_release(i, v); +} + +__rust_helper s64 +rust_helper_atomic64_add_return_relaxed(s64 i, atomic64_t *v) +{ + return atomic64_add_return_relaxed(i, v); +} + +__rust_helper s64 +rust_helper_atomic64_fetch_add(s64 i, atomic64_t *v) +{ + return atomic64_fetch_add(i, v); +} + +__rust_helper s64 +rust_helper_atomic64_fetch_add_acquire(s64 i, atomic64_t *v) +{ + return atomic64_fetch_add_acquire(i, v); +} + +__rust_helper s64 +rust_helper_atomic64_fetch_add_release(s64 i, atomic64_t *v) +{ + return atomic64_fetch_add_release(i, v); +} + +__rust_helper s64 +rust_helper_atomic64_fetch_add_relaxed(s64 i, atomic64_t *v) +{ + return atomic64_fetch_add_relaxed(i, v); +} + +__rust_helper void +rust_helper_atomic64_sub(s64 i, atomic64_t *v) +{ + atomic64_sub(i, v); +} + +__rust_helper s64 +rust_helper_atomic64_sub_return(s64 i, atomic64_t *v) +{ + return atomic64_sub_return(i, v); +} + +__rust_helper s64 +rust_helper_atomic64_sub_return_acquire(s64 i, atomic64_t *v) +{ + return atomic64_sub_return_acquire(i, v); +} + +__rust_helper s64 +rust_helper_atomic64_sub_return_release(s64 i, atomic64_t *v) +{ + return atomic64_sub_return_release(i, v); +} + +__rust_helper s64 +rust_helper_atomic64_sub_return_relaxed(s64 i, atomic64_t *v) +{ + return atomic64_sub_return_relaxed(i, v); +} + +__rust_helper s64 +rust_helper_atomic64_fetch_sub(s64 i, atomic64_t *v) +{ + return atomic64_fetch_sub(i, v); +} + +__rust_helper s64 +rust_helper_atomic64_fetch_sub_acquire(s64 i, atomic64_t *v) +{ + return atomic64_fetch_sub_acquire(i, v); +} + +__rust_helper s64 +rust_helper_atomic64_fetch_sub_release(s64 i, atomic64_t *v) +{ + return atomic64_fetch_sub_release(i, v); +} + +__rust_helper s64 +rust_helper_atomic64_fetch_sub_relaxed(s64 i, atomic64_t *v) +{ + return atomic64_fetch_sub_relaxed(i, v); +} + +__rust_helper void +rust_helper_atomic64_inc(atomic64_t *v) +{ + atomic64_inc(v); +} + +__rust_helper s64 +rust_helper_atomic64_inc_return(atomic64_t *v) +{ + return 
atomic64_inc_return(v); +} + +__rust_helper s64 +rust_helper_atomic64_inc_return_acquire(atomic64_t *v) +{ + return atomic64_inc_return_acquire(v); +} + +__rust_helper s64 +rust_helper_atomic64_inc_return_release(atomic64_t *v) +{ + return atomic64_inc_return_release(v); +} + +__rust_helper s64 +rust_helper_atomic64_inc_return_relaxed(atomic64_t *v) +{ + return atomic64_inc_return_relaxed(v); +} + +__rust_helper s64 +rust_helper_atomic64_fetch_inc(atomic64_t *v) +{ + return atomic64_fetch_inc(v); +} + +__rust_helper s64 +rust_helper_atomic64_fetch_inc_acquire(atomic64_t *v) +{ + return atomic64_fetch_inc_acquire(v); +} + +__rust_helper s64 +rust_helper_atomic64_fetch_inc_release(atomic64_t *v) +{ + return atomic64_fetch_inc_release(v); +} + +__rust_helper s64 +rust_helper_atomic64_fetch_inc_relaxed(atomic64_t *v) +{ + return atomic64_fetch_inc_relaxed(v); +} + +__rust_helper void +rust_helper_atomic64_dec(atomic64_t *v) +{ + atomic64_dec(v); +} + +__rust_helper s64 +rust_helper_atomic64_dec_return(atomic64_t *v) +{ + return atomic64_dec_return(v); +} + +__rust_helper s64 +rust_helper_atomic64_dec_return_acquire(atomic64_t *v) +{ + return atomic64_dec_return_acquire(v); +} + +__rust_helper s64 +rust_helper_atomic64_dec_return_release(atomic64_t *v) +{ + return atomic64_dec_return_release(v); +} + +__rust_helper s64 +rust_helper_atomic64_dec_return_relaxed(atomic64_t *v) +{ + return atomic64_dec_return_relaxed(v); +} + +__rust_helper s64 +rust_helper_atomic64_fetch_dec(atomic64_t *v) +{ + return atomic64_fetch_dec(v); +} + +__rust_helper s64 +rust_helper_atomic64_fetch_dec_acquire(atomic64_t *v) +{ + return atomic64_fetch_dec_acquire(v); +} + +__rust_helper s64 +rust_helper_atomic64_fetch_dec_release(atomic64_t *v) +{ + return atomic64_fetch_dec_release(v); +} + +__rust_helper s64 +rust_helper_atomic64_fetch_dec_relaxed(atomic64_t *v) +{ + return atomic64_fetch_dec_relaxed(v); +} + +__rust_helper void +rust_helper_atomic64_and(s64 i, atomic64_t *v) +{ + 
atomic64_and(i, v); +} + +__rust_helper s64 +rust_helper_atomic64_fetch_and(s64 i, atomic64_t *v) +{ + return atomic64_fetch_and(i, v); +} + +__rust_helper s64 +rust_helper_atomic64_fetch_and_acquire(s64 i, atomic64_t *v) +{ + return atomic64_fetch_and_acquire(i, v); +} + +__rust_helper s64 +rust_helper_atomic64_fetch_and_release(s64 i, atomic64_t *v) +{ + return atomic64_fetch_and_release(i, v); +} + +__rust_helper s64 +rust_helper_atomic64_fetch_and_relaxed(s64 i, atomic64_t *v) +{ + return atomic64_fetch_and_relaxed(i, v); +} + +__rust_helper void +rust_helper_atomic64_andnot(s64 i, atomic64_t *v) +{ + atomic64_andnot(i, v); +} + +__rust_helper s64 +rust_helper_atomic64_fetch_andnot(s64 i, atomic64_t *v) +{ + return atomic64_fetch_andnot(i, v); +} + +__rust_helper s64 +rust_helper_atomic64_fetch_andnot_acquire(s64 i, atomic64_t *v) +{ + return atomic64_fetch_andnot_acquire(i, v); +} + +__rust_helper s64 +rust_helper_atomic64_fetch_andnot_release(s64 i, atomic64_t *v) +{ + return atomic64_fetch_andnot_release(i, v); +} + +__rust_helper s64 +rust_helper_atomic64_fetch_andnot_relaxed(s64 i, atomic64_t *v) +{ + return atomic64_fetch_andnot_relaxed(i, v); +} + +__rust_helper void +rust_helper_atomic64_or(s64 i, atomic64_t *v) +{ + atomic64_or(i, v); +} + +__rust_helper s64 +rust_helper_atomic64_fetch_or(s64 i, atomic64_t *v) +{ + return atomic64_fetch_or(i, v); +} + +__rust_helper s64 +rust_helper_atomic64_fetch_or_acquire(s64 i, atomic64_t *v) +{ + return atomic64_fetch_or_acquire(i, v); +} + +__rust_helper s64 +rust_helper_atomic64_fetch_or_release(s64 i, atomic64_t *v) +{ + return atomic64_fetch_or_release(i, v); +} + +__rust_helper s64 +rust_helper_atomic64_fetch_or_relaxed(s64 i, atomic64_t *v) +{ + return atomic64_fetch_or_relaxed(i, v); +} + +__rust_helper void +rust_helper_atomic64_xor(s64 i, atomic64_t *v) +{ + atomic64_xor(i, v); +} + +__rust_helper s64 +rust_helper_atomic64_fetch_xor(s64 i, atomic64_t *v) +{ + return atomic64_fetch_xor(i, v); +} + 
+__rust_helper s64 +rust_helper_atomic64_fetch_xor_acquire(s64 i, atomic64_t *v) +{ + return atomic64_fetch_xor_acquire(i, v); +} + +__rust_helper s64 +rust_helper_atomic64_fetch_xor_release(s64 i, atomic64_t *v) +{ + return atomic64_fetch_xor_release(i, v); +} + +__rust_helper s64 +rust_helper_atomic64_fetch_xor_relaxed(s64 i, atomic64_t *v) +{ + return atomic64_fetch_xor_relaxed(i, v); +} + +__rust_helper s64 +rust_helper_atomic64_xchg(atomic64_t *v, s64 new) +{ + return atomic64_xchg(v, new); +} + +__rust_helper s64 +rust_helper_atomic64_xchg_acquire(atomic64_t *v, s64 new) +{ + return atomic64_xchg_acquire(v, new); +} + +__rust_helper s64 +rust_helper_atomic64_xchg_release(atomic64_t *v, s64 new) +{ + return atomic64_xchg_release(v, new); +} + +__rust_helper s64 +rust_helper_atomic64_xchg_relaxed(atomic64_t *v, s64 new) +{ + return atomic64_xchg_relaxed(v, new); +} + +__rust_helper s64 +rust_helper_atomic64_cmpxchg(atomic64_t *v, s64 old, s64 new) +{ + return atomic64_cmpxchg(v, old, new); +} + +__rust_helper s64 +rust_helper_atomic64_cmpxchg_acquire(atomic64_t *v, s64 old, s64 new) +{ + return atomic64_cmpxchg_acquire(v, old, new); +} + +__rust_helper s64 +rust_helper_atomic64_cmpxchg_release(atomic64_t *v, s64 old, s64 new) +{ + return atomic64_cmpxchg_release(v, old, new); +} + +__rust_helper s64 +rust_helper_atomic64_cmpxchg_relaxed(atomic64_t *v, s64 old, s64 new) +{ + return atomic64_cmpxchg_relaxed(v, old, new); +} + +__rust_helper bool +rust_helper_atomic64_try_cmpxchg(atomic64_t *v, s64 *old, s64 new) +{ + return atomic64_try_cmpxchg(v, old, new); +} + +__rust_helper bool +rust_helper_atomic64_try_cmpxchg_acquire(atomic64_t *v, s64 *old, s64 new) +{ + return atomic64_try_cmpxchg_acquire(v, old, new); +} + +__rust_helper bool +rust_helper_atomic64_try_cmpxchg_release(atomic64_t *v, s64 *old, s64 new) +{ + return atomic64_try_cmpxchg_release(v, old, new); +} + +__rust_helper bool +rust_helper_atomic64_try_cmpxchg_relaxed(atomic64_t *v, s64 *old, s64 new) 
+{ + return atomic64_try_cmpxchg_relaxed(v, old, new); +} + +__rust_helper bool +rust_helper_atomic64_sub_and_test(s64 i, atomic64_t *v) +{ + return atomic64_sub_and_test(i, v); +} + +__rust_helper bool +rust_helper_atomic64_dec_and_test(atomic64_t *v) +{ + return atomic64_dec_and_test(v); +} + +__rust_helper bool +rust_helper_atomic64_inc_and_test(atomic64_t *v) +{ + return atomic64_inc_and_test(v); +} + +__rust_helper bool +rust_helper_atomic64_add_negative(s64 i, atomic64_t *v) +{ + return atomic64_add_negative(i, v); +} + +__rust_helper bool +rust_helper_atomic64_add_negative_acquire(s64 i, atomic64_t *v) +{ + return atomic64_add_negative_acquire(i, v); +} + +__rust_helper bool +rust_helper_atomic64_add_negative_release(s64 i, atomic64_t *v) +{ + return atomic64_add_negative_release(i, v); +} + +__rust_helper bool +rust_helper_atomic64_add_negative_relaxed(s64 i, atomic64_t *v) +{ + return atomic64_add_negative_relaxed(i, v); +} + +__rust_helper s64 +rust_helper_atomic64_fetch_add_unless(atomic64_t *v, s64 a, s64 u) +{ + return atomic64_fetch_add_unless(v, a, u); +} + +__rust_helper bool +rust_helper_atomic64_add_unless(atomic64_t *v, s64 a, s64 u) +{ + return atomic64_add_unless(v, a, u); +} + +__rust_helper bool +rust_helper_atomic64_inc_not_zero(atomic64_t *v) +{ + return atomic64_inc_not_zero(v); +} + +__rust_helper bool +rust_helper_atomic64_inc_unless_negative(atomic64_t *v) +{ + return atomic64_inc_unless_negative(v); +} + +__rust_helper bool +rust_helper_atomic64_dec_unless_positive(atomic64_t *v) +{ + return atomic64_dec_unless_positive(v); +} + +__rust_helper s64 +rust_helper_atomic64_dec_if_positive(atomic64_t *v) +{ + return atomic64_dec_if_positive(v); +} + +#endif /* _RUST_ATOMIC_API_H */ +// 615a0e0c98b5973a47fe4fa65e92935051ca00ed diff --git a/rust/helpers/helpers.c b/rust/helpers/helpers.c index 16fa9bca5949..83e89f6a68fb 100644 --- a/rust/helpers/helpers.c +++ b/rust/helpers/helpers.c @@ -7,6 +7,7 @@ * Sorted alphabetically. 
 */

 #include "atomic.c"
 #include "auxiliary.c"
 #include "blk.c"
 #include "bug.c"
diff --git a/scripts/atomic/gen-atomics.sh b/scripts/atomic/gen-atomics.sh
index 5b98a8307693..02508d0d6fe4 100755
--- a/scripts/atomic/gen-atomics.sh
+++ b/scripts/atomic/gen-atomics.sh
@@ -11,6 +11,7 @@ cat <<EOF |
+gen-rust-atomic-helpers.sh      ../rust/helpers/atomic.c
 EOF
 while read script header args; do
 	/bin/sh ${ATOMICDIR}/${script} ${ATOMICTBL} ${args} > ${LINUXDIR}/include/${header}
diff --git a/scripts/atomic/gen-rust-atomic-helpers.sh b/scripts/atomic/gen-rust-atomic-helpers.sh
new file mode 100755
index 000000000000..45b1e100ed7c
--- /dev/null
+++ b/scripts/atomic/gen-rust-atomic-helpers.sh
@@ -0,0 +1,67 @@
+#!/bin/sh
+# SPDX-License-Identifier: GPL-2.0
+
+ATOMICDIR=$(dirname $0)
+
+. ${ATOMICDIR}/atomic-tbl.sh
+
+#gen_proto_order_variant(meta, pfx, name, sfx, order, atomic, int, arg...)
+gen_proto_order_variant()
+{
+	local meta="$1"; shift
+	local pfx="$1"; shift
+	local name="$1"; shift
+	local sfx="$1"; shift
+	local order="$1"; shift
+	local atomic="$1"; shift
+	local int="$1"; shift
+
+	local atomicname="${atomic}_${pfx}${name}${sfx}${order}"
+
+	local ret="$(gen_ret_type "${meta}" "${int}")"
+	local params="$(gen_params "${int}" "${atomic}" "$@")"
+	local args="$(gen_args "$@")"
+	local retstmt="$(gen_ret_stmt "${meta}")"
+
+cat <<EOF
+__rust_helper ${ret}
+rust_helper_${atomicname}(${params})
+{
+	${retstmt}${atomicname}(${args});
+}
+
+EOF
+}
+
+cat <<EOF
+// SPDX-License-Identifier: GPL-2.0
+
+// Generated by $0
+// DO NOT MODIFY THIS FILE DIRECTLY
+
+/*
+ * This file provides helpers for the various atomic functions for Rust.
+ */
+#ifndef _RUST_ATOMIC_API_H
+#define _RUST_ATOMIC_API_H
+
+#include <linux/atomic.h>
+
+// TODO: Remove this after INLINE_HELPERS support is added.
+#ifndef __rust_helper
+#define __rust_helper
+#endif
+
+EOF
+
+grep '^[a-z]' "$1" | while read name meta args; do
+	gen_proto "${meta}" "${name}" "atomic" "int" ${args}
+done
+
+grep '^[a-z]' "$1" | while read name meta args; do
+	gen_proto "${meta}" "${name}" "atomic64" "s64" ${args}
+done
+
+cat <<EOF
+#endif /* _RUST_ATOMIC_API_H */
+EOF

From: Boqun Feng
To: linux-kernel@vger.kernel.org, rust-for-linux@vger.kernel.org, lkmm@lists.linux.dev, linux-arch@vger.kernel.org
Cc: "Miguel Ojeda" , "Alex Gaynor" , "Boqun Feng" , "Gary Guo" , "Björn Roy Baron" , "Benno Lossin" , "Andreas
Hindborg" , "Alice Ryhl" , "Trevor Gross" , "Danilo Krummrich" , "Will Deacon" , "Peter Zijlstra" , "Mark Rutland" , "Wedson Almeida Filho" , "Viresh Kumar" , "Lyude Paul" , "Ingo Molnar" , "Mitchell Levy" , "Paul E. McKenney" , "Greg Kroah-Hartman" , "Linus Torvalds" , "Thomas Gleixner" , Alan Stern
Subject: [PATCH v6 2/9] rust: sync: Add basic atomic operation mapping framework
Date: Wed, 9 Jul 2025 23:00:45 -0700
Message-Id: <20250710060052.11955-3-boqun.feng@gmail.com>
X-Mailer: git-send-email 2.39.5 (Apple Git-154)
In-Reply-To: <20250710060052.11955-1-boqun.feng@gmail.com>
References: <20250710060052.11955-1-boqun.feng@gmail.com>
MIME-Version: 1.0
Content-Type: text/plain; charset="utf-8"

Preparation for generic atomic implementation. To unify the implementation
of a generic method over `i32` and `i64`, the C side atomic methods need to
be grouped so that in a generic method, they can be referred to as ::,
otherwise their parameters and return values differ between `i32` and
`i64`, which would require using `transmute()` to unify the type into a
`T`.

Introduce `AtomicImpl` to represent a basic type in Rust that has a direct
mapping to an atomic implementation from C. This trait is sealed, and
currently only `i32` and `i64` impl this.

Further, different methods are put into different `*Ops` trait groups, in
preparation for the future when smaller types like `i8`/`i16` are supported
but only with a limited set of APIs (e.g. only set(), load(), xchg() and
cmpxchg(), no add() or sub() etc).

While the atomic mod is introduced, documentation is also added for memory
models and data races.

Also bump my role to the maintainer of ATOMIC INFRASTRUCTURE to reflect my
responsibility on the Rust atomic mod.
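The grouping this commit message describes can be sketched as a small standalone Rust program. Everything below is illustrative, not the kernel API: `c_atomic_read`/`c_atomic64_read` are hypothetical stand-ins for the C-side `atomic_read()`/`atomic64_read()` primitives, and the trait is trimmed to a single method.

```rust
// Sketch of the grouping pattern: a sealed trait maps each Rust integer to
// one C atomic implementation, so generic code calls `T::read(...)` instead
// of transmuting between widths.
mod private {
    pub trait Sealed {}
    impl Sealed for i32 {}
    impl Sealed for i64 {}
}

// Hypothetical stand-ins for the C-side primitives.
fn c_atomic_read(v: &i32) -> i32 { *v }
fn c_atomic64_read(v: &i64) -> i64 { *v }

trait AtomicImpl: Sized + Copy + private::Sealed {
    fn read(&self) -> Self;
}

impl AtomicImpl for i32 {
    fn read(&self) -> Self { c_atomic_read(self) }
}

impl AtomicImpl for i64 {
    fn read(&self) -> Self { c_atomic64_read(self) }
}

// One generic body now serves both widths, with no `transmute()` involved.
fn load<T: AtomicImpl>(v: &T) -> T {
    v.read()
}

fn main() {
    assert_eq!(load(&1i32), 1);
    assert_eq!(load(&2i64), 2);
}
```

Because the trait is sealed, no code outside the module can add a third implementation, which keeps the i32/i64 pairing closed exactly as the patch intends.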
Reviewed-by: Alice Ryhl
Signed-off-by: Boqun Feng
Reviewed-by: Benno Lossin
---
 MAINTAINERS                    |   4 +-
 rust/kernel/sync.rs            |   1 +
 rust/kernel/sync/atomic.rs     |  19 ++++
 rust/kernel/sync/atomic/ops.rs | 195 +++++++++++++++++++++++++++++++++
 4 files changed, 218 insertions(+), 1 deletion(-)
 create mode 100644 rust/kernel/sync/atomic.rs
 create mode 100644 rust/kernel/sync/atomic/ops.rs

diff --git a/MAINTAINERS b/MAINTAINERS
index 0c1d245bf7b8..5eef524975ca 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -3894,7 +3894,7 @@ F:	drivers/input/touchscreen/atmel_mxt_ts.c
 ATOMIC INFRASTRUCTURE
 M:	Will Deacon
 M:	Peter Zijlstra
-R:	Boqun Feng
+M:	Boqun Feng
 R:	Mark Rutland
 L:	linux-kernel@vger.kernel.org
 S:	Maintained
@@ -3903,6 +3903,8 @@ F:	arch/*/include/asm/atomic*.h
 F:	include/*/atomic*.h
 F:	include/linux/refcount.h
 F:	scripts/atomic/
+F:	rust/kernel/sync/atomic.rs
+F:	rust/kernel/sync/atomic/

 ATTO EXPRESSSAS SAS/SATA RAID SCSI DRIVER
 M:	Bradley Grove
diff --git a/rust/kernel/sync.rs b/rust/kernel/sync.rs
index 36a719015583..b620027e0641 100644
--- a/rust/kernel/sync.rs
+++ b/rust/kernel/sync.rs
@@ -10,6 +10,7 @@ use pin_init;

 mod arc;
+pub mod atomic;
 mod condvar;
 pub mod lock;
 mod locked_by;
diff --git a/rust/kernel/sync/atomic.rs b/rust/kernel/sync/atomic.rs
new file mode 100644
index 000000000000..c9c7c3617dd5
--- /dev/null
+++ b/rust/kernel/sync/atomic.rs
@@ -0,0 +1,19 @@
+// SPDX-License-Identifier: GPL-2.0
+
+//! Atomic primitives.
+//!
+//! These primitives have the same semantics as their C counterparts, and the precise definitions
+//! of semantics can be found at [`LKMM`]. Note that the Linux Kernel Memory (Consistency) Model is
+//! the only model for Rust code in kernel, and Rust's own atomics should be avoided.
+//!
+//! # Data races
+//!
+//! [`LKMM`] atomics have different rules regarding data races:
+//!
+//! - A normal write from C side is treated as an atomic write if
+//!   CONFIG_KCSAN_ASSUME_PLAIN_WRITES_ATOMIC=y.
+//! - Mixed-size atomic accesses don't cause data races.
+//!
+//! [`LKMM`]: srctree/tools/memory-model/
+
+pub mod ops;
diff --git a/rust/kernel/sync/atomic/ops.rs b/rust/kernel/sync/atomic/ops.rs
new file mode 100644
index 000000000000..da04dd383962
--- /dev/null
+++ b/rust/kernel/sync/atomic/ops.rs
@@ -0,0 +1,195 @@
+// SPDX-License-Identifier: GPL-2.0
+
+//! Atomic implementations.
+//!
+//! Provides 1:1 mapping of atomic implementations.
+
+use crate::bindings::*;
+use crate::macros::paste;
+
+mod private {
+    /// Sealed trait marker to disable customized impls on atomic implementation traits.
+    pub trait Sealed {}
+}
+
+// `i32` and `i64` are the only supported atomic implementations.
+impl private::Sealed for i32 {}
+impl private::Sealed for i64 {}
+
+/// A marker trait for types that implement atomic operations with C side primitives.
+///
+/// This trait is sealed, and only types that have a direct mapping to the C side atomics should
+/// impl this:
+///
+/// - `i32` maps to `atomic_t`.
+/// - `i64` maps to `atomic64_t`.
+pub trait AtomicImpl: Sized + Send + Copy + private::Sealed {}
+
+// `atomic_t` implements atomic operations on `i32`.
+impl AtomicImpl for i32 {}
+
+// `atomic64_t` implements atomic operations on `i64`.
+impl AtomicImpl for i64 {}
+
+// This macro generates the function signature with given argument list and return type.
+macro_rules! declare_atomic_method {
+    (
+        $func:ident($($arg:ident : $arg_type:ty),*) $(-> $ret:ty)?
+    ) => {
+        paste!(
+            #[doc = concat!("Atomic ", stringify!($func))]
+            #[doc = "# Safety"]
+            #[doc = "- Any pointer passed to the function has to be a valid pointer"]
+            #[doc = "- Accesses must not cause data races per LKMM:"]
+            #[doc = "  - Atomic read racing with normal read, normal write or atomic write is not a data race."]
+            #[doc = "  - Atomic write racing with normal read or normal write is a data race, unless the"]
+            #[doc = "    normal accesses are done at C side and considered as immune to data"]
+            #[doc = "    races, e.g. CONFIG_KCSAN_ASSUME_PLAIN_WRITES_ATOMIC."]
+            unsafe fn [< atomic_ $func >]($($arg: $arg_type,)*) $(-> $ret)?;
+        );
+    };
+    (
+        $func:ident [$variant:ident $($rest:ident)*]($($arg_sig:tt)*) $(-> $ret:ty)?
+    ) => {
+        paste!(
+            declare_atomic_method!(
+                [< $func _ $variant >]($($arg_sig)*) $(-> $ret)?
+            );
+        );
+
+        declare_atomic_method!(
+            $func [$($rest)*]($($arg_sig)*) $(-> $ret)?
+        );
+    };
+    (
+        $func:ident []($($arg_sig:tt)*) $(-> $ret:ty)?
+    ) => {
+        declare_atomic_method!(
+            $func($($arg_sig)*) $(-> $ret)?
+        );
+    }
+}
+
+// This macro generates the function implementation with given argument list and return type, and
+// it will replace the "call(...)" expression with "$ctype _ $func" to call the real C function.
+macro_rules! impl_atomic_method {
+    (
+        ($ctype:ident) $func:ident($($arg:ident: $arg_type:ty),*) $(-> $ret:ty)? {
+            call($($c_arg:expr),*)
+        }
+    ) => {
+        paste!(
+            #[inline(always)]
+            unsafe fn [< atomic_ $func >]($($arg: $arg_type,)*) $(-> $ret)? {
+                // SAFETY: Per function safety requirement, all pointers are valid, and accesses
+                // won't cause data race per LKMM.
+                unsafe { [< $ctype _ $func >]($($c_arg,)*) }
+            }
+        );
+    };
+    (
+        ($ctype:ident) $func:ident[$variant:ident $($rest:ident)*]($($arg_sig:tt)*) $(-> $ret:ty)?
+        {
+            call($($arg:tt)*)
+        }
+    ) => {
+        paste!(
+            impl_atomic_method!(
+                ($ctype) [< $func _ $variant >]($($arg_sig)*) $( -> $ret)? {
+                    call($($arg)*)
+                }
+            );
+        );
+        impl_atomic_method!(
+            ($ctype) $func [$($rest)*]($($arg_sig)*) $( -> $ret)? {
+                call($($arg)*)
+            }
+        );
+    };
+    (
+        ($ctype:ident) $func:ident[]($($arg_sig:tt)*) $( -> $ret:ty)? {
+            call($($arg:tt)*)
+        }
+    ) => {
+        impl_atomic_method!(
+            ($ctype) $func($($arg_sig)*) $(-> $ret)? {
+                call($($arg)*)
+            }
+        );
+    }
+}
+
+// Declares $ops trait with methods and implements the trait for `i32` and `i64`.
+macro_rules! declare_and_impl_atomic_methods {
+    ($ops:ident ($doc:literal) {
+        $(
+            $func:ident [$($variant:ident),*]($($arg_sig:tt)*) $( -> $ret:ty)? {
+                call($($arg:tt)*)
+            }
+        )*
+    }) => {
+        #[doc = $doc]
+        pub trait $ops: AtomicImpl {
+            $(
+                declare_atomic_method!(
+                    $func[$($variant)*]($($arg_sig)*) $(-> $ret)?
+                );
+            )*
+        }
+
+        impl $ops for i32 {
+            $(
+                impl_atomic_method!(
+                    (atomic) $func[$($variant)*]($($arg_sig)*) $(-> $ret)? {
+                        call($($arg)*)
+                    }
+                );
+            )*
+        }
+
+        impl $ops for i64 {
+            $(
+                impl_atomic_method!(
+                    (atomic64) $func[$($variant)*]($($arg_sig)*) $(-> $ret)?
+                    {
+                        call($($arg)*)
+                    }
+                );
+            )*
+        }
+    }
+}
+
+declare_and_impl_atomic_methods!(
+    AtomicHasBasicOps ("Basic atomic operations") {
+        read[acquire](ptr: *mut Self) -> Self {
+            call(ptr.cast())
+        }
+
+        set[release](ptr: *mut Self, v: Self) {
+            call(ptr.cast(), v)
+        }
+    }
+);
+
+declare_and_impl_atomic_methods!(
+    AtomicHasXchgOps ("Exchange and compare-and-exchange atomic operations") {
+        xchg[acquire, release, relaxed](ptr: *mut Self, v: Self) -> Self {
+            call(ptr.cast(), v)
+        }
+
+        try_cmpxchg[acquire, release, relaxed](ptr: *mut Self, old: *mut Self, new: Self) -> bool {
+            call(ptr.cast(), old, new)
+        }
+    }
+);
+
+declare_and_impl_atomic_methods!(
+    AtomicHasArithmeticOps ("Atomic arithmetic operations") {
+        add[](ptr: *mut Self, v: Self) {
+            call(v, ptr.cast())
+        }
+
+        fetch_add[acquire, release, relaxed](ptr: *mut Self, v: Self) -> Self {
+            call(v, ptr.cast())
+        }
+    }
+);
--
2.39.5 (Apple Git-154)

From nobody Tue Oct 7 11:45:07 2025
Received: by mail.messagingengine.com (Postfix) with ESMTPA; Thu, 10 Jul 2025 02:01:00 -0400 (EDT)
From: Boqun Feng
To:
linux-kernel@vger.kernel.org, rust-for-linux@vger.kernel.org, lkmm@lists.linux.dev, linux-arch@vger.kernel.org
Cc: "Miguel Ojeda" , "Alex Gaynor" , "Boqun Feng" , "Gary Guo" , "Björn Roy Baron" , "Benno Lossin" , "Andreas Hindborg" , "Alice Ryhl" , "Trevor Gross" , "Danilo Krummrich" , "Will Deacon" , "Peter Zijlstra" , "Mark Rutland" , "Wedson Almeida Filho" , "Viresh Kumar" , "Lyude Paul" , "Ingo Molnar" , "Mitchell Levy" , "Paul E. McKenney" , "Greg Kroah-Hartman" , "Linus Torvalds" , "Thomas Gleixner" , Alan Stern
Subject: [PATCH v6 3/9] rust: sync: atomic: Add ordering annotation types
Date: Wed, 9 Jul 2025 23:00:46 -0700
Message-Id: <20250710060052.11955-4-boqun.feng@gmail.com>
X-Mailer: git-send-email 2.39.5 (Apple Git-154)
In-Reply-To: <20250710060052.11955-1-boqun.feng@gmail.com>
References: <20250710060052.11955-1-boqun.feng@gmail.com>
MIME-Version: 1.0
Content-Type: text/plain; charset="utf-8"

Preparation for atomic primitives. Instead of a suffix like _acquire, a
method parameter along with the corresponding generic parameter will be
used to specify the ordering of an atomic operation. For example, atomic
load() can be defined as:

	impl Atomic {
	    pub fn load(&self, _o: O) -> T { ... }
	}

and acquire users would do:

	let r = x.load(Acquire);

relaxed users:

	let r = x.load(Relaxed);

doing the following:

	let r = x.load(Release);

will cause a compiler error.

Compared to suffixes, it's easier to tell what ordering variants an
operation has, and it also makes it easier to unify the implementation of
all ordering variants in one method via generics. The `TYPE` associated
const is for generic functions to pick up the particular implementation
specified by an ordering annotation.
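The ordering-annotation pattern above can be sketched outside the kernel as follows. This is an illustrative assumption-laden sketch: it borrows the trait names from the patch (`Any`, `AcquireOrRelaxed`, `TYPE`) but uses Rust's own `core::sync::atomic` purely as a stand-in backend, which the real kernel code deliberately does not do.

```rust
// Marker types select the ordering at the type level; a sealed trait with an
// associated const lets one generic body dispatch to the right implementation.
use core::sync::atomic::{AtomicI32, Ordering};

pub struct Relaxed;
pub struct Acquire;
pub struct Release;

mod internal {
    pub trait Sealed {}
    impl Sealed for super::Relaxed {}
    impl Sealed for super::Acquire {}
    impl Sealed for super::Release {}
}

pub enum OrderingType { Relaxed, Acquire, Release }

pub trait Any: internal::Sealed {
    const TYPE: OrderingType;
}
impl Any for Relaxed { const TYPE: OrderingType = OrderingType::Relaxed; }
impl Any for Acquire { const TYPE: OrderingType = OrderingType::Acquire; }
impl Any for Release { const TYPE: OrderingType = OrderingType::Release; }

/// Orderings valid for a load: acquire or relaxed, but never release.
pub trait AcquireOrRelaxed: Any {}
impl AcquireOrRelaxed for Acquire {}
impl AcquireOrRelaxed for Relaxed {}

pub fn load<O: AcquireOrRelaxed>(v: &AtomicI32, _o: O) -> i32 {
    // One generic body; the associated const picks the concrete ordering.
    match O::TYPE {
        OrderingType::Acquire => v.load(Ordering::Acquire),
        _ => v.load(Ordering::Relaxed),
    }
}

fn main() {
    let x = AtomicI32::new(42);
    assert_eq!(load(&x, Acquire), 42);
    assert_eq!(load(&x, Relaxed), 42);
    // load(&x, Release) would fail to compile: `Release` does not implement
    // `AcquireOrRelaxed`, which is exactly the compiler error described above.
}
```

The design choice is that an invalid ordering is rejected at type-check time rather than at run time, and the set of legal orderings is visible in the method signature.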
Reviewed-by: Alice Ryhl
Signed-off-by: Boqun Feng
Reviewed-by: Benno Lossin
---
 rust/kernel/sync/atomic.rs          |  3 +
 rust/kernel/sync/atomic/ordering.rs | 97 +++++++++++++++++++++++++++++
 2 files changed, 100 insertions(+)
 create mode 100644 rust/kernel/sync/atomic/ordering.rs

diff --git a/rust/kernel/sync/atomic.rs b/rust/kernel/sync/atomic.rs
index c9c7c3617dd5..e80ac049f36b 100644
--- a/rust/kernel/sync/atomic.rs
+++ b/rust/kernel/sync/atomic.rs
@@ -17,3 +17,6 @@
 //! [`LKMM`]: srctree/tools/memory-model/

 pub mod ops;
+pub mod ordering;
+
+pub use ordering::{Acquire, Full, Relaxed, Release};
diff --git a/rust/kernel/sync/atomic/ordering.rs b/rust/kernel/sync/atomic/ordering.rs
new file mode 100644
index 000000000000..5fffbaa2fa6d
--- /dev/null
+++ b/rust/kernel/sync/atomic/ordering.rs
@@ -0,0 +1,97 @@
+// SPDX-License-Identifier: GPL-2.0
+
+//! Memory orderings.
+//!
+//! The semantics of these orderings follow the [`LKMM`] definitions and rules.
+//!
+//! - [`Acquire`] provides ordering between the load part of the annotated operation and all the
+//!   following memory accesses, and if there is a store part, the store part has the [`Relaxed`]
+//!   ordering.
+//! - [`Release`] provides ordering between all the preceding memory accesses and the store part of
+//!   the annotated operation, and if there is a load part, the load part has the [`Relaxed`]
+//!   ordering.
+//! - [`Full`] means "fully-ordered", that is:
+//!   - It provides ordering between all the preceding memory accesses and the annotated operation.
+//!   - It provides ordering between the annotated operation and all the following memory accesses.
+//!   - It provides ordering between all the preceding memory accesses and all the following memory
+//!     accesses.
+//!   - All the orderings are the same strength as a full memory barrier (i.e. `smp_mb()`).
+//! - [`Relaxed`] provides no ordering except the dependency orderings. Dependency orderings are
+//!   described in "DEPENDENCY RELATIONS" in [`LKMM`]'s [`explanation`].
+//!
+//! [`LKMM`]: srctree/tools/memory-model/
+//! [`explanation`]: srctree/tools/memory-model/Documentation/explanation.txt
+
+/// The annotation type for relaxed memory ordering.
+pub struct Relaxed;
+
+/// The annotation type for acquire memory ordering.
+pub struct Acquire;
+
+/// The annotation type for release memory ordering.
+pub struct Release;
+
+/// The annotation type for fully-ordered memory ordering.
+pub struct Full;
+
+/// Describes the exact memory ordering.
+#[doc(hidden)]
+pub enum OrderingType {
+    /// Relaxed ordering.
+    Relaxed,
+    /// Acquire ordering.
+    Acquire,
+    /// Release ordering.
+    Release,
+    /// Fully-ordered.
+    Full,
+}
+
+mod internal {
+    /// Sealed trait, can be only implemented inside atomic mod.
+    pub trait Sealed {}
+
+    impl Sealed for super::Relaxed {}
+    impl Sealed for super::Acquire {}
+    impl Sealed for super::Release {}
+    impl Sealed for super::Full {}
+}
+
+/// The trait bound for annotating operations that support any ordering.
+pub trait Any: internal::Sealed {
+    /// Describes the exact memory ordering.
+    const TYPE: OrderingType;
+}
+
+impl Any for Relaxed {
+    const TYPE: OrderingType = OrderingType::Relaxed;
+}
+
+impl Any for Acquire {
+    const TYPE: OrderingType = OrderingType::Acquire;
+}
+
+impl Any for Release {
+    const TYPE: OrderingType = OrderingType::Release;
+}
+
+impl Any for Full {
+    const TYPE: OrderingType = OrderingType::Full;
+}
+
+/// The trait bound for operations that only support acquire or relaxed ordering.
+pub trait AcquireOrRelaxed: Any {}
+
+impl AcquireOrRelaxed for Acquire {}
+impl AcquireOrRelaxed for Relaxed {}
+
+/// The trait bound for operations that only support release or relaxed ordering.
+pub trait ReleaseOrRelaxed: Any {}
+
+impl ReleaseOrRelaxed for Release {}
+impl ReleaseOrRelaxed for Relaxed {}
+
+/// The trait bound for operations that only support relaxed ordering.
+pub trait RelaxedOnly: AcquireOrRelaxed + ReleaseOrRelaxed + Any {}
+
+impl RelaxedOnly for Relaxed {}
--
2.39.5 (Apple Git-154)

From nobody Tue Oct 7 11:45:07 2025
Received: by mail.messagingengine.com (Postfix) with ESMTPA; Thu, 10 Jul 2025 02:01:01 -0400 (EDT)
From: Boqun Feng
To: linux-kernel@vger.kernel.org, rust-for-linux@vger.kernel.org, lkmm@lists.linux.dev, linux-arch@vger.kernel.org
Cc: "Miguel Ojeda" , "Alex Gaynor" , "Boqun Feng" , "Gary Guo" , "Björn Roy Baron" , "Benno Lossin" , "Andreas Hindborg" , "Alice Ryhl" , "Trevor Gross" , "Danilo Krummrich" , "Will Deacon" , "Peter Zijlstra" , "Mark Rutland" , "Wedson Almeida Filho" , "Viresh Kumar" , "Lyude Paul" , "Ingo Molnar" , "Mitchell Levy" , "Paul E. McKenney" , "Greg Kroah-Hartman" , "Linus Torvalds" , "Thomas Gleixner" , Alan Stern
Subject: [PATCH v6 4/9] rust: sync: atomic: Add generic atomics
Date: Wed, 9 Jul 2025 23:00:47 -0700
Message-Id: <20250710060052.11955-5-boqun.feng@gmail.com>
X-Mailer: git-send-email 2.39.5 (Apple Git-154)
In-Reply-To: <20250710060052.11955-1-boqun.feng@gmail.com>
References: <20250710060052.11955-1-boqun.feng@gmail.com>
MIME-Version: 1.0
Content-Type: text/plain; charset="utf-8"

To allow Rust code to use LKMM atomics, a generic `Atomic` is added.
Currently `T` needs to be Send + Copy because these are the
straightforward use cases, and all basic types support this.

Implement `AllowAtomic` for `i32` and `i64`; so far only the basic
operations load() and store() are introduced.
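A rough standalone approximation of the `Atomic` shape described above is sketched below. All of it is assumption: it uses `core::sync::atomic::AtomicI32` as a stand-in for the C-side implementation (the real patch routes through the LKMM `ops` traits instead, since the module documentation says Rust's own atomics should be avoided in kernel code), and it implements only the `i32` case.

```rust
// A `repr(transparent)` wrapper over `UnsafeCell<T>`, with load/store
// delegated to an atomic view of the same memory.
use core::cell::UnsafeCell;
use core::sync::atomic::{AtomicI32, Ordering};

#[repr(transparent)]
pub struct Atomic<T>(UnsafeCell<T>);

// SAFETY: all accesses below go through atomic operations.
unsafe impl<T: Send + Copy> Sync for Atomic<T> {}

impl Atomic<i32> {
    pub const fn new(v: i32) -> Self {
        Self(UnsafeCell::new(v))
    }

    fn as_atomic(&self) -> &AtomicI32 {
        // SAFETY: `AtomicI32` has the same in-memory representation as `i32`,
        // and the cell's pointer is valid for the lifetime of `&self`.
        unsafe { AtomicI32::from_ptr(self.0.get()) }
    }

    pub fn load(&self) -> i32 {
        self.as_atomic().load(Ordering::Relaxed)
    }

    pub fn store(&self, v: i32) {
        self.as_atomic().store(v, Ordering::Relaxed);
    }
}

fn main() {
    let x = Atomic::new(1);
    x.store(2);
    assert_eq!(x.load(), 2);
}
```

The `repr(transparent)` layout is what makes the patch's `from_ptr()` pattern possible: a `*mut T` obtained from C can be reinterpreted as a `&Atomic<T>` without changing the memory layout.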
Reviewed-by: Alice Ryhl
Signed-off-by: Boqun Feng
---
 rust/kernel/sync/atomic.rs         |  14 ++
 rust/kernel/sync/atomic/generic.rs | 289 +++++++++++++++++++++++++++++
 2 files changed, 303 insertions(+)
 create mode 100644 rust/kernel/sync/atomic/generic.rs

diff --git a/rust/kernel/sync/atomic.rs b/rust/kernel/sync/atomic.rs
index e80ac049f36b..c5193c1c90fe 100644
--- a/rust/kernel/sync/atomic.rs
+++ b/rust/kernel/sync/atomic.rs
@@ -16,7 +16,21 @@
 //!
 //! [`LKMM`]: srctree/tools/memory-model/

+pub mod generic;
 pub mod ops;
 pub mod ordering;

+pub use generic::Atomic;
 pub use ordering::{Acquire, Full, Relaxed, Release};
+
+// SAFETY: `i32` has the same size and alignment as itself, and is round-trip transmutable to
+// itself.
+unsafe impl generic::AllowAtomic for i32 {
+    type Repr = i32;
+}
+
+// SAFETY: `i64` has the same size and alignment as itself, and is round-trip transmutable to
+// itself.
+unsafe impl generic::AllowAtomic for i64 {
+    type Repr = i64;
+}
diff --git a/rust/kernel/sync/atomic/generic.rs b/rust/kernel/sync/atomic/generic.rs
new file mode 100644
index 000000000000..e044fe21b128
--- /dev/null
+++ b/rust/kernel/sync/atomic/generic.rs
@@ -0,0 +1,289 @@
+// SPDX-License-Identifier: GPL-2.0
+
+//! Generic atomic primitives.
+
+use super::ops::*;
+use super::ordering::*;
+use crate::build_error;
+use core::cell::UnsafeCell;
+
+/// A generic atomic variable.
+///
+/// `T` must impl [`AllowAtomic`], that is, an [`AtomicImpl`] has to be chosen.
+///
+/// # Examples
+///
+/// A customized type stored in [`Atomic`]:
+///
+/// ```rust
+/// use kernel::sync::atomic::{generic::AllowAtomic, Atomic, Relaxed};
+///
+/// #[derive(Clone, Copy, PartialEq, Eq)]
+/// #[repr(i32)]
+/// enum State {
+///     Uninit = 0,
+///     Working = 1,
+///     Done = 2,
+/// };
+///
+/// // SAFETY: `State` and `i32` have the same size and alignment, and `State` is round-trip
+/// // transmutable to `i32`.
+/// unsafe impl AllowAtomic for State {
+///     type Repr = i32;
+/// }
+///
+/// let s = Atomic::new(State::Uninit);
+///
+/// assert_eq!(State::Uninit, s.load(Relaxed));
+/// ```
+///
+/// # Guarantees
+///
+/// Doing an atomic operation while holding a reference of [`Self`] won't cause a data race;
+/// this is guaranteed by the safety requirement of [`Self::from_ptr()`] and the extra safety
+/// requirement of the usage on pointers returned by [`Self::as_ptr()`].
+#[repr(transparent)]
+pub struct Atomic(UnsafeCell);
+
+// SAFETY: `Atomic` is safe to share among execution contexts because all accesses are atomic.
+unsafe impl Sync for Atomic {}
+
+/// Types that support basic atomic operations.
+///
+/// # Round-trip transmutability
+///
+/// Implementing [`AllowAtomic`] requires that the type is round-trip transmutable to its
+/// representation:
+///
+/// - Any value of [`Self`] must be sound to [`transmute()`] to a [`Self::Repr`], and this also
+///   means that a pointer to [`Self`] can be treated as a pointer to [`Self::Repr`] for reading.
+/// - If a value of [`Self::Repr`] is a result of a [`transmute()`] from a [`Self`], it must be
+///   sound to [`transmute()`] the value back to a [`Self`].
+///
+/// This essentially means a valid bit pattern of `T: AllowAtomic` has to be a valid bit pattern
+/// of `T::Repr`. This is needed because [`Atomic`] operates on `T::Repr` to
+/// implement atomic operations on `T`.
+///
+/// Note that this is more relaxed than bidirectional transmutability (i.e. [`transmute()`] is
+/// always sound between `T` and `T::Repr`) because of the support for atomic variables over
+/// unit-only enums:
+///
+/// ```
+/// #[repr(i32)]
+/// enum State { Init = 0, Working = 1, Done = 2, }
+/// ```
+///
+/// # Safety
+///
+/// - [`Self`] must have the same size and alignment as [`Self::Repr`].
+/// - [`Self`] and [`Self::Repr`] must have the [round-trip transmutability].
+///
+/// # Limitations
+///
+/// Because C primitives are used to implement the atomic operations, and a C function requires a
+/// valid object of a type to operate on (i.e. no `MaybeUninit<_>`), only types with no
+/// uninitialized bits can be passed at the Rust <-> C surface. As a result, types like `(u8,
+/// u16)` (a tuple with a `MaybeUninit` hole in it) are currently not supported. Note that
+/// technically these types can be supported if some APIs are removed for them and the inner
+/// implementation is tweaked, but the justification for supporting such types is not strong
+/// enough at the moment. This should be resolved if there is an implementation for `MaybeUninit`
+/// as `AtomicImpl`.
+///
+/// [`transmute()`]: core::mem::transmute
+/// [round-trip transmutability]: AllowAtomic#round-trip-transmutability
+pub unsafe trait AllowAtomic: Sized + Send + Copy {
+    /// The backing atomic implementation type.
+    type Repr: AtomicImpl;
+}
+
+#[inline(always)]
+const fn into_repr<T: AllowAtomic>(v: T) -> T::Repr {
+    // SAFETY: Per the safety requirement of `AllowAtomic`, the transmute operation is sound.
+    unsafe { core::mem::transmute_copy(&v) }
+}
+
+/// # Safety
+///
+/// `r` must be a valid bit pattern of `T`.
+#[inline(always)]
+const unsafe fn from_repr<T: AllowAtomic>(r: T::Repr) -> T {
+    // SAFETY: Per the safety requirement of the function, the transmute operation is sound.
+    unsafe { core::mem::transmute_copy(&r) }
+}
+
+impl<T: AllowAtomic> Atomic<T> {
+    /// Creates a new atomic.
+    pub const fn new(v: T) -> Self {
+        Self(UnsafeCell::new(v))
+    }
+
+    /// Creates a reference to [`Self`] from a pointer.
+    ///
+    /// # Safety
+    ///
+    /// - `ptr` has to be a valid pointer.
+    /// - `ptr` has to be valid for both reads and writes for the whole lifetime `'a`.
+    /// - For the duration of `'a`, other accesses to the object cannot cause data races (defined
+    ///   by [`LKMM`]) against atomic operations on the returned reference. Note that if all other
+    ///   accesses are atomic, then this safety requirement is trivially fulfilled.
+    ///
+    /// [`LKMM`]: srctree/tools/memory-model
+    ///
+    /// # Examples
+    ///
+    /// Using [`Atomic::from_ptr()`] combined with [`Atomic::load()`] or [`Atomic::store()`] can
+    /// achieve the same functionality as `READ_ONCE()`/`smp_load_acquire()` or
+    /// `WRITE_ONCE()`/`smp_store_release()` on the C side:
+    ///
+    /// ```rust
+    /// # use kernel::types::Opaque;
+    /// use kernel::sync::atomic::{Atomic, Relaxed, Release};
+    ///
+    /// // Assume there is a C struct `foo`.
+    /// mod cbindings {
+    ///     #[repr(C)]
+    ///     pub(crate) struct foo { pub(crate) a: i32, pub(crate) b: i32 }
+    /// }
+    ///
+    /// let tmp = Opaque::new(cbindings::foo { a: 1, b: 2 });
+    ///
+    /// // struct foo *foo_ptr = ..;
+    /// let foo_ptr = tmp.get();
+    ///
+    /// // SAFETY: `foo_ptr` is a valid pointer, and `.a` is in bounds.
+    /// let foo_a_ptr = unsafe { &raw mut (*foo_ptr).a };
+    ///
+    /// // a = READ_ONCE(foo_ptr->a);
+    /// //
+    /// // SAFETY: `foo_a_ptr` is a valid pointer for read, and all accesses on it are atomic, so
+    /// // no data race.
+    /// let a = unsafe { Atomic::from_ptr(foo_a_ptr) }.load(Relaxed);
+    /// # assert_eq!(a, 1);
+    ///
+    /// // smp_store_release(&foo_ptr->a, 2);
+    /// //
+    /// // SAFETY: `foo_a_ptr` is a valid pointer for write, and all accesses on it are atomic, so
+    /// // no data race.
+    /// unsafe { Atomic::from_ptr(foo_a_ptr) }.store(2, Release);
+    /// ```
+    ///
+    /// However, this should only be used when communicating with the C side or manipulating a C
+    /// struct.
+    pub unsafe fn from_ptr<'a>(ptr: *mut T) -> &'a Self
+    where
+        T: Sync,
+    {
+        // CAST: `T` is transparent to `Atomic<T>`.
+        // SAFETY: Per function safety requirement, `ptr` is a valid pointer and the object will
+        // live long enough. It's safe to return a `&Atomic<T>` because function safety requirement
+        // guarantees other accesses won't cause data races.
+ unsafe { &*ptr.cast::() } + } + + /// Returns a pointer to the underlying atomic variable. + /// + /// Extra safety requirement on using the return pointer: the operatio= ns done via the pointer + /// cannot cause data races defined by [`LKMM`]. + /// + /// [`LKMM`]: srctree/tools/memory-model + pub const fn as_ptr(&self) -> *mut T { + self.0.get() + } + + /// Returns a mutable reference to the underlying atomic variable. + /// + /// This is safe because the mutable reference of the atomic variable = guarantees the exclusive + /// access. + pub fn get_mut(&mut self) -> &mut T { + // SAFETY: `self.as_ptr()` is a valid pointer to `T`. `&mut self` = guarantees the exclusive + // access, so it's safe to reborrow mutably. + unsafe { &mut *self.as_ptr() } + } +} + +impl Atomic +where + T::Repr: AtomicHasBasicOps, +{ + /// Loads the value from the atomic variable. + /// + /// # Examples + /// + /// Simple usages: + /// + /// ```rust + /// use kernel::sync::atomic::{Atomic, Relaxed}; + /// + /// let x =3D Atomic::new(42i32); + /// + /// assert_eq!(42, x.load(Relaxed)); + /// + /// let x =3D Atomic::new(42i64); + /// + /// assert_eq!(42, x.load(Relaxed)); + /// ``` + #[doc(alias("atomic_read", "atomic64_read"))] + #[inline(always)] + pub fn load(&self, _: Ordering) -> T { + // CAST: Per the safety requirement of `AllowAtomic`, a valid poin= ter of `T` is also a + // valid pointer of `T::Repr`. + let a =3D self.as_ptr().cast::(); + + // SAFETY: + // - For calling the atomic_read*() function: + // - `a` is a valid pointer for the function per the CAST justif= ication above. + // - Per the type guarantees, the following atomic operation won= 't cause data races. + // - For extra safety requirement of usage on pointers returned by= `self.as_ptr()`: + // - Atomic operations are used here. 
+ let v =3D unsafe { + match Ordering::TYPE { + OrderingType::Relaxed =3D> T::Repr::atomic_read(a), + OrderingType::Acquire =3D> T::Repr::atomic_read_acquire(a), + _ =3D> build_error!("Wrong ordering"), + } + }; + + // SAFETY: The atomic variable holds a valid `T`, so `v` is a vali= d bit pattern of `T`, + // therefore it's safe to call `from_repr()`. + unsafe { from_repr(v) } + } + + /// Stores a value to the atomic variable. + /// + /// # Examples + /// + /// ```rust + /// use kernel::sync::atomic::{Atomic, Relaxed}; + /// + /// let x =3D Atomic::new(42i32); + /// + /// assert_eq!(42, x.load(Relaxed)); + /// + /// x.store(43, Relaxed); + /// + /// assert_eq!(43, x.load(Relaxed)); + /// ``` + #[doc(alias("atomic_set", "atomic64_set"))] + #[inline(always)] + pub fn store(&self, v: T, _: Ordering) { + let v =3D into_repr(v); + // CAST: Per the safety requirement of `AllowAtomic`, a valid poin= ter of `T` is also a + // valid pointer of `T::Repr`. + let a =3D self.as_ptr().cast::(); + + // SAFETY: + // - For calling the atomic_set*() function: + // - `a` is a valid pointer for the function per the CAST justif= ication above. + // - Per the type guarantees, the following atomic operation won= 't cause data races. + // - For extra safety requirement of usage on pointers returned by= `self.as_ptr()`: + // - Atomic operations are used here. + // - For the bit validity of `Atomic`: + // - `v` is a valid bit pattern of `T`, so it's sound to store i= t in an `Atomic`. 
+        unsafe {
+            match Ordering::TYPE {
+                OrderingType::Relaxed => T::Repr::atomic_set(a, v),
+                OrderingType::Release => T::Repr::atomic_set_release(a, v),
+                _ => build_error!("Wrong ordering"),
+            }
+        };
+    }
+}
-- 
2.39.5 (Apple Git-154)
From: Boqun Feng
To: linux-kernel@vger.kernel.org, rust-for-linux@vger.kernel.org, lkmm@lists.linux.dev, linux-arch@vger.kernel.org
Cc: "Miguel Ojeda", "Alex Gaynor", "Boqun Feng", "Gary Guo", "Björn Roy Baron", "Benno Lossin", "Andreas Hindborg", "Alice Ryhl", "Trevor Gross", "Danilo Krummrich", "Will Deacon", "Peter Zijlstra", "Mark Rutland", "Wedson Almeida Filho", "Viresh Kumar", "Lyude Paul", "Ingo Molnar", "Mitchell Levy", "Paul E. McKenney", "Greg Kroah-Hartman", "Linus Torvalds", "Thomas Gleixner", Alan Stern
Subject: [PATCH v6 5/9] rust: sync: atomic: Add atomic {cmp,}xchg operations
Date: Wed, 9 Jul 2025 23:00:48 -0700
Message-Id: <20250710060052.11955-6-boqun.feng@gmail.com>
In-Reply-To: <20250710060052.11955-1-boqun.feng@gmail.com>
References: <20250710060052.11955-1-boqun.feng@gmail.com>

xchg() and cmpxchg() are basic operations on atomics. Provide these based on C APIs.
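As a sketch of the `Result`-based return convention this patch adopts, the stable Rust std `compare_exchange()` API behaves analogously (note this is an analogue only: std's `AtomicI32` takes separate success/failure orderings, while the kernel cmpxchg() described here takes a single LKMM ordering):

```rust
use std::sync::atomic::{AtomicI32, Ordering};

fn main() {
    let x = AtomicI32::new(42);

    // Succeeds: the current value matches `old` (42); returns Ok with the old value.
    assert_eq!(
        x.compare_exchange(42, 52, Ordering::AcqRel, Ordering::Relaxed),
        Ok(42)
    );

    // Fails: the current value is now 52, not 42; returns Err carrying the observed value.
    assert_eq!(
        x.compare_exchange(42, 64, Ordering::AcqRel, Ordering::Relaxed),
        Err(52)
    );

    assert_eq!(x.load(Ordering::Relaxed), 52);
}
```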
Note that cmpxchg() uses a similar function signature to compare_exchange() in the Rust std library: it returns a `Result`, where `Ok(old)` means the operation succeeded and `Err(old)` means the operation failed.

Reviewed-by: Alice Ryhl
Signed-off-by: Boqun Feng
---
 rust/kernel/sync/atomic/generic.rs | 170 +++++++++++++++++++++++++++++
 1 file changed, 170 insertions(+)

diff --git a/rust/kernel/sync/atomic/generic.rs b/rust/kernel/sync/atomic/generic.rs
index e044fe21b128..1beb802843ee 100644
--- a/rust/kernel/sync/atomic/generic.rs
+++ b/rust/kernel/sync/atomic/generic.rs
@@ -287,3 +287,173 @@ pub fn store(&self, v: T, _: Ordering) {
         };
     }
 }
+
+impl<T: AllowAtomic> Atomic<T>
+where
+    T::Repr: AtomicHasXchgOps,
+{
+    /// Atomic exchange.
+    ///
+    /// # Examples
+    ///
+    /// ```rust
+    /// use kernel::sync::atomic::{Atomic, Acquire, Relaxed};
+    ///
+    /// let x = Atomic::new(42);
+    ///
+    /// assert_eq!(42, x.xchg(52, Acquire));
+    /// assert_eq!(52, x.load(Relaxed));
+    /// ```
+    #[doc(alias("atomic_xchg", "atomic64_xchg", "swap"))]
+    #[inline(always)]
+    pub fn xchg(&self, v: T, _: Ordering) -> T {
+        let v = into_repr(v);
+        // CAST: Per the safety requirement of `AllowAtomic`, a valid pointer of `T` is also a
+        // valid pointer of `T::Repr`.
+        let a = self.as_ptr().cast::<T::Repr>();
+
+        // SAFETY:
+        // - For calling the atomic_xchg*() function:
+        //   - `a` is a valid pointer for the function per the CAST justification above.
+        //   - Per the type guarantees, the following atomic operation won't cause data races.
+        // - For the extra safety requirement of usage on pointers returned by `self.as_ptr()`:
+        //   - Atomic operations are used here.
+        // - For the bit validity of `Atomic<T>`:
+        //   - `v` is a valid bit pattern of `T`, so it's sound to store it in an `Atomic<T>`.
+        let ret = unsafe {
+            match Ordering::TYPE {
+                OrderingType::Full => T::Repr::atomic_xchg(a, v),
+                OrderingType::Acquire => T::Repr::atomic_xchg_acquire(a, v),
+                OrderingType::Release => T::Repr::atomic_xchg_release(a, v),
+                OrderingType::Relaxed => T::Repr::atomic_xchg_relaxed(a, v),
+            }
+        };
+
+        // SAFETY: The atomic variable holds a valid `T`, so `ret` is a valid bit pattern of `T`,
+        // therefore it's safe to call `from_repr()`.
+        unsafe { from_repr(ret) }
+    }
+
+    /// Atomic compare and exchange.
+    ///
+    /// Compare: The comparison is done by a byte-level comparison between the value of the
+    /// atomic variable and the `old` value.
+    ///
+    /// Ordering: A successful cmpxchg provides the ordering indicated by the `Ordering` type
+    /// parameter; a failed cmpxchg doesn't provide any ordering, and the read part of a failed
+    /// cmpxchg should be treated as a relaxed read.
+    ///
+    /// Returns `Ok(value)` if the cmpxchg succeeds, in which case `value` is guaranteed to be
+    /// equal to `old`; otherwise returns `Err(value)`, where `value` is the value of the atomic
+    /// variable at the time the cmpxchg happened.
+    ///
+    /// # Examples
+    ///
+    /// ```rust
+    /// use kernel::sync::atomic::{Atomic, Full, Relaxed};
+    ///
+    /// let x = Atomic::new(42);
+    ///
+    /// // Checks whether the cmpxchg succeeded.
+    /// let success = x.cmpxchg(52, 64, Relaxed).is_ok();
+    /// # assert!(!success);
+    ///
+    /// // Checks whether the cmpxchg failed.
+    /// let failure = x.cmpxchg(52, 64, Relaxed).is_err();
+    /// # assert!(failure);
+    ///
+    /// // Uses the old value if failed, possibly to retry the cmpxchg.
+    /// match x.cmpxchg(52, 64, Relaxed) {
+    ///     Ok(_) => { },
+    ///     Err(old) => {
+    ///         // do something with `old`.
+    ///         # assert_eq!(old, 42);
+    ///     }
+    /// }
+    ///
+    /// // Uses the latest value regardless of success, same as atomic_cmpxchg() in C.
+    /// let latest = x.cmpxchg(42, 64, Full).unwrap_or_else(|old| old);
+    /// # assert_eq!(42, latest);
+    /// assert_eq!(64, x.load(Relaxed));
+    /// ```
+    #[doc(alias(
+        "atomic_cmpxchg",
+        "atomic64_cmpxchg",
+        "atomic_try_cmpxchg",
+        "atomic64_try_cmpxchg",
+        "compare_exchange"
+    ))]
+    #[inline(always)]
+    pub fn cmpxchg(&self, mut old: T, new: T, o: Ordering) -> Result<T, T> {
+        // Note on code generation:
+        //
+        // try_cmpxchg() is used to implement cmpxchg(), and if the helper functions are inlined,
+        // the compiler is able to figure out that the branch is not needed if the users don't
+        // care about whether the operation succeeds or not. One exception is on x86: due to
+        // commit 44fe84459faf ("locking/atomic: Fix atomic_try_cmpxchg() semantics"), the
+        // atomic_try_cmpxchg() on x86 has a branch even if the caller doesn't care about the
+        // success of cmpxchg and only wants to use the old value. For example, for code like:
+        //
+        //     let latest = x.cmpxchg(42, 64, Full).unwrap_or_else(|old| old);
+        //
+        // it will still generate code:
+        //
+        //     movl     $0x40, %ecx
+        //     movl     $0x34, %eax
+        //     lock
+        //     cmpxchgl %ecx, 0x4(%rsp)
+        //     jne      1f
+        //     2:
+        //     ...
+        //     1: movl %eax, %ecx
+        //        jmp 2b
+        //
+        // This might be "fixed" by introducing a try_cmpxchg_exclusive() that knows the "*old"
+        // location in the C function is always safe to write.
+        if self.try_cmpxchg(&mut old, new, o) {
+            Ok(old)
+        } else {
+            Err(old)
+        }
+    }
+
+    /// Atomic compare and exchange, returning whether the operation succeeded.
+    ///
+    /// The "Compare" and "Ordering" parts are the same as for [`Atomic::cmpxchg()`].
+    ///
+    /// Returns `true` if the cmpxchg succeeds; otherwise returns `false`, with `old` updated to
+    /// the value of the atomic variable at the time the cmpxchg happened.
+ #[inline(always)] + fn try_cmpxchg(&self, old: &mut T, new: T, _: Ordering)= -> bool { + let mut old_tmp =3D into_repr(*old); + let oldp =3D &raw mut old_tmp; + let new =3D into_repr(new); + // CAST: Per the safety requirement of `AllowAtomic`, a valid poin= ter of `T` is also a + // valid pointer of `T::Repr`. + let a =3D self.0.get().cast::(); + + // SAFETY: + // - For calling the atomic_try_cmpxchg*() function: + // - `a` is a valid pointer for the function per the CAST justif= ication above. + // - `oldp` is a valid pointer for the function. + // - Per the type guarantees, the following atomic operation won= 't cause data races. + // - For extra safety requirement of usage on pointers returned by= `self.as_ptr()`: + // - Atomic operations are used here. + // - For the bit validity of `Atomic`: + // - `new` is a valid bit pattern of `T`, so it's sound to store= it in an `Atomic`. + let ret =3D unsafe { + match Ordering::TYPE { + OrderingType::Full =3D> T::Repr::atomic_try_cmpxchg(a, old= p, new), + OrderingType::Acquire =3D> T::Repr::atomic_try_cmpxchg_acq= uire(a, oldp, new), + OrderingType::Release =3D> T::Repr::atomic_try_cmpxchg_rel= ease(a, oldp, new), + OrderingType::Relaxed =3D> T::Repr::atomic_try_cmpxchg_rel= axed(a, oldp, new), + } + }; + + // SAFETY: The atomic variable holds a valid `T`, so `old_tmp` is = a valid bit pattern of + // `T`, therefore it's safe to call `from_repr()`. 
+        *old = unsafe { from_repr(old_tmp) };
+
+        ret
+    }
+}
-- 
2.39.5 (Apple Git-154)
From: Boqun Feng
To: linux-kernel@vger.kernel.org, rust-for-linux@vger.kernel.org, lkmm@lists.linux.dev, linux-arch@vger.kernel.org
Subject: [PATCH v6 6/9] rust: sync: atomic: Add the framework of arithmetic operations
Date: Wed, 9 Jul 2025 23:00:49 -0700
Message-Id: <20250710060052.11955-7-boqun.feng@gmail.com>
In-Reply-To: <20250710060052.11955-1-boqun.feng@gmail.com>
References: <20250710060052.11955-1-boqun.feng@gmail.com>

One important set of atomic operations is the arithmetic operations, i.e. add(), sub(), fetch_add(), add_return(), etc.
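For readers mapping the C naming onto Rust: fetch_add() returns the value *before* the addition in both the kernel API and Rust std, while C's atomic_add_return() corresponds to the new value. A minimal sketch of that distinction, using std's `AtomicI32` as a stand-in (the kernel `Atomic<T>` type from this series is not usable outside the kernel crate):

```rust
use std::sync::atomic::{AtomicI32, Ordering};

fn main() {
    let x = AtomicI32::new(42);

    // fetch_add() returns the old value, like the kernel's fetch_add().
    let old = x.fetch_add(12, Ordering::Relaxed);
    assert_eq!(old, 42);

    // C's atomic_add_return() semantics correspond to old value + delta.
    assert_eq!(old + 12, x.load(Ordering::Relaxed));
}
```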
However, it may not make sense for all types that implement `AllowAtomic` to have arithmetic operations; for example, a `Foo(u32)` may not have a reasonable add() or sub(). Moreover, subword types (`u8` and `u16`) currently don't have atomic arithmetic operations even on the C side, and might not get them in Rust in the future either (because they are usually suboptimal on a few architectures). Therefore add a subtrait of `AllowAtomic` describing which types have and can do atomic arithmetic operations.

Trait `AllowAtomicArithmetic` has an associated type `Delta` instead of using `AllowAtomic::Repr` because a `Bar(u32)` (whose `Repr` is `i32`) may not want an `add(&self, i32)`, but rather an `add(&self, u32)`.

Only add() and fetch_add() are added. The rest will be added in the future.

Reviewed-by: Alice Ryhl
Signed-off-by: Boqun Feng
---
 rust/kernel/sync/atomic.rs         |  18 +++++
 rust/kernel/sync/atomic/generic.rs | 108 +++++++++++++++++++++++++++++
 2 files changed, 126 insertions(+)

diff --git a/rust/kernel/sync/atomic.rs b/rust/kernel/sync/atomic.rs
index c5193c1c90fe..26f66cccd4e0 100644
--- a/rust/kernel/sync/atomic.rs
+++ b/rust/kernel/sync/atomic.rs
@@ -29,8 +29,26 @@ unsafe impl generic::AllowAtomic for i32 {
     type Repr = i32;
 }
 
+// SAFETY: `i32` is always sound to transmute back to itself.
+unsafe impl generic::AllowAtomicArithmetic for i32 {
+    type Delta = i32;
+
+    fn delta_into_repr(d: Self::Delta) -> Self::Repr {
+        d
+    }
+}
+
 // SAFETY: `i64` has the same size and alignment with itself, and is round-trip transmutable to
 // itself.
 unsafe impl generic::AllowAtomic for i64 {
     type Repr = i64;
 }
+
+// SAFETY: `i64` is always sound to transmute back to itself.
+unsafe impl generic::AllowAtomicArithmetic for i64 { + type Delta =3D i64; + + fn delta_into_repr(d: Self::Delta) -> Self::Repr { + d + } +} diff --git a/rust/kernel/sync/atomic/generic.rs b/rust/kernel/sync/atomic/g= eneric.rs index 1beb802843ee..412a2c811c3d 100644 --- a/rust/kernel/sync/atomic/generic.rs +++ b/rust/kernel/sync/atomic/generic.rs @@ -111,6 +111,20 @@ const fn into_repr(v: T) -> T::Repr { unsafe { core::mem::transmute_copy(&r) } } =20 +/// Atomics that allows arithmetic operations with an integer type. +/// +/// # Safety +/// +/// Implementers must guarantee [`Self::Repr`] can always soundly [`transm= ute()`] to [`Self`] after +/// arithmetic operations. +pub unsafe trait AllowAtomicArithmetic: AllowAtomic { + /// The delta types for arithmetic operations. + type Delta; + + /// Converts [`Self::Delta`] into the representation of the atomic typ= e. + fn delta_into_repr(d: Self::Delta) -> Self::Repr; +} + impl Atomic { /// Creates a new atomic. pub const fn new(v: T) -> Self { @@ -457,3 +471,97 @@ fn try_cmpxchg(&self, old: &mut T, new:= T, _: Ordering) -> bool { ret } } + +impl Atomic +where + T::Repr: AtomicHasArithmeticOps, +{ + /// Atomic add. + /// + /// The addition is a wrapping addition. + /// + /// # Examples + /// + /// ```rust + /// use kernel::sync::atomic::{Atomic, Relaxed}; + /// + /// let x =3D Atomic::new(42); + /// + /// assert_eq!(42, x.load(Relaxed)); + /// + /// x.add(12, Relaxed); + /// + /// assert_eq!(54, x.load(Relaxed)); + /// ``` + #[inline(always)] + pub fn add(&self, v: T::Delta, _: Ordering) { + let v =3D T::delta_into_repr(v); + // CAST: Per the safety requirement of `AllowAtomic`, a valid poin= ter of `T` is also a + // valid pointer of `T::Repr`. + let a =3D self.as_ptr().cast::(); + + // SAFETY: + // - For calling the atomic_add() function: + // - `a` is a valid pointer for the function per the CAST justif= ication above. + // - Per the type guarantees, the following atomic operation won= 't cause data races. 
+        // - For the extra safety requirement of usage on pointers returned by `self.as_ptr()`:
+        //   - Atomic operations are used here.
+        // - For the bit validity of `Atomic<T>`:
+        //   - `T: AllowAtomicArithmetic` guarantees the arithmetic operation result is sound to
+        //     be stored in an `Atomic<T>`.
+        unsafe {
+            T::Repr::atomic_add(a, v);
+        }
+    }
+
+    /// Atomic fetch and add.
+    ///
+    /// The addition is a wrapping addition.
+    ///
+    /// # Examples
+    ///
+    /// ```rust
+    /// use kernel::sync::atomic::{Atomic, Acquire, Full, Relaxed};
+    ///
+    /// let x = Atomic::new(42);
+    ///
+    /// assert_eq!(42, x.load(Relaxed));
+    ///
+    /// assert_eq!(54, { x.fetch_add(12, Acquire); x.load(Relaxed) });
+    ///
+    /// let x = Atomic::new(42);
+    ///
+    /// assert_eq!(42, x.load(Relaxed));
+    ///
+    /// assert_eq!(54, { x.fetch_add(12, Full); x.load(Relaxed) });
+    /// ```
+    #[inline(always)]
+    pub fn fetch_add(&self, v: T::Delta, _: Ordering) -> T {
+        let v = T::delta_into_repr(v);
+        // CAST: Per the safety requirement of `AllowAtomic`, a valid pointer of `T` is also a
+        // valid pointer of `T::Repr`.
+        let a = self.as_ptr().cast::<T::Repr>();
+
+        // SAFETY:
+        // - For calling the atomic_fetch_add*() functions:
+        //   - `a` is a valid pointer for the functions per the CAST justification above.
+        //   - Per the type guarantees, the following atomic operation won't cause data races.
+        // - For the extra safety requirement of usage on pointers returned by `self.as_ptr()`:
+        //   - Atomic operations are used here.
+        // - For the bit validity of `Atomic<T>`:
+        //   - `T: AllowAtomicArithmetic` guarantees the arithmetic operation result is sound to
+        //     be stored in an `Atomic<T>`.
+        let ret = unsafe {
+            match Ordering::TYPE {
+                OrderingType::Full => T::Repr::atomic_fetch_add(a, v),
+                OrderingType::Acquire => T::Repr::atomic_fetch_add_acquire(a, v),
+                OrderingType::Release => T::Repr::atomic_fetch_add_release(a, v),
+                OrderingType::Relaxed => T::Repr::atomic_fetch_add_relaxed(a, v),
+            }
+        };
+
+        // SAFETY: Per the safety requirement of `AllowAtomicArithmetic`, `ret` is a valid bit
+        // pattern of `T`.
+        unsafe { from_repr(ret) }
+    }
+}
--
2.39.5 (Apple Git-154)

From nobody Tue Oct 7 11:45:07 2025
From: Boqun Feng
To: linux-kernel@vger.kernel.org, rust-for-linux@vger.kernel.org, lkmm@lists.linux.dev, linux-arch@vger.kernel.org
Cc: "Miguel Ojeda", "Alex Gaynor", "Boqun Feng", "Gary Guo", "Björn Roy Baron", "Benno Lossin", "Andreas Hindborg", "Alice Ryhl", "Trevor Gross", "Danilo Krummrich", "Will Deacon", "Peter Zijlstra", "Mark Rutland", "Wedson Almeida Filho", "Viresh Kumar", "Lyude Paul", "Ingo Molnar", "Mitchell Levy", "Paul E.
McKenney", "Greg Kroah-Hartman", "Linus Torvalds", "Thomas Gleixner", Alan Stern
Subject: [PATCH v6 7/9] rust: sync: atomic: Add Atomic<u32> and Atomic<u64>
Date: Wed, 9 Jul 2025 23:00:50 -0700
Message-Id: <20250710060052.11955-8-boqun.feng@gmail.com>
In-Reply-To: <20250710060052.11955-1-boqun.feng@gmail.com>
References: <20250710060052.11955-1-boqun.feng@gmail.com>

Add generic atomic support for basic unsigned types that have an
`AtomicImpl` with the same size and alignment.

Unit tests are added, including Atomic<u32> and Atomic<u64>.

Reviewed-by: Alice Ryhl
Reviewed-by: Andreas Hindborg
Signed-off-by: Boqun Feng
Reviewed-by: Benno Lossin
---
 rust/kernel/sync/atomic.rs | 99 ++++++++++++++++++++++++++++++++++++++
 1 file changed, 99 insertions(+)

diff --git a/rust/kernel/sync/atomic.rs b/rust/kernel/sync/atomic.rs
index 26f66cccd4e0..e676bc7d9275 100644
--- a/rust/kernel/sync/atomic.rs
+++ b/rust/kernel/sync/atomic.rs
@@ -52,3 +52,102 @@ fn delta_into_repr(d: Self::Delta) -> Self::Repr {
         d
     }
 }
+
+// SAFETY: `u32` and `i32` have the same size and alignment, and `u32` is round-trip transmutable
+// to `i32`.
+unsafe impl generic::AllowAtomic for u32 {
+    type Repr = i32;
+}
+
+// SAFETY: `i32` is always sound to transmute back to `u32`.
+unsafe impl generic::AllowAtomicArithmetic for u32 {
+    type Delta = u32;
+
+    fn delta_into_repr(d: Self::Delta) -> Self::Repr {
+        d as Self::Repr
+    }
+}
+
+// SAFETY: `u64` and `i64` have the same size and alignment, and `u64` is round-trip transmutable
+// to `i64`.
+unsafe impl generic::AllowAtomic for u64 {
+    type Repr = i64;
+}
+
+// SAFETY: `i64` is always sound to transmute back to `u64`.
+unsafe impl generic::AllowAtomicArithmetic for u64 {
+    type Delta = u64;
+
+    fn delta_into_repr(d: Self::Delta) -> Self::Repr {
+        d as Self::Repr
+    }
+}
+
+use crate::macros::kunit_tests;
+
+#[kunit_tests(rust_atomics)]
+mod tests {
+    use super::*;
+
+    // Call $fn($val) with each $type of $val.
+    macro_rules! for_each_type {
+        ($val:literal in [$($type:ty),*] $fn:expr) => {
+            $({
+                let v: $type = $val;
+
+                $fn(v);
+            })*
+        }
+    }
+
+    #[test]
+    fn atomic_basic_tests() {
+        for_each_type!(42 in [i32, i64, u32, u64] |v| {
+            let x = Atomic::new(v);
+
+            assert_eq!(v, x.load(Relaxed));
+        });
+    }
+
+    #[test]
+    fn atomic_xchg_tests() {
+        for_each_type!(42 in [i32, i64, u32, u64] |v| {
+            let x = Atomic::new(v);
+
+            let old = v;
+            let new = v + 1;
+
+            assert_eq!(old, x.xchg(new, Full));
+            assert_eq!(new, x.load(Relaxed));
+        });
+    }
+
+    #[test]
+    fn atomic_cmpxchg_tests() {
+        for_each_type!(42 in [i32, i64, u32, u64] |v| {
+            let x = Atomic::new(v);
+
+            let old = v;
+            let new = v + 1;
+
+            assert_eq!(Err(old), x.cmpxchg(new, new, Full));
+            assert_eq!(old, x.load(Relaxed));
+            assert_eq!(Ok(old), x.cmpxchg(old, new, Relaxed));
+            assert_eq!(new, x.load(Relaxed));
+        });
+    }
+
+    #[test]
+    fn atomic_arithmetic_tests() {
+        for_each_type!(42 in [i32, i64, u32, u64] |v| {
+            let x = Atomic::new(v);
+
+            assert_eq!(v, x.fetch_add(12, Full));
+            assert_eq!(v + 12, x.load(Relaxed));
+
+            x.add(13, Relaxed);
+
+            assert_eq!(v + 25, x.load(Relaxed));
+        });
+    }
+}
--
2.39.5 (Apple Git-154)

From nobody Tue Oct 7 11:45:07 2025
From: Boqun Feng
To:
linux-kernel@vger.kernel.org, rust-for-linux@vger.kernel.org, lkmm@lists.linux.dev, linux-arch@vger.kernel.org
Cc: "Miguel Ojeda", "Alex Gaynor", "Boqun Feng", "Gary Guo", "Björn Roy Baron", "Benno Lossin", "Andreas Hindborg", "Alice Ryhl", "Trevor Gross", "Danilo Krummrich", "Will Deacon", "Peter Zijlstra", "Mark Rutland", "Wedson Almeida Filho", "Viresh Kumar", "Lyude Paul", "Ingo Molnar", "Mitchell Levy", "Paul E. McKenney", "Greg Kroah-Hartman", "Linus Torvalds", "Thomas Gleixner", Alan Stern
Subject: [PATCH v6 8/9] rust: sync: Add memory barriers
Date: Wed, 9 Jul 2025 23:00:51 -0700
Message-Id: <20250710060052.11955-9-boqun.feng@gmail.com>
In-Reply-To: <20250710060052.11955-1-boqun.feng@gmail.com>
References: <20250710060052.11955-1-boqun.feng@gmail.com>

Memory barriers are building blocks for concurrent code, hence provide
a minimal set of them.

The compiler barrier, barrier(), is implemented in inline asm instead
of using core::sync::atomic::compiler_fence() because the memory models
are different: the kernel's atomics are implemented in inline asm,
therefore the compiler barrier should be implemented in inline asm as
well.

Also, the barriers are currently only public to the kernel crate until
there is reasonable driver usage.
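[Editor's note: the pairing these barriers are designed for can be sketched in userspace Rust, with `std::sync::atomic` fences standing in for smp_wmb()/smp_rmb(). This is only an analogy under the Rust memory model (a Release/Acquire fence is strictly stronger than a pure write/read barrier), not the kernel API from this patch.]

```rust
use std::sync::atomic::{fence, AtomicBool, AtomicUsize, Ordering};
use std::thread;

// One round of message passing: the producer publishes DATA and then
// sets READY; the consumer spins on READY and then reads DATA.
fn message_pass() -> usize {
    static DATA: AtomicUsize = AtomicUsize::new(0);
    static READY: AtomicBool = AtomicBool::new(false);

    let producer = thread::spawn(|| {
        DATA.store(42, Ordering::Relaxed);
        // Stand-in for smp_wmb(): orders the data store before the flag store.
        fence(Ordering::Release);
        READY.store(true, Ordering::Relaxed);
    });

    while !READY.load(Ordering::Relaxed) {
        std::hint::spin_loop();
    }
    // Stand-in for smp_rmb(): orders the flag load before the data load.
    fence(Ordering::Acquire);
    let observed = DATA.load(Ordering::Relaxed);
    producer.join().unwrap();
    observed
}

fn main() {
    // The fences guarantee the consumer observes the published value.
    assert_eq!(message_pass(), 42);
}
```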
Reviewed-by: Alice Ryhl
Signed-off-by: Boqun Feng
---
 rust/helpers/barrier.c      | 18 ++++++++++
 rust/helpers/helpers.c      |  1 +
 rust/kernel/sync.rs         |  1 +
 rust/kernel/sync/barrier.rs | 65 +++++++++++++++++++++++++++++++++++++
 4 files changed, 85 insertions(+)
 create mode 100644 rust/helpers/barrier.c
 create mode 100644 rust/kernel/sync/barrier.rs

diff --git a/rust/helpers/barrier.c b/rust/helpers/barrier.c
new file mode 100644
index 000000000000..cdf28ce8e511
--- /dev/null
+++ b/rust/helpers/barrier.c
@@ -0,0 +1,18 @@
+// SPDX-License-Identifier: GPL-2.0
+
+#include <asm/barrier.h>
+
+void rust_helper_smp_mb(void)
+{
+	smp_mb();
+}
+
+void rust_helper_smp_wmb(void)
+{
+	smp_wmb();
+}
+
+void rust_helper_smp_rmb(void)
+{
+	smp_rmb();
+}
diff --git a/rust/helpers/helpers.c b/rust/helpers/helpers.c
index 83e89f6a68fb..8ddfc8f84e87 100644
--- a/rust/helpers/helpers.c
+++ b/rust/helpers/helpers.c
@@ -9,6 +9,7 @@
 
 #include "atomic.c"
 #include "auxiliary.c"
+#include "barrier.c"
 #include "blk.c"
 #include "bug.c"
 #include "build_assert.c"
diff --git a/rust/kernel/sync.rs b/rust/kernel/sync.rs
index b620027e0641..c7c0e552bafe 100644
--- a/rust/kernel/sync.rs
+++ b/rust/kernel/sync.rs
@@ -11,6 +11,7 @@
 
 mod arc;
 pub mod atomic;
+pub mod barrier;
 mod condvar;
 pub mod lock;
 mod locked_by;
diff --git a/rust/kernel/sync/barrier.rs b/rust/kernel/sync/barrier.rs
new file mode 100644
index 000000000000..df4015221503
--- /dev/null
+++ b/rust/kernel/sync/barrier.rs
@@ -0,0 +1,65 @@
+// SPDX-License-Identifier: GPL-2.0
+
+//! Memory barriers.
+//!
+//! These primitives have the same semantics as their C counterparts, and the precise definitions
+//! of the semantics can be found at [`LKMM`].
+//!
+//! [`LKMM`]: srctree/tools/memory-model/
+
+/// A compiler barrier.
+///
+/// A barrier that prevents the compiler from reordering memory accesses across the barrier.
+pub(crate) fn barrier() {
+    // By default, Rust inline asms are treated as being able to access any memory or flags, hence
+    // it suffices as a compiler barrier.
+    //
+    // SAFETY: An empty asm block should be safe.
+    unsafe {
+        core::arch::asm!("");
+    }
+}
+
+/// A full memory barrier.
+///
+/// A barrier that prevents compiler and CPU from reordering memory accesses across the barrier.
+pub fn smp_mb() {
+    if cfg!(CONFIG_SMP) {
+        // SAFETY: `smp_mb()` is safe to call.
+        unsafe {
+            bindings::smp_mb();
+        }
+    } else {
+        barrier();
+    }
+}
+
+/// A write-write memory barrier.
+///
+/// A barrier that prevents compiler and CPU from reordering memory write accesses across the
+/// barrier.
+pub fn smp_wmb() {
+    if cfg!(CONFIG_SMP) {
+        // SAFETY: `smp_wmb()` is safe to call.
+        unsafe {
+            bindings::smp_wmb();
+        }
+    } else {
+        barrier();
+    }
+}
+
+/// A read-read memory barrier.
+///
+/// A barrier that prevents compiler and CPU from reordering memory read accesses across the
+/// barrier.
+pub fn smp_rmb() {
+    if cfg!(CONFIG_SMP) {
+        // SAFETY: `smp_rmb()` is safe to call.
+        unsafe {
+            bindings::smp_rmb();
+        }
+    } else {
+        barrier();
+    }
+}
--
2.39.5 (Apple Git-154)

From nobody Tue Oct 7 11:45:07 2025
From: Boqun Feng
To: linux-kernel@vger.kernel.org, rust-for-linux@vger.kernel.org, lkmm@lists.linux.dev, linux-arch@vger.kernel.org
Cc: "Miguel Ojeda", "Alex Gaynor", "Boqun Feng", "Gary Guo", "Björn Roy Baron", "Benno Lossin", "Andreas Hindborg", "Alice Ryhl", "Trevor Gross", "Danilo Krummrich", "Will Deacon", "Peter Zijlstra", "Mark Rutland", "Wedson Almeida Filho", "Viresh Kumar", "Lyude Paul", "Ingo Molnar", "Mitchell Levy", "Paul E. McKenney", "Greg Kroah-Hartman", "Linus Torvalds", "Thomas Gleixner", Alan Stern
Subject: [PATCH v6 9/9] rust: sync: atomic: Add Atomic<{usize,isize}>
Date: Wed, 9 Jul 2025 23:00:52 -0700
Message-Id: <20250710060052.11955-10-boqun.feng@gmail.com>
In-Reply-To: <20250710060052.11955-1-boqun.feng@gmail.com>
References: <20250710060052.11955-1-boqun.feng@gmail.com>

Add generic atomic support for `usize` and `isize`. Note that instead
of mapping directly to `atomic_long_t`, the representation type
(`AllowAtomic::Repr`) is selected based on CONFIG_64BIT. This reduces
the need to create `atomic_long_*` helpers, which could reduce the
kernel's binary size if inline helpers are not available.
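[Editor's note: the CONFIG_64BIT-driven choice of `Repr` can be mimicked in plain userspace Rust with `target_pointer_width`. `PtrRepr` and the free function `delta_into_repr` below are hypothetical stand-ins for `AllowAtomic::Repr` and `AllowAtomicArithmetic::delta_into_repr`, not kernel code.]

```rust
// Pick a signed representation type with the same width as the
// pointer-sized integers; `target_pointer_width` plays the role that
// CONFIG_64BIT plays in the patch.
#[cfg(target_pointer_width = "32")]
type PtrRepr = i32;
#[cfg(target_pointer_width = "64")]
type PtrRepr = i64;

fn delta_into_repr(d: usize) -> PtrRepr {
    // A same-width `as` cast is a bit-level reinterpretation, which is
    // what makes the round-trip transmute in the SAFETY comments sound.
    d as PtrRepr
}

fn main() {
    // The selected repr always matches the pointer-sized type in width.
    assert_eq!(core::mem::size_of::<PtrRepr>(), core::mem::size_of::<usize>());
    assert_eq!(delta_into_repr(42), 42);
    // Even a value with the sign bit set round-trips bit-for-bit.
    assert_eq!(delta_into_repr(usize::MAX) as usize, usize::MAX);
}
```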
Reviewed-by: Alice Ryhl
Reviewed-by: Andreas Hindborg
Signed-off-by: Boqun Feng
Reviewed-by: Benno Lossin
---
 rust/kernel/sync/atomic.rs | 48 ++++++++++++++++++++++++++++++++++----
 1 file changed, 44 insertions(+), 4 deletions(-)

diff --git a/rust/kernel/sync/atomic.rs b/rust/kernel/sync/atomic.rs
index e676bc7d9275..e1e40757d7b5 100644
--- a/rust/kernel/sync/atomic.rs
+++ b/rust/kernel/sync/atomic.rs
@@ -53,6 +53,26 @@ fn delta_into_repr(d: Self::Delta) -> Self::Repr {
     }
 }
 
+// SAFETY: For a 32bit kernel, `isize` has the same size and alignment as `i32` and is round-trip
+// transmutable to it; for a 64bit kernel, `isize` has the same size and alignment as `i64` and is
+// round-trip transmutable to it.
+unsafe impl generic::AllowAtomic for isize {
+    #[cfg(not(CONFIG_64BIT))]
+    type Repr = i32;
+    #[cfg(CONFIG_64BIT)]
+    type Repr = i64;
+}
+
+// SAFETY: `isize` is always sound to transmute back from `i32` or `i64` when their sizes are the
+// same.
+unsafe impl generic::AllowAtomicArithmetic for isize {
+    type Delta = Self;
+
+    fn delta_into_repr(d: Self::Delta) -> Self::Repr {
+        d as Self::Repr
+    }
+}
+
 // SAFETY: `u32` and `i32` have the same size and alignment, and `u32` is round-trip transmutable
 // to `i32`.
 unsafe impl generic::AllowAtomic for u32 {
@@ -83,6 +103,26 @@ fn delta_into_repr(d: Self::Delta) -> Self::Repr {
     }
 }
 
+// SAFETY: For a 32bit kernel, `usize` has the same size and alignment as `i32` and is round-trip
+// transmutable to it; for a 64bit kernel, `usize` has the same size and alignment as `i64` and is
+// round-trip transmutable to it.
+unsafe impl generic::AllowAtomic for usize {
+    #[cfg(not(CONFIG_64BIT))]
+    type Repr = i32;
+    #[cfg(CONFIG_64BIT)]
+    type Repr = i64;
+}
+
+// SAFETY: `usize` is always sound to transmute back from `i32` or `i64` when their sizes are the
+// same.
+unsafe impl generic::AllowAtomicArithmetic for usize {
+    type Delta = Self;
+
+    fn delta_into_repr(d: Self::Delta) -> Self::Repr {
+        d as Self::Repr
+    }
+}
+
 use crate::macros::kunit_tests;
 
 #[kunit_tests(rust_atomics)]
@@ -102,7 +142,7 @@ macro_rules! for_each_type {
 
     #[test]
     fn atomic_basic_tests() {
-        for_each_type!(42 in [i32, i64, u32, u64] |v| {
+        for_each_type!(42 in [i32, i64, u32, u64, isize, usize] |v| {
             let x = Atomic::new(v);
 
             assert_eq!(v, x.load(Relaxed));
@@ -111,7 +151,7 @@ fn atomic_basic_tests() {
 
     #[test]
     fn atomic_xchg_tests() {
-        for_each_type!(42 in [i32, i64, u32, u64] |v| {
+        for_each_type!(42 in [i32, i64, u32, u64, isize, usize] |v| {
             let x = Atomic::new(v);
 
             let old = v;
@@ -124,7 +164,7 @@ fn atomic_xchg_tests() {
 
     #[test]
     fn atomic_cmpxchg_tests() {
-        for_each_type!(42 in [i32, i64, u32, u64] |v| {
+        for_each_type!(42 in [i32, i64, u32, u64, isize, usize] |v| {
             let x = Atomic::new(v);
 
             let old = v;
@@ -139,7 +179,7 @@ fn atomic_cmpxchg_tests() {
 
     #[test]
     fn atomic_arithmetic_tests() {
-        for_each_type!(42 in [i32, i64, u32, u64] |v| {
+        for_each_type!(42 in [i32, i64, u32, u64, isize, usize] |v| {
             let x = Atomic::new(v);
 
             assert_eq!(v, x.fetch_add(12, Full));
--
2.39.5 (Apple Git-154)
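[Editor's note: the wrapping semantics documented for add()/fetch_add() match what std's atomics do in userspace, which makes for a quick sanity check. This uses `AtomicUsize` as an analogy, not the kernel's `Atomic<usize>` API.]

```rust
use std::sync::atomic::{AtomicUsize, Ordering};

// Returns (old value returned by fetch_add, value after the add).
fn fetch_add_demo(start: usize, delta: usize) -> (usize, usize) {
    let x = AtomicUsize::new(start);
    let old = x.fetch_add(delta, Ordering::SeqCst);
    (old, x.load(Ordering::Relaxed))
}

fn main() {
    // fetch_add returns the old value, like the patch's fetch_add.
    assert_eq!(fetch_add_demo(42, 12), (42, 54));
    // The addition wraps on overflow rather than panicking.
    assert_eq!(fetch_add_demo(usize::MAX, 1), (usize::MAX, 0));
}
```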