From: Boqun Feng <boqun.feng@gmail.com>
To: rust-for-linux@vger.kernel.org, linux-kernel@vger.kernel.org, rcu@vger.kernel.org
Cc: Miguel Ojeda, Boqun Feng, Gary Guo, Björn Roy Baron, Benno Lossin,
    Andreas Hindborg, Alice Ryhl, Trevor Gross, Danilo Krummrich,
    Will Deacon, Peter Zijlstra, Mark Rutland,
    "Paul E. McKenney", Frederic Weisbecker, Neeraj Upadhyay, Joel Fernandes,
    Josh Triplett, Uladzislau Rezki, Steven Rostedt, Mathieu Desnoyers,
    Lai Jiangshan, Zqiang, FUJITA Tomonori, Dirk Behme
Subject: [PATCH v2 1/2] rust: sync: atomic: Clarify the need of CONFIG_ARCH_SUPPORTS_ATOMIC_RMW
Date: Tue, 20 Jan 2026 22:05:02 +0800
Message-ID: <20260120140503.62804-2-boqun.feng@gmail.com>
X-Mailer: git-send-email 2.51.0
In-Reply-To: <20260120140503.62804-1-boqun.feng@gmail.com>
References: <20260120140503.62804-1-boqun.feng@gmail.com>

Currently, all the architectures that support Rust have
CONFIG_ARCH_SUPPORTS_ATOMIC_RMW selected, so the helpers for atomic
load/store on i8 and i16 rely on CONFIG_ARCH_SUPPORTS_ATOMIC_RMW=y. This
is generally fine since most architectures support it.

The plan for CONFIG_ARCH_SUPPORTS_ATOMIC_RMW=n architectures is to add
their own (probably lock-based) atomic load/store for i8 and i16, as the
counterparts of their atomic_{read,set}() and atomic64_{read,set}(), when
they plan to support Rust.

Hence use a static_assert!() to check this and remind our future selves
of the need for these helpers. This is clearer than the #[cfg] on the
impl blocks of i8 and i16.

Suggested-by: Dirk Behme
Suggested-by: Benno Lossin
Signed-off-by: Boqun Feng <boqun.feng@gmail.com>
Reviewed-by: Gary Guo
---
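
As a rough illustration of the plan mentioned above for
CONFIG_ARCH_SUPPORTS_ATOMIC_RMW=n architectures: the fallback load/store has
to take the same lock that the read-modify-write path takes, so that plain
loads and stores stay atomic with respect to RmW operations. Below is a
minimal standalone sketch of that idea in plain Rust, with std::sync::Mutex
standing in for the kernel's per-architecture lock and C helpers; the
LockedAtomicU8 type and its method names are made up for illustration and are
not part of this patch.

use std::sync::Mutex;

/// A sub-word "atomic" emulated with a lock: every access, including plain
/// load/store, goes through the same mutex, so loads and stores are atomic
/// with respect to read-modify-write operations.
struct LockedAtomicU8(Mutex<u8>);

impl LockedAtomicU8 {
    fn new(v: u8) -> Self {
        Self(Mutex::new(v))
    }

    fn load(&self) -> u8 {
        *self.0.lock().unwrap()
    }

    fn store(&self, v: u8) {
        *self.0.lock().unwrap() = v;
    }

    fn fetch_add(&self, d: u8) -> u8 {
        // The RmW operation holds the lock across both the read and the write.
        let mut guard = self.0.lock().unwrap();
        let old = *guard;
        *guard = old.wrapping_add(d);
        old
    }
}

fn main() {
    let a = LockedAtomicU8::new(1);
    assert_eq!(a.fetch_add(2), 1);
    a.store(7);
    assert_eq!(a.load(), 7);
}
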
 rust/kernel/sync/atomic/internal.rs | 19 +++++++++++++------
 1 file changed, 13 insertions(+), 6 deletions(-)

diff --git a/rust/kernel/sync/atomic/internal.rs b/rust/kernel/sync/atomic/internal.rs
index 0dac58bca2b3..ef516bcb02ee 100644
--- a/rust/kernel/sync/atomic/internal.rs
+++ b/rust/kernel/sync/atomic/internal.rs
@@ -37,16 +37,23 @@ pub trait AtomicImpl: Sized + Send + Copy + private::Sealed {
     type Delta;
 }
 
-// The current helpers of load/store uses `{WRITE,READ}_ONCE()` hence the atomicity is only
-// guaranteed against read-modify-write operations if the architecture supports native atomic RmW.
-#[cfg(CONFIG_ARCH_SUPPORTS_ATOMIC_RMW)]
+// The current helpers of load/store of atomic `i8` and `i16` use `{WRITE,READ}_ONCE()` hence the
+// atomicity is only guaranteed against read-modify-write operations if the architecture supports
+// native atomic RmW.
+//
+// In the future when a CONFIG_ARCH_SUPPORTS_ATOMIC_RMW=n architecture plans to support Rust, the
+// load/store helpers that guarantee atomicity against RmW operations (usually via a lock) need to
+// be added.
+crate::static_assert!(
+    cfg!(CONFIG_ARCH_SUPPORTS_ATOMIC_RMW),
+    "The current implementation of atomic i8/i16/ptr relies on the architecture being \
+    ARCH_SUPPORTS_ATOMIC_RMW"
+);
+
 impl AtomicImpl for i8 {
     type Delta = Self;
 }
 
-// The current helpers of load/store uses `{WRITE,READ}_ONCE()` hence the atomicity is only
-// guaranteed against read-modify-write operations if the architecture supports native atomic RmW.
-#[cfg(CONFIG_ARCH_SUPPORTS_ATOMIC_RMW)]
 impl AtomicImpl for i16 {
     type Delta = Self;
 }
-- 
2.51.0

From: Boqun Feng <boqun.feng@gmail.com>
To: rust-for-linux@vger.kernel.org, linux-kernel@vger.kernel.org, rcu@vger.kernel.org
Cc: Miguel Ojeda, Boqun Feng, Gary Guo, Björn Roy Baron, Benno Lossin,
    Andreas Hindborg, Alice Ryhl, Trevor Gross, Danilo Krummrich,
    Will Deacon, Peter Zijlstra, Mark Rutland,
    "Paul E. McKenney", Frederic Weisbecker, Neeraj Upadhyay, Joel Fernandes,
    Josh Triplett, Uladzislau Rezki, Steven Rostedt, Mathieu Desnoyers,
    Lai Jiangshan, Zqiang, FUJITA Tomonori, Dirk Behme
Subject: [PATCH v2 2/2] rust: sync: atomic: Add Atomic<*{mut,const} T> support
Date: Tue, 20 Jan 2026 22:05:03 +0800
Message-ID: <20260120140503.62804-3-boqun.feng@gmail.com>
X-Mailer: git-send-email 2.51.0
In-Reply-To: <20260120140503.62804-1-boqun.feng@gmail.com>
References: <20260120140503.62804-1-boqun.feng@gmail.com>

Atomic pointer support is an important piece of many synchronization
algorithms, e.g. RCU, hence provide support for it.

Note that instead of relying on atomic_long or the existing `Atomic`
implementation, a new set of helpers (atomic_ptr_*) is introduced
specifically for atomic pointers. This is because a ptr2int cast would
lose the provenance of the pointer, and even though in theory there are a
few tricks by which the provenance can be restored, the implementation
stays simpler if C provides atomic pointers directly.

The side effects of this approach are that we don't have the arithmetic
and logical operations for pointers yet, and that the current
implementation only works on ARCH_SUPPORTS_ATOMIC_RMW architectures; both
are implementation issues and can be addressed later.

Reviewed-by: Gary Guo
Reviewed-by: FUJITA Tomonori
Signed-off-by: Boqun Feng <boqun.feng@gmail.com>
---
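
The provenance point above can be illustrated in plain Rust: an integer on its
own carries no provenance, so turning it back into a dereferenceable pointer
needs an explicit trick, whereas a pointer-typed value (which is what the
atomic_ptr_* helpers store) keeps its provenance implicitly. A standalone
sketch, not kernel code, using the stabilized provenance APIs:

fn main() {
    let mut x = 42u32;
    let p: *mut u32 = &mut x;

    // Trick 1: expose the provenance, then recover it from the bare address.
    let addr = p.expose_provenance();
    let q: *mut u32 = core::ptr::with_exposed_provenance_mut(addr);

    // Trick 2: re-attach the provenance of an existing pointer to a bare address.
    let r: *mut u32 = p.with_addr(p.addr());

    // No trick needed: a pointer-typed value keeps its provenance.
    let s: *mut u32 = p;

    // SAFETY: all three pointers point to the live local `x`.
    unsafe {
        assert_eq!(*q, 42);
        assert_eq!(*r, 42);
        assert_eq!(*s, 42);
    }
}
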
 rust/helpers/atomic_ext.c            |  3 ++
 rust/kernel/sync/atomic.rs           | 12 +++++++-
 rust/kernel/sync/atomic/internal.rs  | 24 +++++++++------
 rust/kernel/sync/atomic/predefine.rs | 46 ++++++++++++++++++++++++++++
 4 files changed, 75 insertions(+), 10 deletions(-)

diff --git a/rust/helpers/atomic_ext.c b/rust/helpers/atomic_ext.c
index 240218e2e708..c267d5190529 100644
--- a/rust/helpers/atomic_ext.c
+++ b/rust/helpers/atomic_ext.c
@@ -36,6 +36,7 @@ __rust_helper void rust_helper_atomic_##tname##_set_release(type *ptr, type val)
 
 GEN_READ_SET_HELPERS(i8, s8)
 GEN_READ_SET_HELPERS(i16, s16)
+GEN_READ_SET_HELPERS(ptr, const void *)
 
 /*
  * xchg helpers depend on ARCH_SUPPORTS_ATOMIC_RMW and on the
@@ -59,6 +60,7 @@ rust_helper_atomic_##tname##_xchg##suffix(type *ptr, type new) \
 
 GEN_XCHG_HELPERS(i8, s8)
 GEN_XCHG_HELPERS(i16, s16)
+GEN_XCHG_HELPERS(ptr, const void *)
 
 /*
  * try_cmpxchg helpers depend on ARCH_SUPPORTS_ATOMIC_RMW and on the
@@ -82,3 +84,4 @@ rust_helper_atomic_##tname##_try_cmpxchg##suffix(type *ptr, type *old, type new)
 
 GEN_TRY_CMPXCHG_HELPERS(i8, s8)
 GEN_TRY_CMPXCHG_HELPERS(i16, s16)
+GEN_TRY_CMPXCHG_HELPERS(ptr, const void *)
diff --git a/rust/kernel/sync/atomic.rs b/rust/kernel/sync/atomic.rs
index 53177052b4b2..7ed8febe4a58 100644
--- a/rust/kernel/sync/atomic.rs
+++ b/rust/kernel/sync/atomic.rs
@@ -52,6 +52,10 @@
 #[repr(transparent)]
 pub struct Atomic(AtomicRepr);
 
+// SAFETY: `Atomic` is safe to transfer between execution contexts because of the safety
+// requirement of `AtomicType`.
+unsafe impl Send for Atomic {}
+
 // SAFETY: `Atomic` is safe to share among execution contexts because all accesses are atomic.
 unsafe impl Sync for Atomic {}
 
@@ -69,6 +73,11 @@ unsafe impl Sync for Atomic {}
 ///
 /// - [`Self`] must have the same size and alignment as [`Self::Repr`].
 /// - [`Self`] must be [round-trip transmutable] to [`Self::Repr`].
+/// - [`Self`] must be safe to transfer between execution contexts; if it's [`Send`], this is
+///   automatically satisfied. The exception is pointer types (e.g. raw pointers and [`NonNull`])
+///   that are marked `!Send` but require `unsafe` for anything meaningful to be done with them:
+///   transferring pointer values between execution contexts is safe as long as the actual
+///   `unsafe` dereference is justified.
 ///
 /// Note that this is more relaxed than requiring the bi-directional transmutability (i.e.
 /// [`transmute()`] is always sound between `U` and `T`) because of the support for atomic
@@ -109,7 +118,8 @@ unsafe impl Sync for Atomic {}
 /// [`transmute()`]: core::mem::transmute
 /// [round-trip transmutable]: AtomicType#round-trip-transmutability
 /// [Examples]: AtomicType#examples
-pub unsafe trait AtomicType: Sized + Send + Copy {
+/// [`NonNull`]: core::ptr::NonNull
+pub unsafe trait AtomicType: Sized + Copy {
     /// The backing atomic implementation type.
     type Repr: AtomicImpl;
 }
diff --git a/rust/kernel/sync/atomic/internal.rs b/rust/kernel/sync/atomic/internal.rs
index ef516bcb02ee..e301db4eaf91 100644
--- a/rust/kernel/sync/atomic/internal.rs
+++ b/rust/kernel/sync/atomic/internal.rs
@@ -7,6 +7,7 @@
 use crate::bindings;
 use crate::macros::paste;
 use core::cell::UnsafeCell;
+use ffi::c_void;
 
 mod private {
     /// Sealed trait marker to disable customized impls on atomic implementation traits.
@@ -14,10 +15,11 @@ pub trait Sealed {}
 }
 
 // The C side supports atomic primitives only for `i32` and `i64` (`atomic_t` and `atomic64_t`),
-// while the Rust side also layers provides atomic support for `i8` and `i16`
-// on top of lower-level C primitives.
+// while the Rust side also provides atomic support for `i8`, `i16` and `*const c_void` on top of
+// lower-level C primitives.
 impl private::Sealed for i8 {}
 impl private::Sealed for i16 {}
+impl private::Sealed for *const c_void {}
 impl private::Sealed for i32 {}
 impl private::Sealed for i64 {}
 
@@ -26,10 +28,10 @@ impl private::Sealed for i64 {}
 /// This trait is sealed, and only types that map directly to the C side atomics
 /// or can be implemented with lower-level C primitives are allowed to implement this:
 ///
-/// - `i8` and `i16` are implemented with lower-level C primitives.
+/// - `i8`, `i16` and `*const c_void` are implemented with lower-level C primitives.
 /// - `i32` map to `atomic_t`
 /// - `i64` map to `atomic64_t`
-pub trait AtomicImpl: Sized + Send + Copy + private::Sealed {
+pub trait AtomicImpl: Sized + Copy + private::Sealed {
     /// The type of the delta in arithmetic or logical operations.
     ///
     /// For example, in `atomic_add(ptr, v)`, it's the type of `v`. Usually it's the same type of
@@ -37,9 +39,9 @@ pub trait AtomicImpl: Sized + Send + Copy + private::Sealed {
     type Delta;
 }
 
-// The current helpers of load/store of atomic `i8` and `i16` use `{WRITE,READ}_ONCE()` hence the
-// atomicity is only guaranteed against read-modify-write operations if the architecture supports
-// native atomic RmW.
+// The current helpers of load/store of atomic `i8`, `i16` and pointers use `{WRITE,READ}_ONCE()`
+// hence the atomicity is only guaranteed against read-modify-write operations if the architecture
+// supports native atomic RmW.
 //
 // In the future when a CONFIG_ARCH_SUPPORTS_ATOMIC_RMW=n architecture plans to support Rust, the
 // load/store helpers that guarantee atomicity against RmW operations (usually via a lock) need to
 // be added.
@@ -58,6 +60,10 @@ impl AtomicImpl for i16 {
     type Delta = Self;
 }
 
+impl AtomicImpl for *const c_void {
+    type Delta = isize;
+}
+
 // `atomic_t` implements atomic operations on `i32`.
 impl AtomicImpl for i32 {
     type Delta = Self;
@@ -269,7 +275,7 @@ macro_rules! declare_and_impl_atomic_methods {
 }
 
 declare_and_impl_atomic_methods!(
-    [ i8 => atomic_i8, i16 => atomic_i16, i32 => atomic, i64 => atomic64 ]
+    [ i8 => atomic_i8, i16 => atomic_i16, *const c_void => atomic_ptr, i32 => atomic, i64 => atomic64 ]
     /// Basic atomic operations
     pub trait AtomicBasicOps {
         /// Atomic read (load).
@@ -287,7 +293,7 @@ fn set[release](a: &AtomicRepr, v: Self) {
 );
 
 declare_and_impl_atomic_methods!(
-    [ i8 => atomic_i8, i16 => atomic_i16, i32 => atomic, i64 => atomic64 ]
+    [ i8 => atomic_i8, i16 => atomic_i16, *const c_void => atomic_ptr, i32 => atomic, i64 => atomic64 ]
     /// Exchange and compare-and-exchange atomic operations
    pub trait AtomicExchangeOps {
         /// Atomic exchange.
diff --git a/rust/kernel/sync/atomic/predefine.rs b/rust/kernel/sync/atomic/predefine.rs
index e04268379157..a560518e96ee 100644
--- a/rust/kernel/sync/atomic/predefine.rs
+++ b/rust/kernel/sync/atomic/predefine.rs
@@ -4,6 +4,7 @@
 
 use crate::static_assert;
 use core::mem::{align_of, size_of};
+use ffi::c_void;
 
 // Ensure size and alignment requirements are checked.
 static_assert!(size_of::() == size_of::());
@@ -28,6 +29,26 @@ unsafe impl super::AtomicType for i16 {
     type Repr = i16;
 }
 
+// SAFETY:
+//
+// - `*mut T` has the same size and alignment with `*const c_void`, and is round-trip
+//   transmutable to `*const c_void`.
+// - `*mut T` is safe to transfer between execution contexts. See the safety requirement of
+//   [`AtomicType`].
+unsafe impl super::AtomicType for *mut T {
+    type Repr = *const c_void;
+}
+
+// SAFETY:
+//
+// - `*const T` has the same size and alignment with `*const c_void`, and is round-trip
+//   transmutable to `*const c_void`.
+// - `*const T` is safe to transfer between execution contexts. See the safety requirement of
+//   [`AtomicType`].
+unsafe impl super::AtomicType for *const T {
+    type Repr = *const c_void;
+}
+
 // SAFETY: `i32` has the same size and alignment with itself, and is round-trip transmutable to
 // itself.
 unsafe impl super::AtomicType for i32 {
@@ -339,4 +360,29 @@ fn atomic_bool_tests() {
         assert_eq!(false, x.load(Relaxed));
         assert_eq!(Ok(false), x.cmpxchg(false, true, Full));
     }
+
+    #[test]
+    fn atomic_ptr_tests() {
+        let mut v = 42;
+        let mut u = 43;
+        let x = Atomic::new(&raw mut v);
+
+        assert_eq!(x.load(Acquire), &raw mut v);
+        assert_eq!(x.cmpxchg(&raw mut u, &raw mut u, Relaxed), Err(&raw mut v));
+        assert_eq!(x.cmpxchg(&raw mut v, &raw mut u, Relaxed), Ok(&raw mut v));
+        assert_eq!(x.load(Relaxed), &raw mut u);
+
+        let x = Atomic::new(&raw const v);
+
+        assert_eq!(x.load(Acquire), &raw const v);
+        assert_eq!(
+            x.cmpxchg(&raw const u, &raw const u, Relaxed),
+            Err(&raw const v)
+        );
+        assert_eq!(
+            x.cmpxchg(&raw const v, &raw const u, Relaxed),
+            Ok(&raw const v)
+        );
+        assert_eq!(x.load(Relaxed), &raw const u);
+    }
 }
-- 
2.51.0
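
For completeness, a usage sketch of the interface exercised by
atomic_ptr_tests() above: one-shot publication of a pointer through
Atomic<*mut T>, using only operations shown in this series (`load` and
`cmpxchg`). This is not part of the patch; the import paths, the `Config`
type, and the `publish`/`current` helpers are assumed for illustration, and in
real kernel code the lifetime of the published pointer would have to be
justified.

// Assumed import paths for the types and orderings added by this series.
use kernel::sync::atomic::{ordering::{Acquire, Full}, Atomic};

struct Config {
    verbose: bool,
}

/// Try to publish `candidate` into `slot`; only the first caller that finds
/// the slot null succeeds, every later caller gets `false` back.
fn publish(slot: &Atomic<*mut Config>, candidate: *mut Config) -> bool {
    slot.cmpxchg(core::ptr::null_mut(), candidate, Full).is_ok()
}

/// Read the currently published configuration, if any.
fn current(slot: &Atomic<*mut Config>) -> Option<&Config> {
    let ptr = slot.load(Acquire);
    // SAFETY: a non-null pointer stored in `slot` was published by `publish()`
    // and (by assumption of this sketch) stays valid while the reference is
    // alive.
    (!ptr.is_null()).then(|| unsafe { &*ptr })
}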