From: Boqun Feng <boqun.feng@gmail.com>
To: rust-for-linux@vger.kernel.org, rcu@vger.kernel.org,
	linux-kernel@vger.kernel.org, linux-arch@vger.kernel.org,
	llvm@lists.linux.dev, lkmm@lists.linux.dev
Cc: Miguel Ojeda, Alex Gaynor, Wedson Almeida Filho, Boqun Feng,
	Gary Guo, Björn Roy Baron, Benno Lossin, Andreas Hindborg,
	Alice Ryhl, Alan Stern, Andrea Parri, Will Deacon,
	Peter Zijlstra, Nicholas Piggin, David Howells, Jade Alglave,
	Luc Maranget, "Paul E. McKenney", Akira Yokosawa, Daniel Lustig,
	Joel Fernandes, Nathan Chancellor, Nick Desaulniers,
	kent.overstreet@gmail.com, Greg Kroah-Hartman, elver@google.com,
	Mark Rutland, Thomas Gleixner, Ingo Molnar, Borislav Petkov,
	Dave Hansen, x86@kernel.org,
Peter Anvin" , Catalin Marinas , torvalds@linux-foundation.org, linux-arm-kernel@lists.infradead.org, linux-fsdevel@vger.kernel.org, Trevor Gross , dakr@redhat.com, Frederic Weisbecker , Neeraj Upadhyay , Josh Triplett , Uladzislau Rezki , Steven Rostedt , Mathieu Desnoyers , Lai Jiangshan , Zqiang , Paul Walmsley , Palmer Dabbelt , Albert Ou , linux-riscv@lists.infradead.org Subject: [RFC v2 05/13] rust: sync: atomic: Add atomic {cmp,}xchg operations Date: Thu, 31 Oct 2024 23:02:28 -0700 Message-ID: <20241101060237.1185533-6-boqun.feng@gmail.com> X-Mailer: git-send-email 2.45.2 In-Reply-To: <20241101060237.1185533-1-boqun.feng@gmail.com> References: <20241101060237.1185533-1-boqun.feng@gmail.com> Precedence: bulk X-Mailing-List: linux-kernel@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 Content-Transfer-Encoding: quoted-printable Content-Type: text/plain; charset="utf-8" xchg() and cmpxchg() are basic operations on atomic. Provide these based on C APIs. Note that cmpxchg() use the similar function signature as compare_exchange() in Rust std: returning a `Result`, `Ok(old)` means the operation succeeds and `Err(old)` means the operation fails. With the compiler optimization and inline helpers, it should provides the same efficient code generation as using atomic_try_cmpxchg() or atomic_cmpxchg() correctly. Except it's not! Because of commit 44fe84459faf ("locking/atomic: Fix atomic_try_cmpxchg() semantics"), the atomic_try_cmpxchg() on x86 has a branch even if the caller doesn't care about the success of cmpxchg and only wants to use the old value. For example, for code like: // Uses the latest value regardlessly, same as atomic_cmpxchg() in C. let latest =3D x.cmpxchg(42, 64, Full).unwrap_or_else(|old| old); It will still generate code: movl $0x40, %ecx movl $0x34, %eax lock cmpxchgl %ecx, 0x4(%rsp) jne 1f 2: ... 1: movl %eax, %ecx jmp 2b Attempting to write an x86 try_cmpxchg_exclusive() for Rust use only, because the Rust function takes a `&mut` for old pointer, which must be exclusive to the function, therefore it's unsafe to use some shared pointer. But maybe I'm missing something? Signed-off-by: Boqun Feng --- rust/kernel/sync/atomic/generic.rs | 151 +++++++++++++++++++++++++++++ 1 file changed, 151 insertions(+) diff --git a/rust/kernel/sync/atomic/generic.rs b/rust/kernel/sync/atomic/g= eneric.rs index 204da38e2691..bfccc4336c75 100644 --- a/rust/kernel/sync/atomic/generic.rs +++ b/rust/kernel/sync/atomic/generic.rs @@ -251,3 +251,154 @@ pub fn store(&self, v: T,= _: Ordering) { }; } } + +impl Atomic +where + T::Repr: AtomicHasXchgOps, +{ + /// Atomic exchange. + /// + /// # Examples + /// + /// ```rust + /// use kernel::sync::atomic::{Atomic, Acquire, Relaxed}; + /// + /// let x =3D Atomic::new(42); + /// + /// assert_eq!(42, x.xchg(52, Acquire)); + /// assert_eq!(52, x.load(Relaxed)); + /// ``` + #[inline(always)] + pub fn xchg(&self, v: T, _: Ordering) -> T { + let v =3D T::into_repr(v); + let a =3D self.as_ptr().cast::(); + + // SAFETY: + // - For calling the atomic_xchg*() function: + // - `self.as_ptr()` is a valid pointer, and per the safety requ= irement of `AllocAtomic`, + // a `*mut T` is a valid `*mut T::Repr`. Therefore `a` is a v= alid pointer, + // - per the type invariants, the following atomic operation won= 't cause data races. + // - For extra safety requirement of usage on pointers returned by= `self.as_ptr(): + // - atomic operations are used here. 
+        let ret = unsafe {
+            match Ordering::ORDER {
+                OrderingDesc::Full => T::Repr::atomic_xchg(a, v),
+                OrderingDesc::Acquire => T::Repr::atomic_xchg_acquire(a, v),
+                OrderingDesc::Release => T::Repr::atomic_xchg_release(a, v),
+                OrderingDesc::Relaxed => T::Repr::atomic_xchg_relaxed(a, v),
+            }
+        };
+
+        T::from_repr(ret)
+    }
+
+    /// Atomic compare and exchange.
+    ///
+    /// Compare: The comparison is done via a byte-level comparison between the atomic variable
+    /// and the `old` value.
+    ///
+    /// Ordering: A failed compare and exchange does not provide any ordering guarantee; the
+    /// read part of a failed cmpxchg should be treated as a relaxed read.
+    ///
+    /// Returns `Ok(value)` if cmpxchg succeeds, and `value` is guaranteed to be equal to
+    /// `old`; otherwise returns `Err(value)`, and `value` is the value of the atomic variable
+    /// when the cmpxchg was happening.
+    ///
+    /// # Examples
+    ///
+    /// ```rust
+    /// use kernel::sync::atomic::{Atomic, Full, Relaxed};
+    ///
+    /// let x = Atomic::new(42);
+    ///
+    /// // Checks whether cmpxchg succeeded.
+    /// let success = x.cmpxchg(52, 64, Relaxed).is_ok();
+    /// # assert!(!success);
+    ///
+    /// // Checks whether cmpxchg failed.
+    /// let failure = x.cmpxchg(52, 64, Relaxed).is_err();
+    /// # assert!(failure);
+    ///
+    /// // Uses the old value if it failed, probably to retry the cmpxchg.
+    /// match x.cmpxchg(52, 64, Relaxed) {
+    ///     Ok(_) => { },
+    ///     Err(old) => {
+    ///         // do something with `old`.
+    ///         # assert_eq!(old, 42);
+    ///     }
+    /// }
+    ///
+    /// // Uses the latest value regardless, same as atomic_cmpxchg() in C.
+    /// let latest = x.cmpxchg(42, 64, Full).unwrap_or_else(|old| old);
+    /// # assert_eq!(42, latest);
+    /// assert_eq!(64, x.load(Relaxed));
+    /// ```
+    #[inline(always)]
+    pub fn cmpxchg<Ordering: All>(&self, mut old: T, new: T, o: Ordering) -> Result<T, T> {
+        if self.try_cmpxchg(&mut old, new, o) {
+            Ok(old)
+        } else {
+            Err(old)
+        }
+    }
+
+    /// Atomic compare and exchange, returning whether the operation succeeded.
+    ///
+    /// The "Compare" and "Ordering" parts are the same as in [`Atomic::cmpxchg`].
+    ///
+    /// Returns `true` if the cmpxchg succeeds; otherwise returns `false` with `old` updated
+    /// to the value of the atomic variable when the cmpxchg was happening.
+    #[inline(always)]
+    fn try_cmpxchg<Ordering: All>(&self, old: &mut T, new: T, _: Ordering) -> bool {
+        let old = (old as *mut T).cast::<T::Repr>();
+        let new = T::into_repr(new);
+        let a = self.0.get().cast::<T::Repr>();
+
+        // SAFETY:
+        // - For calling the atomic_try_cmpxchg*() function:
+        //   - `self.as_ptr()` is a valid pointer, and per the safety requirement of
+        //     `AllowAtomic`, a `*mut T` is a valid `*mut T::Repr`. Therefore `a` is a valid
+        //     pointer,
+        //   - per the type invariants, the following atomic operation won't cause data races.
+        //   - `old` is a valid pointer to write to because it comes from a mutable reference.
+        // - For the extra safety requirement on usage of pointers returned by `self.as_ptr()`:
+        //   - atomic operations are used here.
+        unsafe {
+            match Ordering::ORDER {
+                OrderingDesc::Full => T::Repr::atomic_try_cmpxchg(a, old, new),
+                OrderingDesc::Acquire => T::Repr::atomic_try_cmpxchg_acquire(a, old, new),
+                OrderingDesc::Release => T::Repr::atomic_try_cmpxchg_release(a, old, new),
+                OrderingDesc::Relaxed => T::Repr::atomic_try_cmpxchg_relaxed(a, old, new),
+            }
+        }
+    }
+
+    /// Atomic compare and exchange, returning the [`Result`].
+    ///
+    /// The "Compare" and "Ordering" parts are the same as in [`Atomic::cmpxchg`].
+    ///
+    /// Returns `Ok(value)` if cmpxchg succeeds, and `value` is guaranteed to be equal to
+    /// `old`; otherwise returns `Err(value)`, and `value` is the value of the atomic variable
+    /// when the cmpxchg was happening.
+    ///
+    /// # Examples
+    ///
+    /// ```rust
+    /// use kernel::sync::atomic::{Atomic, Acquire, Relaxed};
+    ///
+    /// let x = Atomic::new(42i32);
+    ///
+    /// assert!(x.compare_exchange(52, 64, Acquire).is_err());
+    /// assert_eq!(42, x.load(Relaxed));
+    ///
+    /// assert!(x.compare_exchange(42, 64, Acquire).is_ok());
+    /// assert_eq!(64, x.load(Relaxed));
+    /// ```
+    #[inline(always)]
+    pub fn compare_exchange<Ordering: All>(&self, mut old: T, new: T, o: Ordering) -> Result<T, T> {
+        if self.try_cmpxchg(&mut old, new, o) {
+            Ok(old)
+        } else {
+            Err(old)
+        }
+    }
+}
-- 
2.45.2
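
For comparison, the C-side idiom that cmpxchg()/try_cmpxchg() above is
expected to match in code generation is the atomic_try_cmpxchg() retry
loop. A minimal sketch modeled on the kernel's atomic_fetch_add_unless();
the function name and the add-unless policy are illustrative only, not
part of this patch:

	#include <linux/atomic.h>

	/*
	 * Sketch only: add @a to @v unless @v is @u, returning the old
	 * value. On failure, atomic_try_cmpxchg() updates @old to the
	 * current value, so the loop retries without an extra
	 * atomic_read().
	 */
	static int example_add_unless(atomic_t *v, int a, int u)
	{
		int old = atomic_read(v);

		do {
			if (old == u)
				break;
		} while (!atomic_try_cmpxchg(v, &old, old + a));

		return old;
	}

The Rust `Err(old)` plays the role of the updated `*old` here: the
caller can retry (or consume the latest value) without re-reading the
atomic.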
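
And a rough sketch of the x86 try_cmpxchg_exclusive() idea mentioned in
the commit message, 32-bit case only, modeled on __raw_try_cmpxchg()
after commit 44fe84459faf but with the failure branch dropped; the
name, types and exact constraints are hypothetical:

	#include <linux/types.h>
	#include <asm/asm.h>

	/*
	 * Hypothetical, not part of this patch. Unlike try_cmpxchg(),
	 * this writes *old back unconditionally, so there is no branch;
	 * that is only sound if the caller owns *old exclusively, which
	 * is exactly what the Rust `&mut` argument guarantees.
	 */
	static inline bool try_cmpxchg_exclusive(u32 *ptr, u32 *old, u32 new)
	{
		bool success;
		u32 prev = *old;

		asm volatile("lock cmpxchgl %[new], %[ptr]"
			     CC_SET(z)
			     : CC_OUT(z) (success),
			       [ptr] "+m" (*ptr),
			       [old] "+a" (prev)
			     : [new] "r" (new)
			     : "memory");

		/* On success prev == *old already, so this store is benign. */
		*old = prev;
		return success;
	}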