From: Mark Rutland <mark.rutland@arm.com>
To: linux-kernel@vger.kernel.org
Cc: akiyks@gmail.com, boqun.feng@gmail.com, corbet@lwn.net,
	keescook@chromium.org, linux@armlinux.org.uk, linux-doc@vger.kernel.org,
	mark.rutland@arm.com, mchehab@kernel.org, paulmck@kernel.org,
	peterz@infradead.org, rdunlap@infradead.org, sstabellini@kernel.org,
	will@kernel.org
Subject: [PATCH v2 03/27] locking/atomic: hexagon: remove redundant arch_atomic_cmpxchg
Date: Mon, 5 Jun 2023 08:01:00 +0100
Message-Id: <20230605070124.3741859-4-mark.rutland@arm.com>
In-Reply-To: <20230605070124.3741859-1-mark.rutland@arm.com>
References: <20230605070124.3741859-1-mark.rutland@arm.com>

Hexagon's implementation of arch_atomic_cmpxchg() is identical to its
implementation of arch_cmpxchg(). Have it define arch_atomic_cmpxchg()
in terms of arch_cmpxchg(), matching what it does for arch_atomic_xchg()
and arch_xchg().

At the same time, remove the kerneldoc comments for hexagon's
arch_atomic_xchg() and arch_atomic_cmpxchg(). The arch_atomic_*()
namespace is shared by all architectures and the API should be
documented centrally, and the comments aren't all that helpful as-is.

There should be no functional change as a result of this patch.

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Reviewed-by: Kees Cook <keescook@chromium.org>
Cc: Boqun Feng <boqun.feng@gmail.com>
Cc: Paul E. McKenney <paulmck@kernel.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Will Deacon <will@kernel.org>
---
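A rough, hypothetical usage sketch (not part of the patch; the helper name
below is made up for illustration): with the new definition,
arch_atomic_cmpxchg(v, old, new) simply expands to
arch_cmpxchg(&v->counter, old, new), so callers see the same behaviour as
with the old open-coded memw_locked loop.

  /* Hypothetical caller, for illustration only: take a 0 -> 1 lock word. */
  static inline bool example_trylock(atomic_t *lock)
  {
  	/*
  	 * Expands to arch_cmpxchg(&lock->counter, 0, 1); it returns the
  	 * value previously held in the counter, which is 0 only if the
  	 * swap actually took place.
  	 */
  	return arch_atomic_cmpxchg(lock, 0, 1) == 0;
  }
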
 arch/hexagon/include/asm/atomic.h | 46 +++----------------------------
 1 file changed, 4 insertions(+), 42 deletions(-)

diff --git a/arch/hexagon/include/asm/atomic.h b/arch/hexagon/include/asm/atomic.h
index 6e94f8d04146f..738857e10d6ec 100644
--- a/arch/hexagon/include/asm/atomic.h
+++ b/arch/hexagon/include/asm/atomic.h
@@ -36,49 +36,11 @@ static inline void arch_atomic_set(atomic_t *v, int new)
  */
 #define arch_atomic_read(v)	READ_ONCE((v)->counter)
 
-/**
- * arch_atomic_xchg - atomic
- * @v: pointer to memory to change
- * @new: new value (technically passed in a register -- see xchg)
- */
-#define arch_atomic_xchg(v, new)	(arch_xchg(&((v)->counter), (new)))
-
-
-/**
- * arch_atomic_cmpxchg - atomic compare-and-exchange values
- * @v: pointer to value to change
- * @old: desired old value to match
- * @new: new value to put in
- *
- * Parameters are then pointer, value-in-register, value-in-register,
- * and the output is the old value.
- *
- * Apparently this is complicated for archs that don't support
- * the memw_locked like we do (or it's broken or whatever).
- *
- * Kind of the lynchpin of the rest of the generically defined routines.
- * Remember V2 had that bug with dotnew predicate set by memw_locked.
- *
- * "old" is "expected" old val, __oldval is actual old value
- */
-static inline int arch_atomic_cmpxchg(atomic_t *v, int old, int new)
-{
-	int __oldval;
+#define arch_atomic_xchg(v, new) \
+	(arch_xchg(&((v)->counter), (new)))
 
-	asm volatile(
-		"1:	%0 = memw_locked(%1);\n"
-		"	{ P0 = cmp.eq(%0,%2);\n"
-		"	  if (!P0.new) jump:nt 2f; }\n"
-		"	memw_locked(%1,P0) = %3;\n"
-		"	if (!P0) jump 1b;\n"
-		"2:\n"
-		: "=&r" (__oldval)
-		: "r" (&v->counter), "r" (old), "r" (new)
-		: "memory", "p0"
-	);
-
-	return __oldval;
-}
+#define arch_atomic_cmpxchg(v, old, new) \
+	(arch_cmpxchg(&((v)->counter), (old), (new)))
 
 #define ATOMIC_OP(op)						\
 static inline void arch_atomic_##op(int i, atomic_t *v)	\
-- 
2.30.2