From nobody Tue Feb 10 12:39:33 2026
From: "Paul E. McKenney"
To: linux-kernel@vger.kernel.org
Cc: x86@kernel.org, akiyks@gmail.com, linux-doc@vger.kernel.org, kernel-team@meta.com, "Paul E. McKenney", Will Deacon, Peter Zijlstra, Boqun Feng, Mark Rutland
Subject: [PATCH locking/atomic 01/19] locking/atomic: Fix fetch_add_unless missing-period typo
Date: Wed, 10 May 2023 11:16:59 -0700
Message-Id: <20230510181717.2200934-1-paulmck@kernel.org>
In-Reply-To: <19135936-06d7-4705-8bc8-bb31c2a478ca@paulmck-laptop>
References: <19135936-06d7-4705-8bc8-bb31c2a478ca@paulmck-laptop>

The fetch_add_unless() kernel-doc header is missing a period (".").
Therefore, add it.

Signed-off-by: Paul E. McKenney
Cc: Will Deacon
Cc: Peter Zijlstra
Cc: Boqun Feng
Cc: Mark Rutland
---
 include/linux/atomic/atomic-arch-fallback.h | 6 +++---
 scripts/atomic/fallbacks/fetch_add_unless   | 2 +-
 2 files changed, 4 insertions(+), 4 deletions(-)

diff --git a/include/linux/atomic/atomic-arch-fallback.h b/include/linux/atomic/atomic-arch-fallback.h
index a6e4437c5f36..c4087c32fb0e 100644
--- a/include/linux/atomic/atomic-arch-fallback.h
+++ b/include/linux/atomic/atomic-arch-fallback.h
@@ -1351,7 +1351,7 @@ arch_atomic_add_negative(int i, atomic_t *v)
  * @u: ...unless v is equal to u.
  *
  * Atomically adds @a to @v, so long as @v was not already @u.
- * Returns original value of @v
+ * Returns original value of @v.
  */
 static __always_inline int
 arch_atomic_fetch_add_unless(atomic_t *v, int a, int u)
@@ -2567,7 +2567,7 @@ arch_atomic64_add_negative(s64 i, atomic64_t *v)
  * @u: ...unless v is equal to u.
  *
  * Atomically adds @a to @v, so long as @v was not already @u.
- * Returns original value of @v
+ * Returns original value of @v.
  */
 static __always_inline s64
 arch_atomic64_fetch_add_unless(atomic64_t *v, s64 a, s64 u)
@@ -2668,4 +2668,4 @@ arch_atomic64_dec_if_positive(atomic64_t *v)
 #endif
 
 #endif /* _LINUX_ATOMIC_FALLBACK_H */
-// ad2e2b4d168dbc60a73922616047a9bfa446af36
+// 201cc01b616875888e0b2c79965c569a89c0edcd
diff --git a/scripts/atomic/fallbacks/fetch_add_unless b/scripts/atomic/fallbacks/fetch_add_unless
index 68ce13c8b9da..a1692df0d514 100755
--- a/scripts/atomic/fallbacks/fetch_add_unless
+++ b/scripts/atomic/fallbacks/fetch_add_unless
@@ -6,7 +6,7 @@ cat << EOF
  * @u: ...unless v is equal to u.
  *
  * Atomically adds @a to @v, so long as @v was not already @u.
- * Returns original value of @v
+ * Returns original value of @v.
  */
 static __always_inline ${int}
 arch_${atomic}_fetch_add_unless(${atomic}_t *v, ${int} a, ${int} u)
-- 
2.40.1

From nobody Tue Feb 10 12:39:33 2026
From: "Paul E. McKenney"
To: linux-kernel@vger.kernel.org
Cc: x86@kernel.org, akiyks@gmail.com, linux-doc@vger.kernel.org, kernel-team@meta.com, "Paul E. McKenney", Will Deacon, Peter Zijlstra, Boqun Feng, Mark Rutland
Subject: [PATCH locking/atomic 02/19] locking/atomic: Add "@" before "true" and "false" for fallback templates
Date: Wed, 10 May 2023 11:17:00 -0700
Message-Id: <20230510181717.2200934-2-paulmck@kernel.org>
In-Reply-To: <19135936-06d7-4705-8bc8-bb31c2a478ca@paulmck-laptop>
References: <19135936-06d7-4705-8bc8-bb31c2a478ca@paulmck-laptop>

Fix up kernel-doc pretty-printing by adding "@" before "true" and
"false" for atomic-operation fallback scripts lacking them.

Signed-off-by: Paul E. McKenney
Cc: Will Deacon
Cc: Peter Zijlstra
Cc: Boqun Feng
Cc: Mark Rutland
---
 include/linux/atomic/atomic-arch-fallback.h | 54 ++++++++++-----------
 scripts/atomic/fallbacks/add_negative       |  4 +-
 scripts/atomic/fallbacks/add_unless         |  2 +-
 scripts/atomic/fallbacks/dec_and_test       |  2 +-
 scripts/atomic/fallbacks/inc_and_test       |  2 +-
 scripts/atomic/fallbacks/inc_not_zero       |  2 +-
 scripts/atomic/fallbacks/sub_and_test       |  2 +-
 7 files changed, 34 insertions(+), 34 deletions(-)

diff --git a/include/linux/atomic/atomic-arch-fallback.h b/include/linux/atomic/atomic-arch-fallback.h
index c4087c32fb0e..606be9d3aa22 100644
--- a/include/linux/atomic/atomic-arch-fallback.h
+++ b/include/linux/atomic/atomic-arch-fallback.h
@@ -1185,7 +1185,7 @@ arch_atomic_try_cmpxchg(atomic_t *v, int *old, int new)
  * @v: pointer of type atomic_t
  *
  * Atomically subtracts @i from @v and returns
- * true if the result is zero, or false for all
+ * @true if the result is zero, or @false for all
  * other cases.
  */
 static __always_inline bool
@@ -1202,7 +1202,7 @@ arch_atomic_sub_and_test(int i, atomic_t *v)
  * @v: pointer of type atomic_t
  *
  * Atomically decrements @v by 1 and
- * returns true if the result is 0, or false for all other
+ * returns @true if the result is 0, or @false for all other
  * cases.
  */
 static __always_inline bool
@@ -1219,7 +1219,7 @@ arch_atomic_dec_and_test(atomic_t *v)
  * @v: pointer of type atomic_t
  *
  * Atomically increments @v by 1
- * and returns true if the result is zero, or false for all
+ * and returns @true if the result is zero, or @false for all
  * other cases.
  */
 static __always_inline bool
@@ -1243,8 +1243,8 @@ arch_atomic_inc_and_test(atomic_t *v)
  * @i: integer value to add
  * @v: pointer of type atomic_t
  *
- * Atomically adds @i to @v and returns true if the result is negative,
- * or false when the result is greater than or equal to zero.
+ * Atomically adds @i to @v and returns @true if the result is negative,
+ * or @false when the result is greater than or equal to zero.
  */
 static __always_inline bool
 arch_atomic_add_negative(int i, atomic_t *v)
@@ -1260,8 +1260,8 @@ arch_atomic_add_negative(int i, atomic_t *v)
  * @i: integer value to add
  * @v: pointer of type atomic_t
  *
- * Atomically adds @i to @v and returns true if the result is negative,
- * or false when the result is greater than or equal to zero.
+ * Atomically adds @i to @v and returns @true if the result is negative,
+ * or @false when the result is greater than or equal to zero.
  */
 static __always_inline bool
 arch_atomic_add_negative_acquire(int i, atomic_t *v)
@@ -1277,8 +1277,8 @@ arch_atomic_add_negative_acquire(int i, atomic_t *v)
  * @i: integer value to add
  * @v: pointer of type atomic_t
  *
- * Atomically adds @i to @v and returns true if the result is negative,
- * or false when the result is greater than or equal to zero.
+ * Atomically adds @i to @v and returns @true if the result is negative,
+ * or @false when the result is greater than or equal to zero.
  */
 static __always_inline bool
 arch_atomic_add_negative_release(int i, atomic_t *v)
@@ -1294,8 +1294,8 @@ arch_atomic_add_negative_release(int i, atomic_t *v)
  * @i: integer value to add
  * @v: pointer of type atomic_t
  *
- * Atomically adds @i to @v and returns true if the result is negative,
- * or false when the result is greater than or equal to zero.
+ * Atomically adds @i to @v and returns @true if the result is negative,
+ * or @false when the result is greater than or equal to zero.
  */
 static __always_inline bool
 arch_atomic_add_negative_relaxed(int i, atomic_t *v)
@@ -1376,7 +1376,7 @@ arch_atomic_fetch_add_unless(atomic_t *v, int a, int u)
  * @u: ...unless v is equal to u.
  *
  * Atomically adds @a to @v, if @v was not already @u.
- * Returns true if the addition was done.
+ * Returns @true if the addition was done.
  */
 static __always_inline bool
 arch_atomic_add_unless(atomic_t *v, int a, int u)
@@ -1392,7 +1392,7 @@ arch_atomic_add_unless(atomic_t *v, int a, int u)
  * @v: pointer of type atomic_t
  *
  * Atomically increments @v by 1, if @v is non-zero.
- * Returns true if the increment was done.
+ * Returns @true if the increment was done.
  */
 static __always_inline bool
 arch_atomic_inc_not_zero(atomic_t *v)
@@ -2401,7 +2401,7 @@ arch_atomic64_try_cmpxchg(atomic64_t *v, s64 *old, s64 new)
  * @v: pointer of type atomic64_t
  *
  * Atomically subtracts @i from @v and returns
- * true if the result is zero, or false for all
+ * @true if the result is zero, or @false for all
  * other cases.
  */
 static __always_inline bool
@@ -2418,7 +2418,7 @@ arch_atomic64_sub_and_test(s64 i, atomic64_t *v)
  * @v: pointer of type atomic64_t
  *
  * Atomically decrements @v by 1 and
- * returns true if the result is 0, or false for all other
+ * returns @true if the result is 0, or @false for all other
  * cases.
  */
 static __always_inline bool
@@ -2435,7 +2435,7 @@ arch_atomic64_dec_and_test(atomic64_t *v)
  * @v: pointer of type atomic64_t
  *
  * Atomically increments @v by 1
- * and returns true if the result is zero, or false for all
+ * and returns @true if the result is zero, or @false for all
  * other cases.
  */
 static __always_inline bool
@@ -2459,8 +2459,8 @@ arch_atomic64_inc_and_test(atomic64_t *v)
  * @i: integer value to add
  * @v: pointer of type atomic64_t
  *
- * Atomically adds @i to @v and returns true if the result is negative,
- * or false when the result is greater than or equal to zero.
+ * Atomically adds @i to @v and returns @true if the result is negative,
+ * or @false when the result is greater than or equal to zero.
  */
 static __always_inline bool
 arch_atomic64_add_negative(s64 i, atomic64_t *v)
@@ -2476,8 +2476,8 @@ arch_atomic64_add_negative(s64 i, atomic64_t *v)
  * @i: integer value to add
  * @v: pointer of type atomic64_t
  *
- * Atomically adds @i to @v and returns true if the result is negative,
- * or false when the result is greater than or equal to zero.
+ * Atomically adds @i to @v and returns @true if the result is negative,
+ * or @false when the result is greater than or equal to zero.
  */
 static __always_inline bool
 arch_atomic64_add_negative_acquire(s64 i, atomic64_t *v)
@@ -2493,8 +2493,8 @@ arch_atomic64_add_negative_acquire(s64 i, atomic64_t *v)
  * @i: integer value to add
  * @v: pointer of type atomic64_t
  *
- * Atomically adds @i to @v and returns true if the result is negative,
- * or false when the result is greater than or equal to zero.
+ * Atomically adds @i to @v and returns @true if the result is negative,
+ * or @false when the result is greater than or equal to zero.
  */
 static __always_inline bool
 arch_atomic64_add_negative_release(s64 i, atomic64_t *v)
@@ -2510,8 +2510,8 @@ arch_atomic64_add_negative_release(s64 i, atomic64_t *v)
  * @i: integer value to add
  * @v: pointer of type atomic64_t
  *
- * Atomically adds @i to @v and returns true if the result is negative,
- * or false when the result is greater than or equal to zero.
+ * Atomically adds @i to @v and returns @true if the result is negative,
+ * or @false when the result is greater than or equal to zero.
  */
 static __always_inline bool
 arch_atomic64_add_negative_relaxed(s64 i, atomic64_t *v)
@@ -2592,7 +2592,7 @@ arch_atomic64_fetch_add_unless(atomic64_t *v, s64 a, s64 u)
  * @u: ...unless v is equal to u.
  *
  * Atomically adds @a to @v, if @v was not already @u.
- * Returns true if the addition was done.
+ * Returns @true if the addition was done.
 */
 static __always_inline bool
 arch_atomic64_add_unless(atomic64_t *v, s64 a, s64 u)
@@ -2608,7 +2608,7 @@ arch_atomic64_add_unless(atomic64_t *v, s64 a, s64 u)
  * @v: pointer of type atomic64_t
  *
  * Atomically increments @v by 1, if @v is non-zero.
- * Returns true if the increment was done.
+ * Returns @true if the increment was done.
 */
 static __always_inline bool
 arch_atomic64_inc_not_zero(atomic64_t *v)
@@ -2668,4 +2668,4 @@ arch_atomic64_dec_if_positive(atomic64_t *v)
 #endif
 
 #endif /* _LINUX_ATOMIC_FALLBACK_H */
-// 201cc01b616875888e0b2c79965c569a89c0edcd
+// e914194a1a82dfbc39d4d1c79ce1f59f64fb37da
diff --git a/scripts/atomic/fallbacks/add_negative b/scripts/atomic/fallbacks/add_negative
index e5980abf5904..c032e8bec6e2 100755
--- a/scripts/atomic/fallbacks/add_negative
+++ b/scripts/atomic/fallbacks/add_negative
@@ -4,8 +4,8 @@ cat <<EOF

From nobody Tue Feb 10 12:39:33 2026
From: "Paul E. McKenney"
To: linux-kernel@vger.kernel.org
Cc: x86@kernel.org, akiyks@gmail.com, linux-doc@vger.kernel.org, kernel-team@meta.com, "Paul E. McKenney", Will Deacon, Peter Zijlstra, Boqun Feng, Mark Rutland
Subject: [PATCH locking/atomic 03/19] locking/atomic: Add kernel-doc and docbook_oldnew variables for headers
Date: Wed, 10 May 2023 11:17:01 -0700
Message-Id: <20230510181717.2200934-3-paulmck@kernel.org>
In-Reply-To: <19135936-06d7-4705-8bc8-bb31c2a478ca@paulmck-laptop>
References: <19135936-06d7-4705-8bc8-bb31c2a478ca@paulmck-laptop>

The andnot, dec, inc, and try_cmpxchg files in the scripts/atomic/fallbacks
directory do not supply kernel-doc headers.  One reason for this is that
there is currently no reasonable way to document either the ordering or
whether the old or the new value is returned.  Therefore, supply
docbook_order and docbook_oldnew sh variables that contain the needed
information.

Signed-off-by: Paul E. McKenney
Cc: Will Deacon
Cc: Peter Zijlstra
Cc: Boqun Feng
Cc: Mark Rutland
---
 scripts/atomic/gen-atomic-fallback.sh     | 17 +++++++++++++++++
 scripts/atomic/gen-atomic-instrumented.sh | 17 +++++++++++++++++
 2 files changed, 34 insertions(+)

diff --git a/scripts/atomic/gen-atomic-fallback.sh b/scripts/atomic/gen-atomic-fallback.sh
index 6e853f0dad8d..697da5f16f98 100755
--- a/scripts/atomic/gen-atomic-fallback.sh
+++ b/scripts/atomic/gen-atomic-fallback.sh
@@ -24,6 +24,23 @@ gen_template_fallback()
 	local params="$(gen_params "${int}" "${atomic}" "$@")"
 	local args="$(gen_args "$@")"
 
+	local docbook_order=full
+	if test "${order}" = "_relaxed"
+	then
+		local docbook_order=no
+	elif test -n "${order}"
+	then
+		docbook_order="`echo $order | sed -e 's/_//'`"
+	fi
+	local docbook_oldnew="new"
+	if test "${pfx}" = "fetch_"
+	then
+		docbook_oldnew="old"
+	elif test "${sfx}" != "_return"
+	then
+		docbook_oldnew="no"
+	fi
+
 	if [ ! -z "${template}" ]; then
 		printf "#ifndef ${atomicname}\n"
 		. ${template}
diff --git a/scripts/atomic/gen-atomic-instrumented.sh b/scripts/atomic/gen-atomic-instrumented.sh
index d9ffd74f73ca..99c72393d362 100755
--- a/scripts/atomic/gen-atomic-instrumented.sh
+++ b/scripts/atomic/gen-atomic-instrumented.sh
@@ -68,6 +68,23 @@ gen_proto_order_variant()
 	local args="$(gen_args "$@")"
 	local retstmt="$(gen_ret_stmt "${meta}")"
 
+	local docbook_order=full
+	if test "${order}" = "_relaxed"
+	then
+		local docbook_order=no
+	elif test -n "${order}"
+	then
+		docbook_order="`echo $order | sed -e 's/_//'`"
+	fi
+	local docbook_oldnew="new"
+	if test "${pfx}" = "fetch_"
+	then
+		docbook_oldnew="old"
+	elif test "${sfx}" != "_return"
+	then
+		docbook_oldnew="no"
+	fi
+
 	cat <<EOF

From nobody Tue Feb 10 12:39:33 2026
From: "Paul E. McKenney"
To: linux-kernel@vger.kernel.org
Cc: x86@kernel.org, akiyks@gmail.com, linux-doc@vger.kernel.org, kernel-team@meta.com, "Paul E. McKenney", Will Deacon, Peter Zijlstra, Boqun Feng, Mark Rutland
Subject: [PATCH locking/atomic 04/19] locking/atomic: Add kernel-doc header for arch_${atomic}_${pfx}inc${sfx}${order}
Date: Wed, 10 May 2023 11:17:02 -0700
Message-Id: <20230510181717.2200934-4-paulmck@kernel.org>
In-Reply-To: <19135936-06d7-4705-8bc8-bb31c2a478ca@paulmck-laptop>
References: <19135936-06d7-4705-8bc8-bb31c2a478ca@paulmck-laptop>

Add kernel-doc header template for arch_${atomic}_${pfx}inc${sfx}${order}
function family.

Signed-off-by: Paul E. McKenney
Cc: Will Deacon
Cc: Peter Zijlstra
Cc: Boqun Feng
Cc: Mark Rutland
---
 include/linux/atomic/atomic-arch-fallback.h | 128 +++++++++++++++++++-
 scripts/atomic/fallbacks/inc                |   7 ++
 2 files changed, 134 insertions(+), 1 deletion(-)

diff --git a/include/linux/atomic/atomic-arch-fallback.h b/include/linux/atomic/atomic-arch-fallback.h
index 606be9d3aa22..e7e83f18d192 100644
--- a/include/linux/atomic/atomic-arch-fallback.h
+++ b/include/linux/atomic/atomic-arch-fallback.h
@@ -440,6 +440,13 @@ arch_atomic_fetch_sub(int i, atomic_t *v)
 #endif /* arch_atomic_fetch_sub_relaxed */
 
 #ifndef arch_atomic_inc
+/**
+ * arch_atomic_inc - Atomic increment
+ * @v: pointer of type atomic_t
+ *
+ * Atomically increment @v with full ordering,
+ * returning no value.
+ */
 static __always_inline void
 arch_atomic_inc(atomic_t *v)
 {
@@ -456,6 +463,13 @@ arch_atomic_inc(atomic_t *v)
 #endif /* arch_atomic_inc_return */
 
 #ifndef arch_atomic_inc_return
+/**
+ * arch_atomic_inc_return - Atomic increment
+ * @v: pointer of type atomic_t
+ *
+ * Atomically increment @v with full ordering,
+ * returning new value.
+ */
 static __always_inline int
 arch_atomic_inc_return(atomic_t *v)
 {
@@ -465,6 +479,13 @@ arch_atomic_inc_return(atomic_t *v)
 #endif
 
 #ifndef arch_atomic_inc_return_acquire
+/**
+ * arch_atomic_inc_return_acquire - Atomic increment
+ * @v: pointer of type atomic_t
+ *
+ * Atomically increment @v with acquire ordering,
+ * returning new value.
+ */
 static __always_inline int
 arch_atomic_inc_return_acquire(atomic_t *v)
 {
@@ -474,6 +495,13 @@ arch_atomic_inc_return_acquire(atomic_t *v)
 #endif
 
 #ifndef arch_atomic_inc_return_release
+/**
+ * arch_atomic_inc_return_release - Atomic increment
+ * @v: pointer of type atomic_t
+ *
+ * Atomically increment @v with release ordering,
+ * returning new value.
+ */
 static __always_inline int
 arch_atomic_inc_return_release(atomic_t *v)
 {
@@ -483,6 +511,13 @@ arch_atomic_inc_return_release(atomic_t *v)
 #endif
 
 #ifndef arch_atomic_inc_return_relaxed
+/**
+ * arch_atomic_inc_return_relaxed - Atomic increment
+ * @v: pointer of type atomic_t
+ *
+ * Atomically increment @v with no ordering,
+ * returning new value.
+ */
 static __always_inline int
 arch_atomic_inc_return_relaxed(atomic_t *v)
 {
@@ -537,6 +572,13 @@ arch_atomic_inc_return(atomic_t *v)
 #endif /* arch_atomic_fetch_inc */
 
 #ifndef arch_atomic_fetch_inc
+/**
+ * arch_atomic_fetch_inc - Atomic increment
+ * @v: pointer of type atomic_t
+ *
+ * Atomically increment @v with full ordering,
+ * returning old value.
+ */
 static __always_inline int
 arch_atomic_fetch_inc(atomic_t *v)
 {
@@ -546,6 +588,13 @@ arch_atomic_fetch_inc(atomic_t *v)
 #endif
 
 #ifndef arch_atomic_fetch_inc_acquire
+/**
+ * arch_atomic_fetch_inc_acquire - Atomic increment
+ * @v: pointer of type atomic_t
+ *
+ * Atomically increment @v with acquire ordering,
+ * returning old value.
+ */
 static __always_inline int
 arch_atomic_fetch_inc_acquire(atomic_t *v)
 {
@@ -555,6 +604,13 @@ arch_atomic_fetch_inc_acquire(atomic_t *v)
 #endif
 
 #ifndef arch_atomic_fetch_inc_release
+/**
+ * arch_atomic_fetch_inc_release - Atomic increment
+ * @v: pointer of type atomic_t
+ *
+ * Atomically increment @v with release ordering,
+ * returning old value.
+ */
 static __always_inline int
 arch_atomic_fetch_inc_release(atomic_t *v)
 {
@@ -564,6 +620,13 @@ arch_atomic_fetch_inc_release(atomic_t *v)
 #endif
 
 #ifndef arch_atomic_fetch_inc_relaxed
+/**
+ * arch_atomic_fetch_inc_relaxed - Atomic increment
+ * @v: pointer of type atomic_t
+ *
+ * Atomically increment @v with no ordering,
+ * returning old value.
+ */
 static __always_inline int
 arch_atomic_fetch_inc_relaxed(atomic_t *v)
 {
@@ -1656,6 +1719,13 @@ arch_atomic64_fetch_sub(s64 i, atomic64_t *v)
 #endif /* arch_atomic64_fetch_sub_relaxed */
 
 #ifndef arch_atomic64_inc
+/**
+ * arch_atomic64_inc - Atomic increment
+ * @v: pointer of type atomic64_t
+ *
+ * Atomically increment @v with full ordering,
+ * returning no value.
+ */
 static __always_inline void
 arch_atomic64_inc(atomic64_t *v)
 {
@@ -1672,6 +1742,13 @@ arch_atomic64_inc(atomic64_t *v)
 #endif /* arch_atomic64_inc_return */
 
 #ifndef arch_atomic64_inc_return
+/**
+ * arch_atomic64_inc_return - Atomic increment
+ * @v: pointer of type atomic64_t
+ *
+ * Atomically increment @v with full ordering,
+ * returning new value.
+ */
 static __always_inline s64
 arch_atomic64_inc_return(atomic64_t *v)
 {
@@ -1681,6 +1758,13 @@ arch_atomic64_inc_return(atomic64_t *v)
 #endif
 
 #ifndef arch_atomic64_inc_return_acquire
+/**
+ * arch_atomic64_inc_return_acquire - Atomic increment
+ * @v: pointer of type atomic64_t
+ *
+ * Atomically increment @v with acquire ordering,
+ * returning new value.
+ */
 static __always_inline s64
 arch_atomic64_inc_return_acquire(atomic64_t *v)
 {
@@ -1690,6 +1774,13 @@ arch_atomic64_inc_return_acquire(atomic64_t *v)
 #endif
 
 #ifndef arch_atomic64_inc_return_release
+/**
+ * arch_atomic64_inc_return_release - Atomic increment
+ * @v: pointer of type atomic64_t
+ *
+ * Atomically increment @v with release ordering,
+ * returning new value.
+ */
 static __always_inline s64
 arch_atomic64_inc_return_release(atomic64_t *v)
 {
@@ -1699,6 +1790,13 @@ arch_atomic64_inc_return_release(atomic64_t *v)
 #endif
 
 #ifndef arch_atomic64_inc_return_relaxed
+/**
+ * arch_atomic64_inc_return_relaxed - Atomic increment
+ * @v: pointer of type atomic64_t
+ *
+ * Atomically increment @v with no ordering,
+ * returning new value.
+ */
 static __always_inline s64
 arch_atomic64_inc_return_relaxed(atomic64_t *v)
 {
@@ -1753,6 +1851,13 @@ arch_atomic64_inc_return(atomic64_t *v)
 #endif /* arch_atomic64_fetch_inc */
 
 #ifndef arch_atomic64_fetch_inc
+/**
+ * arch_atomic64_fetch_inc - Atomic increment
+ * @v: pointer of type atomic64_t
+ *
+ * Atomically increment @v with full ordering,
+ * returning old value.
+ */
 static __always_inline s64
 arch_atomic64_fetch_inc(atomic64_t *v)
 {
@@ -1762,6 +1867,13 @@ arch_atomic64_fetch_inc(atomic64_t *v)
 #endif
 
 #ifndef arch_atomic64_fetch_inc_acquire
+/**
+ * arch_atomic64_fetch_inc_acquire - Atomic increment
+ * @v: pointer of type atomic64_t
+ *
+ * Atomically increment @v with acquire ordering,
+ * returning old value.
+ */
 static __always_inline s64
 arch_atomic64_fetch_inc_acquire(atomic64_t *v)
 {
@@ -1771,6 +1883,13 @@ arch_atomic64_fetch_inc_acquire(atomic64_t *v)
 #endif
 
 #ifndef arch_atomic64_fetch_inc_release
+/**
+ * arch_atomic64_fetch_inc_release - Atomic increment
+ * @v: pointer of type atomic64_t
+ *
+ * Atomically increment @v with release ordering,
+ * returning old value.
+ */
 static __always_inline s64
 arch_atomic64_fetch_inc_release(atomic64_t *v)
 {
@@ -1780,6 +1899,13 @@ arch_atomic64_fetch_inc_release(atomic64_t *v)
 #endif
 
 #ifndef arch_atomic64_fetch_inc_relaxed
+/**
+ * arch_atomic64_fetch_inc_relaxed - Atomic increment
+ * @v: pointer of type atomic64_t
+ *
+ * Atomically increment @v with no ordering,
+ * returning old value.
+ */
 static __always_inline s64
 arch_atomic64_fetch_inc_relaxed(atomic64_t *v)
 {
@@ -2668,4 +2794,4 @@ arch_atomic64_dec_if_positive(atomic64_t *v)
 #endif
 
 #endif /* _LINUX_ATOMIC_FALLBACK_H */
-// e914194a1a82dfbc39d4d1c79ce1f59f64fb37da
+// 17cefb0ff9b450685d4072202d4a1c309b0606c2
diff --git a/scripts/atomic/fallbacks/inc b/scripts/atomic/fallbacks/inc
index 3c2c3739169e..3f2c0730cd0c 100755
--- a/scripts/atomic/fallbacks/inc
+++ b/scripts/atomic/fallbacks/inc
@@ -1,4 +1,11 @@
 cat <<EOF

From nobody Tue Feb 10 12:39:33 2026
From nobody Tue Feb 10 12:39:33 2026
From: "Paul E. McKenney"
To: linux-kernel@vger.kernel.org
Subject: [PATCH locking/atomic 05/19] locking/atomic: Add kernel-doc header for arch_${atomic}_${pfx}dec${sfx}${order}
Date: Wed, 10 May 2023 11:17:03 -0700
Message-Id: <20230510181717.2200934-5-paulmck@kernel.org>
In-Reply-To: <19135936-06d7-4705-8bc8-bb31c2a478ca@paulmck-laptop>

Add kernel-doc header template for the arch_${atomic}_${pfx}dec${sfx}${order} function family.

Signed-off-by: Paul E. McKenney
Cc: Will Deacon
Cc: Peter Zijlstra
Cc: Boqun Feng
Cc: Mark Rutland
---
 include/linux/atomic/atomic-arch-fallback.h | 128 +++++++++++++++++++-
 scripts/atomic/fallbacks/dec                |   7 ++
 2 files changed, 134 insertions(+), 1 deletion(-)

diff --git a/include/linux/atomic/atomic-arch-fallback.h b/include/linux/atomic/atomic-arch-fallback.h
index e7e83f18d192..41e43e8dff8d 100644
--- a/include/linux/atomic/atomic-arch-fallback.h
+++ b/include/linux/atomic/atomic-arch-fallback.h
@@ -674,6 +674,13 @@ arch_atomic_fetch_inc(atomic_t *v)
 #endif /* arch_atomic_fetch_inc_relaxed */
 
 #ifndef arch_atomic_dec
+/**
+ * arch_atomic_dec - Atomic decrement
+ * @v: pointer of type atomic_t
+ *
+ * Atomically decrement @v with full ordering,
+ * returning no value.
+ */
 static __always_inline void
 arch_atomic_dec(atomic_t *v)
 {
@@ -690,6 +697,13 @@ arch_atomic_dec(atomic_t *v)
 #endif /* arch_atomic_dec_return */
 
 #ifndef arch_atomic_dec_return
+/**
+ * arch_atomic_dec_return - Atomic decrement
+ * @v: pointer of type atomic_t
+ *
+ * Atomically decrement @v with full ordering,
+ * returning new value.
+ */
 static __always_inline int
 arch_atomic_dec_return(atomic_t *v)
 {
@@ -699,6 +713,13 @@ arch_atomic_dec_return(atomic_t *v)
 #endif
 
 #ifndef arch_atomic_dec_return_acquire
+/**
+ * arch_atomic_dec_return_acquire - Atomic decrement
+ * @v: pointer of type atomic_t
+ *
+ * Atomically decrement @v with acquire ordering,
+ * returning new value.
+ */
 static __always_inline int
 arch_atomic_dec_return_acquire(atomic_t *v)
 {
@@ -708,6 +729,13 @@ arch_atomic_dec_return_acquire(atomic_t *v)
 #endif
 
 #ifndef arch_atomic_dec_return_release
+/**
+ * arch_atomic_dec_return_release - Atomic decrement
+ * @v: pointer of type atomic_t
+ *
+ * Atomically decrement @v with release ordering,
+ * returning new value.
+ */
 static __always_inline int
 arch_atomic_dec_return_release(atomic_t *v)
 {
@@ -717,6 +745,13 @@ arch_atomic_dec_return_release(atomic_t *v)
 #endif
 
 #ifndef arch_atomic_dec_return_relaxed
+/**
+ * arch_atomic_dec_return_relaxed - Atomic decrement
+ * @v: pointer of type atomic_t
+ *
+ * Atomically decrement @v with no ordering,
+ * returning new value.
+ */
 static __always_inline int
 arch_atomic_dec_return_relaxed(atomic_t *v)
 {
@@ -771,6 +806,13 @@ arch_atomic_dec_return(atomic_t *v)
 #endif /* arch_atomic_fetch_dec */
 
 #ifndef arch_atomic_fetch_dec
+/**
+ * arch_atomic_fetch_dec - Atomic decrement
+ * @v: pointer of type atomic_t
+ *
+ * Atomically decrement @v with full ordering,
+ * returning old value.
+ */
 static __always_inline int
 arch_atomic_fetch_dec(atomic_t *v)
 {
@@ -780,6 +822,13 @@ arch_atomic_fetch_dec(atomic_t *v)
 #endif
 
 #ifndef arch_atomic_fetch_dec_acquire
+/**
+ * arch_atomic_fetch_dec_acquire - Atomic decrement
+ * @v: pointer of type atomic_t
+ *
+ * Atomically decrement @v with acquire ordering,
+ * returning old value.
+ */
 static __always_inline int
 arch_atomic_fetch_dec_acquire(atomic_t *v)
 {
@@ -789,6 +838,13 @@ arch_atomic_fetch_dec_acquire(atomic_t *v)
 #endif
 
 #ifndef arch_atomic_fetch_dec_release
+/**
+ * arch_atomic_fetch_dec_release - Atomic decrement
+ * @v: pointer of type atomic_t
+ *
+ * Atomically decrement @v with release ordering,
+ * returning old value.
+ */
 static __always_inline int
 arch_atomic_fetch_dec_release(atomic_t *v)
 {
@@ -798,6 +854,13 @@ arch_atomic_fetch_dec_release(atomic_t *v)
 #endif
 
 #ifndef arch_atomic_fetch_dec_relaxed
+/**
+ * arch_atomic_fetch_dec_relaxed - Atomic decrement
+ * @v: pointer of type atomic_t
+ *
+ * Atomically decrement @v with no ordering,
+ * returning old value.
+ */
 static __always_inline int
 arch_atomic_fetch_dec_relaxed(atomic_t *v)
 {
@@ -1953,6 +2016,13 @@ arch_atomic64_fetch_inc(atomic64_t *v)
 #endif /* arch_atomic64_fetch_inc_relaxed */
 
 #ifndef arch_atomic64_dec
+/**
+ * arch_atomic64_dec - Atomic decrement
+ * @v: pointer of type atomic64_t
+ *
+ * Atomically decrement @v with full ordering,
+ * returning no value.
+ */
 static __always_inline void
 arch_atomic64_dec(atomic64_t *v)
 {
@@ -1969,6 +2039,13 @@ arch_atomic64_dec(atomic64_t *v)
 #endif /* arch_atomic64_dec_return */
 
 #ifndef arch_atomic64_dec_return
+/**
+ * arch_atomic64_dec_return - Atomic decrement
+ * @v: pointer of type atomic64_t
+ *
+ * Atomically decrement @v with full ordering,
+ * returning new value.
+ */
 static __always_inline s64
 arch_atomic64_dec_return(atomic64_t *v)
 {
@@ -1978,6 +2055,13 @@ arch_atomic64_dec_return(atomic64_t *v)
 #endif
 
 #ifndef arch_atomic64_dec_return_acquire
+/**
+ * arch_atomic64_dec_return_acquire - Atomic decrement
+ * @v: pointer of type atomic64_t
+ *
+ * Atomically decrement @v with acquire ordering,
+ * returning new value.
+ */
 static __always_inline s64
 arch_atomic64_dec_return_acquire(atomic64_t *v)
 {
@@ -1987,6 +2071,13 @@ arch_atomic64_dec_return_acquire(atomic64_t *v)
 #endif
 
 #ifndef arch_atomic64_dec_return_release
+/**
+ * arch_atomic64_dec_return_release - Atomic decrement
+ * @v: pointer of type atomic64_t
+ *
+ * Atomically decrement @v with release ordering,
+ * returning new value.
+ */
 static __always_inline s64
 arch_atomic64_dec_return_release(atomic64_t *v)
 {
@@ -1996,6 +2087,13 @@ arch_atomic64_dec_return_release(atomic64_t *v)
 #endif
 
 #ifndef arch_atomic64_dec_return_relaxed
+/**
+ * arch_atomic64_dec_return_relaxed - Atomic decrement
+ * @v: pointer of type atomic64_t
+ *
+ * Atomically decrement @v with no ordering,
+ * returning new value.
+ */
 static __always_inline s64
 arch_atomic64_dec_return_relaxed(atomic64_t *v)
 {
@@ -2050,6 +2148,13 @@ arch_atomic64_dec_return(atomic64_t *v)
 #endif /* arch_atomic64_fetch_dec */
 
 #ifndef arch_atomic64_fetch_dec
+/**
+ * arch_atomic64_fetch_dec - Atomic decrement
+ * @v: pointer of type atomic64_t
+ *
+ * Atomically decrement @v with full ordering,
+ * returning old value.
+ */
 static __always_inline s64
 arch_atomic64_fetch_dec(atomic64_t *v)
 {
@@ -2059,6 +2164,13 @@ arch_atomic64_fetch_dec(atomic64_t *v)
 #endif
 
 #ifndef arch_atomic64_fetch_dec_acquire
+/**
+ * arch_atomic64_fetch_dec_acquire - Atomic decrement
+ * @v: pointer of type atomic64_t
+ *
+ * Atomically decrement @v with acquire ordering,
+ * returning old value.
+ */
 static __always_inline s64
 arch_atomic64_fetch_dec_acquire(atomic64_t *v)
 {
@@ -2068,6 +2180,13 @@ arch_atomic64_fetch_dec_acquire(atomic64_t *v)
 #endif
 
 #ifndef arch_atomic64_fetch_dec_release
+/**
+ * arch_atomic64_fetch_dec_release - Atomic decrement
+ * @v: pointer of type atomic64_t
+ *
+ * Atomically decrement @v with release ordering,
+ * returning old value.
+ */
 static __always_inline s64
 arch_atomic64_fetch_dec_release(atomic64_t *v)
 {
@@ -2077,6 +2196,13 @@ arch_atomic64_fetch_dec_release(atomic64_t *v)
 #endif
 
 #ifndef arch_atomic64_fetch_dec_relaxed
+/**
+ * arch_atomic64_fetch_dec_relaxed - Atomic decrement
+ * @v: pointer of type atomic64_t
+ *
+ * Atomically decrement @v with no ordering,
+ * returning old value.
+ */
 static __always_inline s64
 arch_atomic64_fetch_dec_relaxed(atomic64_t *v)
 {
@@ -2794,4 +2920,4 @@ arch_atomic64_dec_if_positive(atomic64_t *v)
 #endif
 
 #endif /* _LINUX_ATOMIC_FALLBACK_H */
-// 17cefb0ff9b450685d4072202d4a1c309b0606c2
+// 1a1d30491494653253bfe3b5d2e2c6583cb57473

diff --git a/scripts/atomic/fallbacks/dec b/scripts/atomic/fallbacks/dec
index 8c144c818e9e..e99c8edd36a3 100755
--- a/scripts/atomic/fallbacks/dec
+++ b/scripts/atomic/fallbacks/dec
@@ -1,4 +1,11 @@
 cat <
From nobody Tue Feb 10 12:39:33 2026
From: "Paul E. McKenney"
To: linux-kernel@vger.kernel.org
Subject: [PATCH locking/atomic 06/19] locking/atomic: Add kernel-doc header for arch_${atomic}_${pfx}andnot${sfx}${order}
Date: Wed, 10 May 2023 11:17:04 -0700
Message-Id: <20230510181717.2200934-6-paulmck@kernel.org>
In-Reply-To: <19135936-06d7-4705-8bc8-bb31c2a478ca@paulmck-laptop>

Add kernel-doc header template for the arch_${atomic}_${pfx}andnot${sfx}${order} function family.

[ paulmck: Apply feedback from Akira Yokosawa. ]

Signed-off-by: Paul E. McKenney
Cc: Will Deacon
Cc: Peter Zijlstra
Cc: Boqun Feng
Cc: Mark Rutland
---
 include/linux/atomic/atomic-arch-fallback.h | 82 ++++++++++++++++++++-
 scripts/atomic/fallbacks/andnot             |  8 ++
 2 files changed, 89 insertions(+), 1 deletion(-)

diff --git a/include/linux/atomic/atomic-arch-fallback.h b/include/linux/atomic/atomic-arch-fallback.h
index 41e43e8dff8d..d5ff29a7128d 100644
--- a/include/linux/atomic/atomic-arch-fallback.h
+++ b/include/linux/atomic/atomic-arch-fallback.h
@@ -950,6 +950,14 @@ arch_atomic_fetch_and(int i, atomic_t *v)
 #endif /* arch_atomic_fetch_and_relaxed */
 
 #ifndef arch_atomic_andnot
+/**
+ * arch_atomic_andnot - Atomic and-not
+ * @i: the quantity to and-not with *@v
+ * @v: pointer of type atomic_t
+ *
+ * Atomically and-not @i with @v using full ordering,
+ * returning no value.
+ */
 static __always_inline void
 arch_atomic_andnot(int i, atomic_t *v)
 {
@@ -966,6 +974,14 @@ arch_atomic_andnot(int i, atomic_t *v)
 #endif /* arch_atomic_fetch_andnot */
 
 #ifndef arch_atomic_fetch_andnot
+/**
+ * arch_atomic_fetch_andnot - Atomic and-not
+ * @i: the quantity to and-not with *@v
+ * @v: pointer of type atomic_t
+ *
+ * Atomically and-not @i with @v using full ordering,
+ * returning old value.
+ */
 static __always_inline int
 arch_atomic_fetch_andnot(int i, atomic_t *v)
 {
@@ -975,6 +991,14 @@ arch_atomic_fetch_andnot(int i, atomic_t *v)
 #endif
 
 #ifndef arch_atomic_fetch_andnot_acquire
+/**
+ * arch_atomic_fetch_andnot_acquire - Atomic and-not
+ * @i: the quantity to and-not with *@v
+ * @v: pointer of type atomic_t
+ *
+ * Atomically and-not @i with @v using acquire ordering,
+ * returning old value.
+ */
 static __always_inline int
 arch_atomic_fetch_andnot_acquire(int i, atomic_t *v)
 {
@@ -984,6 +1008,14 @@ arch_atomic_fetch_andnot_acquire(int i, atomic_t *v)
 #endif
 
 #ifndef arch_atomic_fetch_andnot_release
+/**
+ * arch_atomic_fetch_andnot_release - Atomic and-not
+ * @i: the quantity to and-not with *@v
+ * @v: pointer of type atomic_t
+ *
+ * Atomically and-not @i with @v using release ordering,
+ * returning old value.
+ */
 static __always_inline int
 arch_atomic_fetch_andnot_release(int i, atomic_t *v)
 {
@@ -993,6 +1025,14 @@ arch_atomic_fetch_andnot_release(int i, atomic_t *v)
 #endif
 
 #ifndef arch_atomic_fetch_andnot_relaxed
+/**
+ * arch_atomic_fetch_andnot_relaxed - Atomic and-not
+ * @i: the quantity to and-not with *@v
+ * @v: pointer of type atomic_t
+ *
+ * Atomically and-not @i with @v using no ordering,
+ * returning old value.
+ */
 static __always_inline int
 arch_atomic_fetch_andnot_relaxed(int i, atomic_t *v)
 {
@@ -2292,6 +2332,14 @@ arch_atomic64_fetch_and(s64 i, atomic64_t *v)
 #endif /* arch_atomic64_fetch_and_relaxed */
 
 #ifndef arch_atomic64_andnot
+/**
+ * arch_atomic64_andnot - Atomic and-not
+ * @i: the quantity to and-not with *@v
+ * @v: pointer of type atomic64_t
+ *
+ * Atomically and-not @i with @v using full ordering,
+ * returning no value.
+ */
 static __always_inline void
 arch_atomic64_andnot(s64 i, atomic64_t *v)
 {
@@ -2308,6 +2356,14 @@ arch_atomic64_andnot(s64 i, atomic64_t *v)
 #endif /* arch_atomic64_fetch_andnot */
 
 #ifndef arch_atomic64_fetch_andnot
+/**
+ * arch_atomic64_fetch_andnot - Atomic and-not
+ * @i: the quantity to and-not with *@v
+ * @v: pointer of type atomic64_t
+ *
+ * Atomically and-not @i with @v using full ordering,
+ * returning old value.
+ */
 static __always_inline s64
 arch_atomic64_fetch_andnot(s64 i, atomic64_t *v)
 {
@@ -2317,6 +2373,14 @@ arch_atomic64_fetch_andnot(s64 i, atomic64_t *v)
 #endif
 
 #ifndef arch_atomic64_fetch_andnot_acquire
+/**
+ * arch_atomic64_fetch_andnot_acquire - Atomic and-not
+ * @i: the quantity to and-not with *@v
+ * @v: pointer of type atomic64_t
+ *
+ * Atomically and-not @i with @v using acquire ordering,
+ * returning old value.
+ */
 static __always_inline s64
 arch_atomic64_fetch_andnot_acquire(s64 i, atomic64_t *v)
 {
@@ -2326,6 +2390,14 @@ arch_atomic64_fetch_andnot_acquire(s64 i, atomic64_t *v)
 #endif
 
 #ifndef arch_atomic64_fetch_andnot_release
+/**
+ * arch_atomic64_fetch_andnot_release - Atomic and-not
+ * @i: the quantity to and-not with *@v
+ * @v: pointer of type atomic64_t
+ *
+ * Atomically and-not @i with @v using release ordering,
+ * returning old value.
+ */
 static __always_inline s64
 arch_atomic64_fetch_andnot_release(s64 i, atomic64_t *v)
 {
@@ -2335,6 +2407,14 @@ arch_atomic64_fetch_andnot_release(s64 i, atomic64_t *v)
 #endif
 
 #ifndef arch_atomic64_fetch_andnot_relaxed
+/**
+ * arch_atomic64_fetch_andnot_relaxed - Atomic and-not
+ * @i: the quantity to and-not with *@v
+ * @v: pointer of type atomic64_t
+ *
+ * Atomically and-not @i with @v using no ordering,
+ * returning old value.
+ */
 static __always_inline s64
 arch_atomic64_fetch_andnot_relaxed(s64 i, atomic64_t *v)
 {
@@ -2920,4 +3000,4 @@ arch_atomic64_dec_if_positive(atomic64_t *v)
 #endif
 
 #endif /* _LINUX_ATOMIC_FALLBACK_H */
-// 1a1d30491494653253bfe3b5d2e2c6583cb57473
+// e403f06ce98fe72ae0698e8f2c78f8a45894e465

diff --git a/scripts/atomic/fallbacks/andnot b/scripts/atomic/fallbacks/andnot
index 5a42f54a3595..9fbc0ce75a7c 100755
--- a/scripts/atomic/fallbacks/andnot
+++ b/scripts/atomic/fallbacks/andnot
@@ -1,4 +1,12 @@
 cat <
From nobody Tue Feb 10 12:39:33 2026
From: "Paul E. McKenney"
To: linux-kernel@vger.kernel.org
Subject: [PATCH locking/atomic 07/19] locking/atomic: Add kernel-doc header for arch_${atomic}_try_cmpxchg${order}
Date: Wed, 10 May 2023 11:17:05 -0700
Message-Id: <20230510181717.2200934-7-paulmck@kernel.org>
In-Reply-To: <19135936-06d7-4705-8bc8-bb31c2a478ca@paulmck-laptop>

Add kernel-doc header template for the arch_${atomic}_try_cmpxchg${order} function family.

Signed-off-by: Paul E. McKenney
Cc: Will Deacon
Cc: Peter Zijlstra
Cc: Boqun Feng
Cc: Mark Rutland
---
 include/linux/atomic/atomic-arch-fallback.h | 82 ++++++++++++++++++++-
 scripts/atomic/fallbacks/try_cmpxchg        | 10 +++
 2 files changed, 91 insertions(+), 1 deletion(-)

diff --git a/include/linux/atomic/atomic-arch-fallback.h b/include/linux/atomic/atomic-arch-fallback.h
index d5ff29a7128d..ed72d94346e9 100644
--- a/include/linux/atomic/atomic-arch-fallback.h
+++ b/include/linux/atomic/atomic-arch-fallback.h
@@ -1255,6 +1255,16 @@ arch_atomic_cmpxchg(atomic_t *v, int old, int new)
 #endif /* arch_atomic_try_cmpxchg */
 
 #ifndef arch_atomic_try_cmpxchg
+/**
+ * arch_atomic_try_cmpxchg - Atomic cmpxchg with bool return value
+ * @v: pointer of type atomic_t
+ * @old: desired old value to match
+ * @new: new value to put in
+ *
+ * Atomically compares *@old to *@v, and if equal, stores @new to *@v,
+ * providing full ordering.
+ * Returns @true if the cmpxchg operation succeeded, and @false otherwise.
+ */
 static __always_inline bool
 arch_atomic_try_cmpxchg(atomic_t *v, int *old, int new)
 {
@@ -1268,6 +1278,16 @@ arch_atomic_try_cmpxchg(atomic_t *v, int *old, int new)
 #endif
 
 #ifndef arch_atomic_try_cmpxchg_acquire
+/**
+ * arch_atomic_try_cmpxchg_acquire - Atomic cmpxchg with bool return value
+ * @v: pointer of type atomic_t
+ * @old: desired old value to match
+ * @new: new value to put in
+ *
+ * Atomically compares *@old to *@v, and if equal, stores @new to *@v,
+ * providing acquire ordering.
+ * Returns @true if the cmpxchg operation succeeded, and @false otherwise.
+ */
 static __always_inline bool
 arch_atomic_try_cmpxchg_acquire(atomic_t *v, int *old, int new)
 {
@@ -1281,6 +1301,16 @@ arch_atomic_try_cmpxchg_acquire(atomic_t *v, int *old, int new)
 #endif
 
 #ifndef arch_atomic_try_cmpxchg_release
+/**
+ * arch_atomic_try_cmpxchg_release - Atomic cmpxchg with bool return value
+ * @v: pointer of type atomic_t
+ * @old: desired old value to match
+ * @new: new value to put in
+ *
+ * Atomically compares *@old to *@v, and if equal, stores @new to *@v,
+ * providing release ordering.
+ * Returns @true if the cmpxchg operation succeeded, and @false otherwise.
+ */
 static __always_inline bool
 arch_atomic_try_cmpxchg_release(atomic_t *v, int *old, int new)
 {
@@ -1294,6 +1324,16 @@ arch_atomic_try_cmpxchg_release(atomic_t *v, int *old, int new)
 #endif
 
 #ifndef arch_atomic_try_cmpxchg_relaxed
+/**
+ * arch_atomic_try_cmpxchg_relaxed - Atomic cmpxchg with bool return value
+ * @v: pointer of type atomic_t
+ * @old: desired old value to match
+ * @new: new value to put in
+ *
+ * Atomically compares *@old to *@v, and if equal, stores @new to *@v,
+ * providing no ordering.
+ * Returns @true if the cmpxchg operation succeeded, and @false otherwise.
+ */
 static __always_inline bool
 arch_atomic_try_cmpxchg_relaxed(atomic_t *v, int *old, int new)
 {
@@ -2637,6 +2677,16 @@ arch_atomic64_cmpxchg(atomic64_t *v, s64 old, s64 new)
 #endif /* arch_atomic64_try_cmpxchg */
 
 #ifndef arch_atomic64_try_cmpxchg
+/**
+ * arch_atomic64_try_cmpxchg - Atomic cmpxchg with bool return value
+ * @v: pointer of type atomic64_t
+ * @old: desired old value to match
+ * @new: new value to put in
+ *
+ * Atomically compares *@old to *@v, and if equal, stores @new to *@v,
+ * providing full ordering.
+ * Returns @true if the cmpxchg operation succeeded, and @false otherwise.
+ */
 static __always_inline bool
 arch_atomic64_try_cmpxchg(atomic64_t *v, s64 *old, s64 new)
 {
@@ -2650,6 +2700,16 @@ arch_atomic64_try_cmpxchg(atomic64_t *v, s64 *old, s64 new)
 #endif
 
 #ifndef arch_atomic64_try_cmpxchg_acquire
+/**
+ * arch_atomic64_try_cmpxchg_acquire - Atomic cmpxchg with bool return value
+ * @v: pointer of type atomic64_t
+ * @old: desired old value to match
+ * @new: new value to put in
+ *
+ * Atomically compares *@old to *@v, and if equal, stores @new to *@v,
+ * providing acquire ordering.
+ * Returns @true if the cmpxchg operation succeeded, and @false otherwise.
+ */
 static __always_inline bool
 arch_atomic64_try_cmpxchg_acquire(atomic64_t *v, s64 *old, s64 new)
 {
@@ -2663,6 +2723,16 @@ arch_atomic64_try_cmpxchg_acquire(atomic64_t *v, s64 *old, s64 new)
 #endif
 
 #ifndef arch_atomic64_try_cmpxchg_release
+/**
+ * arch_atomic64_try_cmpxchg_release - Atomic cmpxchg with bool return value
+ * @v: pointer of type atomic64_t
+ * @old: desired old value to match
+ * @new: new value to put in
+ *
+ * Atomically compares *@old to *@v, and if equal, stores @new to *@v,
+ * providing release ordering.
+ * Returns @true if the cmpxchg operation succeeded, and @false otherwise.
+ */
 static __always_inline bool
 arch_atomic64_try_cmpxchg_release(atomic64_t *v, s64 *old, s64 new)
 {
@@ -2676,6 +2746,16 @@ arch_atomic64_try_cmpxchg_release(atomic64_t *v, s64 *old, s64 new)
 #endif
 
 #ifndef arch_atomic64_try_cmpxchg_relaxed
+/**
+ * arch_atomic64_try_cmpxchg_relaxed - Atomic cmpxchg with bool return value
+ * @v: pointer of type atomic64_t
+ * @old: desired old value to match
+ * @new: new value to put in
+ *
+ * Atomically compares *@old to *@v, and if equal, stores @new to *@v,
+ * providing no ordering.
+ * Returns @true if the cmpxchg operation succeeded, and @false otherwise.
+ */
 static __always_inline bool
 arch_atomic64_try_cmpxchg_relaxed(atomic64_t *v, s64 *old, s64 new)
 {
@@ -3000,4 +3080,4 @@ arch_atomic64_dec_if_positive(atomic64_t *v)
 #endif
 
 #endif /* _LINUX_ATOMIC_FALLBACK_H */
-// e403f06ce98fe72ae0698e8f2c78f8a45894e465
+// 3b29d5595f48f921507f19bc794c91aecb782ad3

diff --git a/scripts/atomic/fallbacks/try_cmpxchg b/scripts/atomic/fallbacks/try_cmpxchg
index 890f850ede37..baf7412f9bf4 100755
--- a/scripts/atomic/fallbacks/try_cmpxchg
+++ b/scripts/atomic/fallbacks/try_cmpxchg
@@ -1,4 +1,14 @@
 cat <
From nobody Tue Feb 10 12:39:33 2026
From: "Paul E. McKenney"
To: linux-kernel@vger.kernel.org
Subject: [PATCH locking/atomic 08/19] locking/atomic: Add kernel-doc header for arch_${atomic}_dec_if_positive
Date: Wed, 10 May 2023 11:17:06 -0700
Message-Id: <20230510181717.2200934-8-paulmck@kernel.org>
In-Reply-To: <19135936-06d7-4705-8bc8-bb31c2a478ca@paulmck-laptop>

Add kernel-doc header template for the arch_${atomic}_dec_if_positive function family.

Signed-off-by: Paul E. McKenney
Cc: Will Deacon
Cc: Peter Zijlstra
Cc: Boqun Feng
Cc: Mark Rutland
---
 include/linux/atomic/atomic-arch-fallback.h | 22 ++++++++++++++++++++-
 scripts/atomic/fallbacks/dec_if_positive    | 10 ++++++++++
 2 files changed, 31 insertions(+), 1 deletion(-)

diff --git a/include/linux/atomic/atomic-arch-fallback.h b/include/linux/atomic/atomic-arch-fallback.h
index ed72d94346e9..4d4d94925cb0 100644
--- a/include/linux/atomic/atomic-arch-fallback.h
+++ b/include/linux/atomic/atomic-arch-fallback.h
@@ -1641,6 +1641,16 @@ arch_atomic_dec_unless_positive(atomic_t *v)
 #endif
 
 #ifndef arch_atomic_dec_if_positive
+/**
+ * arch_atomic_dec_if_positive - Atomic decrement if old value is positive
+ * @v: pointer of type atomic_t
+ *
+ * Atomically decrement @v, but only if the original value is greater
+ * than zero, returning new value.  Note that the desired new value will
+ * be returned even if the decrement did not occur, so that if the old
+ * value is -3, then @v will not be decremented, but -4 will be returned.
+ * As a result, if the return value is non-negative, then the value was
+ * in fact decremented.
+ */
 static __always_inline int
 arch_atomic_dec_if_positive(atomic_t *v)
 {
@@ -3063,6 +3073,16 @@ arch_atomic64_dec_unless_positive(atomic64_t *v)
 #endif
 
 #ifndef arch_atomic64_dec_if_positive
+/**
+ * arch_atomic64_dec_if_positive - Atomic decrement if old value is positive
+ * @v: pointer of type atomic64_t
+ *
+ * Atomically decrement @v, but only if the original value is greater
+ * than zero, returning new value.  Note that the desired new value will
+ * be returned even if the decrement did not occur, so that if the old
+ * value is -3, then @v will not be decremented, but -4 will be returned.
+ * As a result, if the return value is non-negative, then the value was
+ * in fact decremented.
+ */
 static __always_inline s64
 arch_atomic64_dec_if_positive(atomic64_t *v)
 {
@@ -3080,4 +3100,4 @@ arch_atomic64_dec_if_positive(atomic64_t *v)
 #endif
 
 #endif /* _LINUX_ATOMIC_FALLBACK_H */
-// 3b29d5595f48f921507f19bc794c91aecb782ad3
+// c7041896e7e66a52d8005ba021f3b3b05f99bcb3

diff --git a/scripts/atomic/fallbacks/dec_if_positive b/scripts/atomic/fallbacks/dec_if_positive
index 86bdced3428d..dedbdbc1487d 100755
--- a/scripts/atomic/fallbacks/dec_if_positive
+++ b/scripts/atomic/fallbacks/dec_if_positive
@@ -1,4 +1,14 @@
 cat <
From nobody Tue Feb 10 12:39:33 2026
From: "Paul E. McKenney"
To: linux-kernel@vger.kernel.org
Subject: [PATCH locking/atomic 09/19] locking/atomic: Add kernel-doc header for arch_${atomic}_dec_unless_positive
Date: Wed, 10 May 2023 11:17:07 -0700
Message-Id: <20230510181717.2200934-9-paulmck@kernel.org>
In-Reply-To: <19135936-06d7-4705-8bc8-bb31c2a478ca@paulmck-laptop>

Add kernel-doc header template for the arch_${atomic}_dec_unless_positive function family.

Signed-off-by: Paul E. McKenney
Cc: Will Deacon
Cc: Peter Zijlstra
Cc: Boqun Feng
Cc: Mark Rutland
---
 include/linux/atomic/atomic-arch-fallback.h  | 18 +++++++++++++++++-
 scripts/atomic/fallbacks/dec_unless_positive |  8 ++++++++
 2 files changed, 25 insertions(+), 1 deletion(-)

diff --git a/include/linux/atomic/atomic-arch-fallback.h b/include/linux/atomic/atomic-arch-fallback.h
index 4d4d94925cb0..e6c7356d5dfc 100644
--- a/include/linux/atomic/atomic-arch-fallback.h
+++ b/include/linux/atomic/atomic-arch-fallback.h
@@ -1625,6 +1625,14 @@ arch_atomic_inc_unless_negative(atomic_t *v)
 #endif
 
 #ifndef arch_atomic_dec_unless_positive
+/**
+ * arch_atomic_dec_unless_positive - Atomic decrement if old value is non-positive
+ * @v: pointer of type atomic_t
+ *
+ * Atomically decrement @v, but only if the original value is less
+ * than or equal to zero.  Return @true if the decrement happened and
+ * @false otherwise.
+ */
 static __always_inline bool
 arch_atomic_dec_unless_positive(atomic_t *v)
 {
@@ -3057,6 +3065,14 @@ arch_atomic64_inc_unless_negative(atomic64_t *v)
 #endif
 
 #ifndef arch_atomic64_dec_unless_positive
+/**
+ * arch_atomic64_dec_unless_positive - Atomic decrement if old value is non-positive
+ * @v: pointer of type atomic64_t
+ *
+ * Atomically decrement @v, but only if the original value is less
+ * than or equal to zero.  Return @true if the decrement happened and
+ * @false otherwise.
+ */ static __always_inline bool arch_atomic64_dec_unless_positive(atomic64_t *v) { @@ -3100,4 +3116,4 @@ arch_atomic64_dec_if_positive(atomic64_t *v) #endif =20 #endif /* _LINUX_ATOMIC_FALLBACK_H */ -// c7041896e7e66a52d8005ba021f3b3b05f99bcb3 +// 225b2fe3eb6bbe34729abed7a856b91abc8d434e diff --git a/scripts/atomic/fallbacks/dec_unless_positive b/scripts/atomic/= fallbacks/dec_unless_positive index c531d5afecc4..c3d01d201c63 100755 --- a/scripts/atomic/fallbacks/dec_unless_positive +++ b/scripts/atomic/fallbacks/dec_unless_positive @@ -1,4 +1,12 @@ cat < X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 1F822C77B7D for ; Wed, 10 May 2023 18:19:00 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S236332AbjEJSSh (ORCPT ); Wed, 10 May 2023 14:18:37 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:54768 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S236057AbjEJSSP (ORCPT ); Wed, 10 May 2023 14:18:15 -0400 Received: from dfw.source.kernel.org (dfw.source.kernel.org [IPv6:2604:1380:4641:c500::1]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id E6EA07DBD; Wed, 10 May 2023 11:18:04 -0700 (PDT) Received: from smtp.kernel.org (relay.kernel.org [52.25.139.140]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by dfw.source.kernel.org (Postfix) with ESMTPS id 383C563F89; Wed, 10 May 2023 18:17:25 +0000 (UTC) Received: by smtp.kernel.org (Postfix) with ESMTPSA id A0B34C433EF; Wed, 10 May 2023 18:17:24 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org; s=k20201202; t=1683742644; bh=5orfwLg08dIm14Vms6sjyzzLD3V/C5KXtMlKMkWcY24=; h=From:To:Cc:Subject:Date:In-Reply-To:References:From; 
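The semantics documented above (decrement only when the old value is at most zero, report whether the decrement happened) can be illustrated with a userspace C11 sketch. This is not the kernel implementation or API; the helper name is illustrative, and the kernel's own fallback uses a try_cmpxchg loop with kernel-specific ordering primitives.

```c
#include <stdatomic.h>
#include <stdbool.h>

/* Userspace analogue of dec_unless_positive: decrement *v only if its
 * current value is <= 0, via a compare-exchange retry loop.  Returns
 * true if the decrement happened, false if the old value was positive. */
static bool dec_unless_positive(atomic_int *v)
{
	int old = atomic_load_explicit(v, memory_order_relaxed);

	do {
		if (old > 0)	/* old value is positive: leave *v alone */
			return false;
		/* on failure, old is reloaded and the check reruns */
	} while (!atomic_compare_exchange_weak(v, &old, old - 1));

	return true;
}
```

The retry loop re-tests the condition after every failed compare-exchange, so a concurrent update that makes the value positive correctly aborts the decrement.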
From: "Paul E. McKenney"
Subject: [PATCH locking/atomic 10/19] locking/atomic: Add kernel-doc header for arch_${atomic}_inc_unless_negative
Date: Wed, 10 May 2023 11:17:08 -0700
Message-Id: <20230510181717.2200934-10-paulmck@kernel.org>

Add kernel-doc header template for arch_${atomic}_inc_unless_negative
function family.

Signed-off-by: Paul E. McKenney
Cc: Will Deacon
Cc: Peter Zijlstra
Cc: Boqun Feng
Cc: Mark Rutland
---
 include/linux/atomic/atomic-arch-fallback.h  | 18 +++++++++++++++++-
 scripts/atomic/fallbacks/inc_unless_negative |  8 ++++++++
 2 files changed, 25 insertions(+), 1 deletion(-)

diff --git a/include/linux/atomic/atomic-arch-fallback.h b/include/linux/atomic/atomic-arch-fallback.h
index e6c7356d5dfc..a90ee496bb81 100644
--- a/include/linux/atomic/atomic-arch-fallback.h
+++ b/include/linux/atomic/atomic-arch-fallback.h
@@ -1609,6 +1609,14 @@ arch_atomic_inc_not_zero(atomic_t *v)
 #endif
 
 #ifndef arch_atomic_inc_unless_negative
+/**
+ * arch_atomic_inc_unless_negative - Atomic increment if old value is non-negative
+ * @v: pointer of type atomic_t
+ *
+ * Atomically increment @v, but only if the original value is greater
+ * than or equal to zero.  Return @true if the increment happened and
+ * @false otherwise.
+ */
 static __always_inline bool
 arch_atomic_inc_unless_negative(atomic_t *v)
 {
@@ -3049,6 +3057,14 @@ arch_atomic64_inc_not_zero(atomic64_t *v)
 #endif
 
 #ifndef arch_atomic64_inc_unless_negative
+/**
+ * arch_atomic64_inc_unless_negative - Atomic increment if old value is non-negative
+ * @v: pointer of type atomic64_t
+ *
+ * Atomically increment @v, but only if the original value is greater
+ * than or equal to zero.  Return @true if the increment happened and
+ * @false otherwise.
+ */
 static __always_inline bool
 arch_atomic64_inc_unless_negative(atomic64_t *v)
 {
@@ -3116,4 +3132,4 @@ arch_atomic64_dec_if_positive(atomic64_t *v)
 #endif
 
 #endif /* _LINUX_ATOMIC_FALLBACK_H */
-// 225b2fe3eb6bbe34729abed7a856b91abc8d434e
+// 3fd0ec685588b84c6428145b7628f79ce23a464c

diff --git a/scripts/atomic/fallbacks/inc_unless_negative b/scripts/atomic/fallbacks/inc_unless_negative
index 95d8ce48233f..98830b0dcdb1 100755
--- a/scripts/atomic/fallbacks/inc_unless_negative
+++ b/scripts/atomic/fallbacks/inc_unless_negative
@@ -1,4 +1,12 @@
cat <

From nobody Tue Feb 10 12:39:33 2026
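inc_unless_negative is the mirror image of dec_unless_positive: increment only when the old value is at least zero. A hedged userspace C11 sketch (illustrative helper name, not the kernel API) of the documented behavior:

```c
#include <stdatomic.h>
#include <stdbool.h>

/* Userspace analogue of inc_unless_negative: increment *v only if its
 * current value is >= 0.  Returns true if the increment happened,
 * false if the old value was negative. */
static bool inc_unless_negative(atomic_int *v)
{
	int old = atomic_load_explicit(v, memory_order_relaxed);

	do {
		if (old < 0)	/* old value is negative: leave *v alone */
			return false;
	} while (!atomic_compare_exchange_weak(v, &old, old + 1));

	return true;
}
```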
From: "Paul E. McKenney"
Subject: [PATCH locking/atomic 11/19] locking/atomic: Add kernel-doc header for arch_${atomic}_set_release
Date: Wed, 10 May 2023 11:17:09 -0700
Message-Id: <20230510181717.2200934-11-paulmck@kernel.org>

Add kernel-doc header template for arch_${atomic}_set_release
function family.

Signed-off-by: Paul E. McKenney
Cc: Will Deacon
Cc: Peter Zijlstra
Cc: Boqun Feng
Cc: Mark Rutland
---
 include/linux/atomic/atomic-arch-fallback.h | 16 +++++++++++++++-
 scripts/atomic/fallbacks/set_release        |  7 +++++++
 2 files changed, 22 insertions(+), 1 deletion(-)

diff --git a/include/linux/atomic/atomic-arch-fallback.h b/include/linux/atomic/atomic-arch-fallback.h
index a90ee496bb81..7ba75143c149 100644
--- a/include/linux/atomic/atomic-arch-fallback.h
+++ b/include/linux/atomic/atomic-arch-fallback.h
@@ -258,6 +258,13 @@ arch_atomic_read_acquire(const atomic_t *v)
 #endif
 
 #ifndef arch_atomic_set_release
+/**
+ * arch_atomic_set_release - Atomic store release
+ * @v: pointer of type atomic_t
+ * @i: value to store
+ *
+ * Atomically store @i into *@v with release ordering.
+ */
 static __always_inline void
 arch_atomic_set_release(atomic_t *v, int i)
 {
@@ -1706,6 +1713,13 @@ arch_atomic64_read_acquire(const atomic64_t *v)
 #endif
 
 #ifndef arch_atomic64_set_release
+/**
+ * arch_atomic64_set_release - Atomic store release
+ * @v: pointer of type atomic64_t
+ * @i: value to store
+ *
+ * Atomically store @i into *@v with release ordering.
+ */
 static __always_inline void
 arch_atomic64_set_release(atomic64_t *v, s64 i)
 {
@@ -3132,4 +3146,4 @@ arch_atomic64_dec_if_positive(atomic64_t *v)
 #endif
 
 #endif /* _LINUX_ATOMIC_FALLBACK_H */
-// 3fd0ec685588b84c6428145b7628f79ce23a464c
+// c46693a9f3b3ceacef003cd42764251148e3457d

diff --git a/scripts/atomic/fallbacks/set_release b/scripts/atomic/fallbacks/set_release
index 05cdb7f42477..46effb6203e5 100755
--- a/scripts/atomic/fallbacks/set_release
+++ b/scripts/atomic/fallbacks/set_release
@@ -1,4 +1,11 @@
cat <

From nobody Tue Feb 10 12:39:33 2026
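"Store with release ordering" means every memory access before the store is ordered before it, so a reader that observes the stored value with acquire ordering also observes the earlier writes. A userspace C11 sketch of that publish pattern (names `payload`/`ready`/`publish` are illustrative, not from the kernel source):

```c
#include <stdatomic.h>

static int payload;		/* plain, non-atomic data */
static atomic_int ready;

/* Release-store sketch: the plain write to payload is ordered before
 * the release store to ready, so an acquire reader that sees ready == 1
 * is guaranteed to see payload == value. */
static void publish(int value)
{
	payload = value;
	atomic_store_explicit(&ready, 1, memory_order_release);
}
```

Release stores are meaningful only when paired with acquire (or stronger) loads on the reader side, which is exactly why the series documents set_release and read_acquire together.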
From: "Paul E. McKenney"
Subject: [PATCH locking/atomic 12/19] locking/atomic: Add kernel-doc header for arch_${atomic}_read_acquire
Date: Wed, 10 May 2023 11:17:10 -0700
Message-Id: <20230510181717.2200934-12-paulmck@kernel.org>

Add kernel-doc header template for arch_${atomic}_read_acquire
function family.

Signed-off-by: Paul E. McKenney
Cc: Will Deacon
Cc: Peter Zijlstra
Cc: Boqun Feng
Cc: Mark Rutland
---
 include/linux/atomic/atomic-arch-fallback.h | 16 +++++++++++++++-
 scripts/atomic/fallbacks/read_acquire       |  7 +++++++
 2 files changed, 22 insertions(+), 1 deletion(-)

diff --git a/include/linux/atomic/atomic-arch-fallback.h b/include/linux/atomic/atomic-arch-fallback.h
index 7ba75143c149..c3552b83bf49 100644
--- a/include/linux/atomic/atomic-arch-fallback.h
+++ b/include/linux/atomic/atomic-arch-fallback.h
@@ -240,6 +240,13 @@
 #endif /* arch_try_cmpxchg64_local */
 
 #ifndef arch_atomic_read_acquire
+/**
+ * arch_atomic_read_acquire - Atomic load acquire
+ * @v: pointer of type atomic_t
+ *
+ * Atomically load from *@v with acquire ordering, returning the value
+ * loaded.
+ */
 static __always_inline int
 arch_atomic_read_acquire(const atomic_t *v)
 {
@@ -1695,6 +1702,13 @@ arch_atomic_dec_if_positive(atomic_t *v)
 #endif
 
 #ifndef arch_atomic64_read_acquire
+/**
+ * arch_atomic64_read_acquire - Atomic load acquire
+ * @v: pointer of type atomic64_t
+ *
+ * Atomically load from *@v with acquire ordering, returning the value
+ * loaded.
+ */
 static __always_inline s64
 arch_atomic64_read_acquire(const atomic64_t *v)
 {
@@ -3146,4 +3160,4 @@ arch_atomic64_dec_if_positive(atomic64_t *v)
 #endif
 
 #endif /* _LINUX_ATOMIC_FALLBACK_H */
-// c46693a9f3b3ceacef003cd42764251148e3457d
+// 96c8a3c4d13b12c9f3e0f715709c8af1653a7e79

diff --git a/scripts/atomic/fallbacks/read_acquire b/scripts/atomic/fallbacks/read_acquire
index a0ea1d26e6b2..779f40c07018 100755
--- a/scripts/atomic/fallbacks/read_acquire
+++ b/scripts/atomic/fallbacks/read_acquire
@@ -1,4 +1,11 @@
cat <

From nobody Tue Feb 10 12:39:33 2026
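An acquire load is the consumer half of the release/acquire pairing: once the load observes the flag set by a release store, subsequent reads cannot be reordered before it. A hedged userspace C11 sketch of the full message-passing idiom (names `data`/`flag`/`produce`/`consume` are illustrative, not kernel code):

```c
#include <stdatomic.h>

static int data;		/* plain payload */
static atomic_int flag;

static void produce(int value)
{
	data = value;		/* ordered before the release store */
	atomic_store_explicit(&flag, 1, memory_order_release);
}

/* Acquire-load sketch: if the load observes flag == 1, the read of
 * data below cannot be hoisted above it, so it sees the payload. */
static int consume(void)
{
	while (atomic_load_explicit(&flag, memory_order_acquire) != 1)
		;	/* spin until the producer publishes */
	return data;
}
```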
From: "Paul E. McKenney"
Subject: [PATCH locking/atomic 13/19] locking/atomic: Script to auto-generate acquire, fence, and release headers
Date: Wed, 10 May 2023 11:17:11 -0700
Message-Id: <20230510181717.2200934-13-paulmck@kernel.org>

The scripts/atomic/fallbacks/{acquire,fence,release} scripts require
almost identical scripting to automatically generate the required
kernel-doc headers.  Therefore, provide a single acqrel.sh script that
does this work.  This new script is to be invoked from each of those
scripts using the "." command, and with the shell variable "acqrel"
set to either "acquire", "full", or "release".

Signed-off-by: Paul E. McKenney
Cc: Will Deacon
Cc: Peter Zijlstra
Cc: Boqun Feng
Cc: Mark Rutland
---
 scripts/atomic/acqrel.sh | 67 ++++++++++++++++++++++++++++++++++++++++
 1 file changed, 67 insertions(+)
 create mode 100644 scripts/atomic/acqrel.sh

diff --git a/scripts/atomic/acqrel.sh b/scripts/atomic/acqrel.sh
new file mode 100644
index 000000000000..5009a54fdac5
--- /dev/null
+++ b/scripts/atomic/acqrel.sh
@@ -0,0 +1,67 @@
+echo ${args} | tr -d ' ' | tr ',' '\012' |
+	awk -v atomic=${atomic} \
+	    -v name_op=${name} \
+	    -v ret=${ret} \
+	    -v oldnew=${docbook_oldnew} \
+	    -v acqrel=${acqrel} \
+	    -v basefuncname=arch_${atomic}_${pfx}${name}${sfx} '
+	BEGIN {
+		print "/**";
+		sfxord = "_" acqrel;
+		if (acqrel == "full")
+			sfxord = "";
+		print " * " basefuncname sfxord " - Atomic " name_op " with " acqrel " ordering";
+		longname["add"] = "add";
+		longname["sub"] = "subtract";
+		longname["inc"] = "increment";
+		longname["dec"] = "decrement";
+		longname["and"] = "AND";
+		longname["andnot"] = "complement then AND";
+		longname["or"] = "OR";
+		longname["xor"] = "XOR";
+		longname["xchg"] = "exchange";
+		longname["add_negative"] = "add";
+		desc["i"] = "value to " longname[name_op];
+		desc["v"] = "pointer of type " atomic "_t";
+		desc["old"] = "desired old value to match";
+		desc["new"] = "new value to put in";
+		opmod = "with";
+		if (name_op == "add")
+			opmod = "to";
+		else if (name_op == "sub")
+			opmod = "from";
+	}
+
+	{
+		print " * @" $1 ": " desc[$1];
+		have[$1] = 1;
+	}
+
+	END {
+		print " *";
+		if (name_op ~ /cmpxchg/) {
+			print " * Atomically compares @new to *@v, and if equal,";
+			print " * stores @new to *@v, providing " acqrel " ordering.";
+		} else if (have["i"]) {
+			print " * Atomically " longname[name_op] " @i " opmod " @v using " acqrel " ordering.";
+		} else {
+			print " * Atomically " longname[name_op] " @v using " acqrel " ordering.";
+		}
+		if (name_op ~ /cmpxchg/ && ret == "bool") {
+			print " * Returns @true if the cmpxchg operation succeeded,";
+			print " * and false otherwise.  Either way, stores the old";
+			print " * value of *@v to *@old.";
+		} else if (name_op == "cmpxchg") {
+			print " * Returns the old value *@v regardless of the result of";
+			print " * the comparison.  Therefore, if the return value is not";
+			print " * equal to @old, the cmpxchg operation failed.";
+		} else if (name_op == "xchg") {
+			print " * Return old value.";
+		} else if (name_op == "add_negative") {
+			print " * Return @true if the result is negative, or @false when"
+			print " * the result is greater than or equal to zero.";
+		} else {
+			print " * Return " oldnew " value.";
+		}
+		print " */";
+	}'
-- 
2.40.1

From nobody Tue Feb 10 12:39:33 2026
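The acqrel.sh template above emits either "Return old value." or "Return new value." depending on the operation: fetch_* variants return the pre-operation value, *_return variants the post-operation value. A userspace C11 sketch of that convention (helper names are illustrative, not the kernel API):

```c
#include <stdatomic.h>

/* fetch_*-style: returns the OLD value, acquire-ordered. */
static int fetch_add_acquire_sketch(atomic_int *v, int i)
{
	return atomic_fetch_add_explicit(v, i, memory_order_acquire);
}

/* *_return-style: returns the NEW value, i.e. old value plus addend. */
static int add_return_acquire_sketch(atomic_int *v, int i)
{
	return atomic_fetch_add_explicit(v, i, memory_order_acquire) + i;
}
```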
From: "Paul E. McKenney"
Subject: [PATCH locking/atomic 14/19] locking/atomic: Add kernel-doc header for arch_${atomic}_${pfx}${name}${sfx}_acquire
Date: Wed, 10 May 2023 11:17:12 -0700
Message-Id: <20230510181717.2200934-14-paulmck@kernel.org>

Add kernel-doc header template for arch_${atomic}_${pfx}${name}${sfx}_acquire
function family with the help of my good friend awk, as encapsulated in
acqrel.sh.

Signed-off-by: Paul E. McKenney
Cc: Will Deacon
Cc: Peter Zijlstra
Cc: Boqun Feng
Cc: Mark Rutland
---
 include/linux/atomic/atomic-arch-fallback.h | 268 +++++++++++++++++++-
 scripts/atomic/fallbacks/acquire            |   4 +-
 2 files changed, 270 insertions(+), 2 deletions(-)

diff --git a/include/linux/atomic/atomic-arch-fallback.h b/include/linux/atomic/atomic-arch-fallback.h
index c3552b83bf49..fc80113ca60a 100644
--- a/include/linux/atomic/atomic-arch-fallback.h
+++ b/include/linux/atomic/atomic-arch-fallback.h
@@ -292,6 +292,14 @@ arch_atomic_set_release(atomic_t *v, int i)
 #else /* arch_atomic_add_return_relaxed */
 
 #ifndef arch_atomic_add_return_acquire
+/**
+ * arch_atomic_add_return_acquire - Atomic add with acquire ordering
+ * @i: value to add
+ * @v: pointer of type atomic_t
+ *
+ * Atomically add @i to @v using acquire ordering.
+ * Return new value.
+ */
 static __always_inline int
 arch_atomic_add_return_acquire(int i, atomic_t *v)
 {
@@ -334,6 +342,14 @@ arch_atomic_add_return(int i, atomic_t *v)
 #else /* arch_atomic_fetch_add_relaxed */
 
 #ifndef arch_atomic_fetch_add_acquire
+/**
+ * arch_atomic_fetch_add_acquire - Atomic add with acquire ordering
+ * @i: value to add
+ * @v: pointer of type atomic_t
+ *
+ * Atomically add @i to @v using acquire ordering.
+ * Return old value.
+ */
 static __always_inline int
 arch_atomic_fetch_add_acquire(int i, atomic_t *v)
 {
@@ -376,6 +392,14 @@ arch_atomic_fetch_add(int i, atomic_t *v)
 #else /* arch_atomic_sub_return_relaxed */
 
 #ifndef arch_atomic_sub_return_acquire
+/**
+ * arch_atomic_sub_return_acquire - Atomic sub with acquire ordering
+ * @i: value to subtract
+ * @v: pointer of type atomic_t
+ *
+ * Atomically subtract @i from @v using acquire ordering.
+ * Return new value.
+ */
 static __always_inline int
 arch_atomic_sub_return_acquire(int i, atomic_t *v)
 {
@@ -418,6 +442,14 @@ arch_atomic_sub_return(int i, atomic_t *v)
 #else /* arch_atomic_fetch_sub_relaxed */
 
 #ifndef arch_atomic_fetch_sub_acquire
+/**
+ * arch_atomic_fetch_sub_acquire - Atomic sub with acquire ordering
+ * @i: value to subtract
+ * @v: pointer of type atomic_t
+ *
+ * Atomically subtract @i from @v using acquire ordering.
+ * Return old value.
+ */
 static __always_inline int
 arch_atomic_fetch_sub_acquire(int i, atomic_t *v)
 {
@@ -543,6 +575,13 @@ arch_atomic_inc_return_relaxed(atomic_t *v)
 #else /* arch_atomic_inc_return_relaxed */
 
 #ifndef arch_atomic_inc_return_acquire
+/**
+ * arch_atomic_inc_return_acquire - Atomic inc with acquire ordering
+ * @v: pointer of type atomic_t
+ *
+ * Atomically increment @v using acquire ordering.
+ * Return new value.
+ */
 static __always_inline int
 arch_atomic_inc_return_acquire(atomic_t *v)
 {
@@ -652,6 +691,13 @@ arch_atomic_fetch_inc_relaxed(atomic_t *v)
 #else /* arch_atomic_fetch_inc_relaxed */
 
 #ifndef arch_atomic_fetch_inc_acquire
+/**
+ * arch_atomic_fetch_inc_acquire - Atomic inc with acquire ordering
+ * @v: pointer of type atomic_t
+ *
+ * Atomically increment @v using acquire ordering.
+ * Return old value.
+ */
 static __always_inline int
 arch_atomic_fetch_inc_acquire(atomic_t *v)
 {
@@ -777,6 +823,13 @@ arch_atomic_dec_return_relaxed(atomic_t *v)
 #else /* arch_atomic_dec_return_relaxed */
 
 #ifndef arch_atomic_dec_return_acquire
+/**
+ * arch_atomic_dec_return_acquire - Atomic dec with acquire ordering
+ * @v: pointer of type atomic_t
+ *
+ * Atomically decrement @v using acquire ordering.
+ * Return new value.
+ */
 static __always_inline int
 arch_atomic_dec_return_acquire(atomic_t *v)
 {
@@ -886,6 +939,13 @@ arch_atomic_fetch_dec_relaxed(atomic_t *v)
 #else /* arch_atomic_fetch_dec_relaxed */
 
 #ifndef arch_atomic_fetch_dec_acquire
+/**
+ * arch_atomic_fetch_dec_acquire - Atomic dec with acquire ordering
+ * @v: pointer of type atomic_t
+ *
+ * Atomically decrement @v using acquire ordering.
+ * Return old value.
+ */
 static __always_inline int
 arch_atomic_fetch_dec_acquire(atomic_t *v)
 {
@@ -928,6 +988,14 @@ arch_atomic_fetch_dec(atomic_t *v)
 #else /* arch_atomic_fetch_and_relaxed */
 
 #ifndef arch_atomic_fetch_and_acquire
+/**
+ * arch_atomic_fetch_and_acquire - Atomic and with acquire ordering
+ * @i: value to AND
+ * @v: pointer of type atomic_t
+ *
+ * Atomically AND @i with @v using acquire ordering.
+ * Return old value.
+ */
 static __always_inline int
 arch_atomic_fetch_and_acquire(int i, atomic_t *v)
 {
@@ -1058,6 +1126,14 @@ arch_atomic_fetch_andnot_relaxed(int i, atomic_t *v)
 #else /* arch_atomic_fetch_andnot_relaxed */
 
 #ifndef arch_atomic_fetch_andnot_acquire
+/**
+ * arch_atomic_fetch_andnot_acquire - Atomic andnot with acquire ordering
+ * @i: value to complement then AND
+ * @v: pointer of type atomic_t
+ *
+ * Atomically complement then AND @i with @v using acquire ordering.
+ * Return old value.
+ */
 static __always_inline int
 arch_atomic_fetch_andnot_acquire(int i, atomic_t *v)
 {
@@ -1100,6 +1176,14 @@ arch_atomic_fetch_andnot(int i, atomic_t *v)
 #else /* arch_atomic_fetch_or_relaxed */
 
 #ifndef arch_atomic_fetch_or_acquire
+/**
+ * arch_atomic_fetch_or_acquire - Atomic or with acquire ordering
+ * @i: value to OR
+ * @v: pointer of type atomic_t
+ *
+ * Atomically OR @i with @v using acquire ordering.
+ * Return old value.
+ */
 static __always_inline int
 arch_atomic_fetch_or_acquire(int i, atomic_t *v)
 {
@@ -1142,6 +1226,14 @@ arch_atomic_fetch_or(int i, atomic_t *v)
 #else /* arch_atomic_fetch_xor_relaxed */
 
 #ifndef arch_atomic_fetch_xor_acquire
+/**
+ * arch_atomic_fetch_xor_acquire - Atomic xor with acquire ordering
+ * @i: value to XOR
+ * @v: pointer of type atomic_t
+ *
+ * Atomically XOR @i with @v using acquire ordering.
+ * Return old value.
+ */
 static __always_inline int
 arch_atomic_fetch_xor_acquire(int i, atomic_t *v)
 {
@@ -1184,6 +1276,14 @@ arch_atomic_fetch_xor(int i, atomic_t *v)
 #else /* arch_atomic_xchg_relaxed */
 
 #ifndef arch_atomic_xchg_acquire
+/**
+ * arch_atomic_xchg_acquire - Atomic xchg with acquire ordering
+ * @v: pointer of type atomic_t
+ * @i: value to exchange
+ *
+ * Atomically exchange @i with @v using acquire ordering.
+ * Return old value.
+ */
 static __always_inline int
 arch_atomic_xchg_acquire(atomic_t *v, int i)
 {
@@ -1226,6 +1326,18 @@ arch_atomic_xchg(atomic_t *v, int i)
 #else /* arch_atomic_cmpxchg_relaxed */
 
 #ifndef arch_atomic_cmpxchg_acquire
+/**
+ * arch_atomic_cmpxchg_acquire - Atomic cmpxchg with acquire ordering
+ * @v: pointer of type atomic_t
+ * @old: desired old value to match
+ * @new: new value to put in
+ *
+ * Atomically compares @new to *@v, and if equal,
+ * stores @new to *@v, providing acquire ordering.
+ * Returns the old value *@v regardless of the result of
+ * the comparison.  Therefore, if the return value is not
+ * equal to @old, the cmpxchg operation failed.
+ */
 static __always_inline int
 arch_atomic_cmpxchg_acquire(atomic_t *v, int old, int new)
 {
@@ -1363,6 +1475,18 @@ arch_atomic_try_cmpxchg_relaxed(atomic_t *v, int *old, int new)
 #else /* arch_atomic_try_cmpxchg_relaxed */
 
 #ifndef arch_atomic_try_cmpxchg_acquire
+/**
+ * arch_atomic_try_cmpxchg_acquire - Atomic try_cmpxchg with acquire ordering
+ * @v: pointer of type atomic_t
+ * @old: desired old value to match
+ * @new: new value to put in
+ *
+ * Atomically compares @new to *@v, and if equal,
+ * stores @new to *@v, providing acquire ordering.
+ * Returns @true if the cmpxchg operation succeeded,
+ * and false otherwise.  Either way, stores the old
+ * value of *@v to *@old.
+ */
 static __always_inline bool
 arch_atomic_try_cmpxchg_acquire(atomic_t *v, int *old, int new)
 {
@@ -1528,6 +1652,15 @@ arch_atomic_add_negative_relaxed(int i, atomic_t *v)
 #else /* arch_atomic_add_negative_relaxed */
 
 #ifndef arch_atomic_add_negative_acquire
+/**
+ * arch_atomic_add_negative_acquire - Atomic add_negative with acquire ordering
+ * @i: value to add
+ * @v: pointer of type atomic_t
+ *
+ * Atomically add @i with @v using acquire ordering.
+ * Return @true if the result is negative, or @false when
+ * the result is greater than or equal to zero.
+ */
 static __always_inline bool
 arch_atomic_add_negative_acquire(int i, atomic_t *v)
 {
@@ -1754,6 +1887,14 @@ arch_atomic64_set_release(atomic64_t *v, s64 i)
 #else /* arch_atomic64_add_return_relaxed */
 
 #ifndef arch_atomic64_add_return_acquire
+/**
+ * arch_atomic64_add_return_acquire - Atomic add with acquire ordering
+ * @i: value to add
+ * @v: pointer of type atomic64_t
+ *
+ * Atomically add @i to @v using acquire ordering.
+ * Return new value.
+ */
 static __always_inline s64
 arch_atomic64_add_return_acquire(s64 i, atomic64_t *v)
 {
@@ -1796,6 +1937,14 @@ arch_atomic64_add_return(s64 i, atomic64_t *v)
 #else /* arch_atomic64_fetch_add_relaxed */
 
 #ifndef arch_atomic64_fetch_add_acquire
+/**
+ * arch_atomic64_fetch_add_acquire - Atomic add with acquire ordering
+ * @i: value to add
+ * @v: pointer of type atomic64_t
+ *
+ * Atomically add @i to @v using acquire ordering.
+ * Return old value.
+ */
 static __always_inline s64
 arch_atomic64_fetch_add_acquire(s64 i, atomic64_t *v)
 {
@@ -1838,6 +1987,14 @@ arch_atomic64_fetch_add(s64 i, atomic64_t *v)
 #else /* arch_atomic64_sub_return_relaxed */
 
 #ifndef arch_atomic64_sub_return_acquire
+/**
+ * arch_atomic64_sub_return_acquire - Atomic sub with acquire ordering
+ * @i: value to subtract
+ * @v: pointer of type atomic64_t
+ *
+ * Atomically subtract @i from @v using acquire ordering.
+ * Return new value.
+ */
 static __always_inline s64
 arch_atomic64_sub_return_acquire(s64 i, atomic64_t *v)
 {
@@ -1880,6 +2037,14 @@ arch_atomic64_sub_return(s64 i, atomic64_t *v)
 #else /* arch_atomic64_fetch_sub_relaxed */
 
 #ifndef arch_atomic64_fetch_sub_acquire
+/**
+ * arch_atomic64_fetch_sub_acquire - Atomic sub with acquire ordering
+ * @i: value to subtract
+ * @v: pointer of type atomic64_t
+ *
+ * Atomically subtract @i from @v using acquire ordering.
+ * Return old value.
+ */
 static __always_inline s64
 arch_atomic64_fetch_sub_acquire(s64 i, atomic64_t *v)
 {
@@ -2005,6 +2170,13 @@ arch_atomic64_inc_return_relaxed(atomic64_t *v)
 #else /* arch_atomic64_inc_return_relaxed */
 
 #ifndef arch_atomic64_inc_return_acquire
+/**
+ * arch_atomic64_inc_return_acquire - Atomic inc with acquire ordering
+ * @v: pointer of type atomic64_t
+ *
+ * Atomically increment @v using acquire ordering.
+ * Return new value.
+ */
 static __always_inline s64
 arch_atomic64_inc_return_acquire(atomic64_t *v)
 {
@@ -2114,6 +2286,13 @@ arch_atomic64_fetch_inc_relaxed(atomic64_t *v)
 #else /* arch_atomic64_fetch_inc_relaxed */
 
 #ifndef arch_atomic64_fetch_inc_acquire
+/**
+ * arch_atomic64_fetch_inc_acquire - Atomic inc with acquire ordering
+ * @v: pointer of type atomic64_t
+ *
+ * Atomically increment @v using acquire ordering.
+ * Return old value.
+ */
 static __always_inline s64
 arch_atomic64_fetch_inc_acquire(atomic64_t *v)
 {
@@ -2239,6 +2418,13 @@ arch_atomic64_dec_return_relaxed(atomic64_t *v)
 #else /* arch_atomic64_dec_return_relaxed */
 
 #ifndef arch_atomic64_dec_return_acquire
+/**
+ * arch_atomic64_dec_return_acquire - Atomic dec with acquire ordering
+ * @v: pointer of type atomic64_t
+ *
+ * Atomically decrement @v using acquire ordering.
+ * Return new value.
+ */
 static __always_inline s64
 arch_atomic64_dec_return_acquire(atomic64_t *v)
 {
@@ -2348,6 +2534,13 @@ arch_atomic64_fetch_dec_relaxed(atomic64_t *v)
 #else /* arch_atomic64_fetch_dec_relaxed */
 
 #ifndef arch_atomic64_fetch_dec_acquire
+/**
+ * arch_atomic64_fetch_dec_acquire - Atomic dec with acquire ordering
+ * @v: pointer of type atomic64_t
+ *
+ * Atomically decrement @v using acquire ordering.
+ * Return old value.
+ */
 static __always_inline s64
 arch_atomic64_fetch_dec_acquire(atomic64_t *v)
 {
@@ -2390,6 +2583,14 @@ arch_atomic64_fetch_dec(atomic64_t *v)
 #else /* arch_atomic64_fetch_and_relaxed */
 
 #ifndef arch_atomic64_fetch_and_acquire
+/**
+ * arch_atomic64_fetch_and_acquire - Atomic and with acquire ordering
+ * @i: value to AND
+ * @v: pointer of type atomic64_t
+ *
+ * Atomically AND @i with @v using acquire ordering.
+ * Return old value.
+ */ static __always_inline s64 arch_atomic64_fetch_and_acquire(s64 i, atomic64_t *v) { @@ -2520,6 +2721,14 @@ arch_atomic64_fetch_andnot_relaxed(s64 i, atomic64_t= *v) #else /* arch_atomic64_fetch_andnot_relaxed */ =20 #ifndef arch_atomic64_fetch_andnot_acquire +/** + * arch_atomic64_fetch_andnot_acquire - Atomic andnot with acquire ordering + * @i: value to complement then AND + * @v: pointer of type atomic64_t + * + * Atomically complement then AND @i with @v using acquire ordering. + * Return old value. + */ static __always_inline s64 arch_atomic64_fetch_andnot_acquire(s64 i, atomic64_t *v) { @@ -2562,6 +2771,14 @@ arch_atomic64_fetch_andnot(s64 i, atomic64_t *v) #else /* arch_atomic64_fetch_or_relaxed */ =20 #ifndef arch_atomic64_fetch_or_acquire +/** + * arch_atomic64_fetch_or_acquire - Atomic or with acquire ordering + * @i: value to OR + * @v: pointer of type atomic64_t + * + * Atomically OR @i with @v using acquire ordering. + * Return old value. + */ static __always_inline s64 arch_atomic64_fetch_or_acquire(s64 i, atomic64_t *v) { @@ -2604,6 +2821,14 @@ arch_atomic64_fetch_or(s64 i, atomic64_t *v) #else /* arch_atomic64_fetch_xor_relaxed */ =20 #ifndef arch_atomic64_fetch_xor_acquire +/** + * arch_atomic64_fetch_xor_acquire - Atomic xor with acquire ordering + * @i: value to XOR + * @v: pointer of type atomic64_t + * + * Atomically XOR @i with @v using acquire ordering. + * Return old value. + */ static __always_inline s64 arch_atomic64_fetch_xor_acquire(s64 i, atomic64_t *v) { @@ -2646,6 +2871,14 @@ arch_atomic64_fetch_xor(s64 i, atomic64_t *v) #else /* arch_atomic64_xchg_relaxed */ =20 #ifndef arch_atomic64_xchg_acquire +/** + * arch_atomic64_xchg_acquire - Atomic xchg with acquire ordering + * @v: pointer of type atomic64_t + * @i: value to exchange + * + * Atomically exchange @i with @v using acquire ordering. + * Return old value. 
+ */ static __always_inline s64 arch_atomic64_xchg_acquire(atomic64_t *v, s64 i) { @@ -2688,6 +2921,18 @@ arch_atomic64_xchg(atomic64_t *v, s64 i) #else /* arch_atomic64_cmpxchg_relaxed */ #ifndef arch_atomic64_cmpxchg_acquire +/** + * arch_atomic64_cmpxchg_acquire - Atomic cmpxchg with acquire ordering + * @v: pointer of type atomic64_t + * @old: desired old value to match + * @new: new value to put in + * + * Atomically compares @old to *@v, and if equal, + * stores @new to *@v, providing acquire ordering. + * Returns the old value of *@v regardless of the result of + * the comparison. Therefore, if the return value is not + * equal to @old, the cmpxchg operation failed. + */ static __always_inline s64 arch_atomic64_cmpxchg_acquire(atomic64_t *v, s64 old, s64 new) { @@ -2825,6 +3070,18 @@ arch_atomic64_try_cmpxchg_relaxed(atomic64_t *v, s64 *old, s64 new) #else /* arch_atomic64_try_cmpxchg_relaxed */ #ifndef arch_atomic64_try_cmpxchg_acquire +/** + * arch_atomic64_try_cmpxchg_acquire - Atomic try_cmpxchg with acquire ordering + * @v: pointer of type atomic64_t + * @old: desired old value to match + * @new: new value to put in + * + * Atomically compares @old to *@v, and if equal, + * stores @new to *@v, providing acquire ordering. + * Returns @true if the cmpxchg operation succeeded, + * and @false otherwise. Either way, stores the old + * value of *@v to *@old. + */ static __always_inline bool arch_atomic64_try_cmpxchg_acquire(atomic64_t *v, s64 *old, s64 new) { @@ -2990,6 +3247,15 @@ arch_atomic64_add_negative_relaxed(s64 i, atomic64_t *v) #else /* arch_atomic64_add_negative_relaxed */ #ifndef arch_atomic64_add_negative_acquire +/** + * arch_atomic64_add_negative_acquire - Atomic add_negative with acquire ordering + * @i: value to add + * @v: pointer of type atomic64_t + * + * Atomically add @i to @v using acquire ordering. + * Return @true if the result is negative, or @false when + * the result is greater than or equal to zero.
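The add_negative kernel-doc above describes an add that also reports the sign of the result. A minimal user-space sketch of those semantics, using C11 atomics with an invented helper name (not the kernel implementation):

```c
#include <stdatomic.h>
#include <stdbool.h>

/*
 * Illustrative sketch of add_negative semantics: atomically add i to
 * *v and report whether the resulting (new) value is negative.
 */
static bool add_negative_sketch(atomic_long *v, long i)
{
	/* fetch_add returns the old value; old + i is the new value */
	return atomic_fetch_add_explicit(v, i, memory_order_acquire) + i < 0;
}
```

Note that the predicate applies to the new value after the addition, not to the old value that fetch_add returns.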
+ */ static __always_inline bool arch_atomic64_add_negative_acquire(s64 i, atomic64_t *v) { @@ -3160,4 +3426,4 @@ arch_atomic64_dec_if_positive(atomic64_t *v) #endif #endif /* _LINUX_ATOMIC_FALLBACK_H */ -// 96c8a3c4d13b12c9f3e0f715709c8af1653a7e79 +// a7944792460cf5adb72d49025850800d2cd178be diff --git a/scripts/atomic/fallbacks/acquire b/scripts/atomic/fallbacks/acquire index ef764085c79a..08fc6c30a9ef 100755 --- a/scripts/atomic/fallbacks/acquire +++ b/scripts/atomic/fallbacks/acquire @@ -1,4 +1,6 @@ -cat <
From: "Paul E. McKenney" To: linux-kernel@vger.kernel.org Cc: x86@kernel.org, akiyks@gmail.com, linux-doc@vger.kernel.org, kernel-team@meta.com, "Paul E. McKenney", Will Deacon, Peter Zijlstra, Boqun Feng, Mark Rutland
Subject: [PATCH locking/atomic 15/19] locking/atomic: Add kernel-doc header for arch_${atomic}_${pfx}${name}${sfx}_release
Date: Wed, 10 May 2023 11:17:13 -0700 Message-Id: <20230510181717.2200934-15-paulmck@kernel.org>

Add kernel-doc header template for arch_${atomic}_${pfx}${name}${sfx}_release function family with the help of my good friend awk, as encapsulated in acqrel.sh.

Signed-off-by: Paul E.
McKenney Cc: Will Deacon Cc: Peter Zijlstra Cc: Boqun Feng Cc: Mark Rutland --- include/linux/atomic/atomic-arch-fallback.h | 268 +++++++++++++++++++- scripts/atomic/fallbacks/release | 2 + 2 files changed, 269 insertions(+), 1 deletion(-) diff --git a/include/linux/atomic/atomic-arch-fallback.h b/include/linux/at= omic/atomic-arch-fallback.h index fc80113ca60a..ec6821b4bbc1 100644 --- a/include/linux/atomic/atomic-arch-fallback.h +++ b/include/linux/atomic/atomic-arch-fallback.h @@ -311,6 +311,14 @@ arch_atomic_add_return_acquire(int i, atomic_t *v) #endif =20 #ifndef arch_atomic_add_return_release +/** + * arch_atomic_add_return_release - Atomic add with release ordering + * @i: value to add + * @v: pointer of type atomic_t + * + * Atomically add @i to @v using release ordering. + * Return new value. + */ static __always_inline int arch_atomic_add_return_release(int i, atomic_t *v) { @@ -361,6 +369,14 @@ arch_atomic_fetch_add_acquire(int i, atomic_t *v) #endif =20 #ifndef arch_atomic_fetch_add_release +/** + * arch_atomic_fetch_add_release - Atomic add with release ordering + * @i: value to add + * @v: pointer of type atomic_t + * + * Atomically add @i to @v using release ordering. + * Return old value. + */ static __always_inline int arch_atomic_fetch_add_release(int i, atomic_t *v) { @@ -411,6 +427,14 @@ arch_atomic_sub_return_acquire(int i, atomic_t *v) #endif =20 #ifndef arch_atomic_sub_return_release +/** + * arch_atomic_sub_return_release - Atomic sub with release ordering + * @i: value to subtract + * @v: pointer of type atomic_t + * + * Atomically subtract @i from @v using release ordering. + * Return new value. 
+ */ static __always_inline int arch_atomic_sub_return_release(int i, atomic_t *v) { @@ -461,6 +485,14 @@ arch_atomic_fetch_sub_acquire(int i, atomic_t *v) #endif =20 #ifndef arch_atomic_fetch_sub_release +/** + * arch_atomic_fetch_sub_release - Atomic sub with release ordering + * @i: value to subtract + * @v: pointer of type atomic_t + * + * Atomically subtract @i from @v using release ordering. + * Return old value. + */ static __always_inline int arch_atomic_fetch_sub_release(int i, atomic_t *v) { @@ -593,6 +625,13 @@ arch_atomic_inc_return_acquire(atomic_t *v) #endif =20 #ifndef arch_atomic_inc_return_release +/** + * arch_atomic_inc_return_release - Atomic inc with release ordering + * @v: pointer of type atomic_t + * + * Atomically increment @v using release ordering. + * Return new value. + */ static __always_inline int arch_atomic_inc_return_release(atomic_t *v) { @@ -709,6 +748,13 @@ arch_atomic_fetch_inc_acquire(atomic_t *v) #endif =20 #ifndef arch_atomic_fetch_inc_release +/** + * arch_atomic_fetch_inc_release - Atomic inc with release ordering + * @v: pointer of type atomic_t + * + * Atomically increment @v using release ordering. + * Return old value. + */ static __always_inline int arch_atomic_fetch_inc_release(atomic_t *v) { @@ -841,6 +887,13 @@ arch_atomic_dec_return_acquire(atomic_t *v) #endif =20 #ifndef arch_atomic_dec_return_release +/** + * arch_atomic_dec_return_release - Atomic dec with release ordering + * @v: pointer of type atomic_t + * + * Atomically decrement @v using release ordering. + * Return new value. + */ static __always_inline int arch_atomic_dec_return_release(atomic_t *v) { @@ -957,6 +1010,13 @@ arch_atomic_fetch_dec_acquire(atomic_t *v) #endif =20 #ifndef arch_atomic_fetch_dec_release +/** + * arch_atomic_fetch_dec_release - Atomic dec with release ordering + * @v: pointer of type atomic_t + * + * Atomically decrement @v using release ordering. + * Return old value. 
+ */ static __always_inline int arch_atomic_fetch_dec_release(atomic_t *v) { @@ -1007,6 +1067,14 @@ arch_atomic_fetch_and_acquire(int i, atomic_t *v) #endif =20 #ifndef arch_atomic_fetch_and_release +/** + * arch_atomic_fetch_and_release - Atomic and with release ordering + * @i: value to AND + * @v: pointer of type atomic_t + * + * Atomically AND @i with @v using release ordering. + * Return old value. + */ static __always_inline int arch_atomic_fetch_and_release(int i, atomic_t *v) { @@ -1145,6 +1213,14 @@ arch_atomic_fetch_andnot_acquire(int i, atomic_t *v) #endif =20 #ifndef arch_atomic_fetch_andnot_release +/** + * arch_atomic_fetch_andnot_release - Atomic andnot with release ordering + * @i: value to complement then AND + * @v: pointer of type atomic_t + * + * Atomically complement then AND @i with @v using release ordering. + * Return old value. + */ static __always_inline int arch_atomic_fetch_andnot_release(int i, atomic_t *v) { @@ -1195,6 +1271,14 @@ arch_atomic_fetch_or_acquire(int i, atomic_t *v) #endif =20 #ifndef arch_atomic_fetch_or_release +/** + * arch_atomic_fetch_or_release - Atomic or with release ordering + * @i: value to OR + * @v: pointer of type atomic_t + * + * Atomically OR @i with @v using release ordering. + * Return old value. + */ static __always_inline int arch_atomic_fetch_or_release(int i, atomic_t *v) { @@ -1245,6 +1329,14 @@ arch_atomic_fetch_xor_acquire(int i, atomic_t *v) #endif =20 #ifndef arch_atomic_fetch_xor_release +/** + * arch_atomic_fetch_xor_release - Atomic xor with release ordering + * @i: value to XOR + * @v: pointer of type atomic_t + * + * Atomically XOR @i with @v using release ordering. + * Return old value. 
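The _release variants documented in this patch pair with the _acquire variants from the previous one: a release operation on a variable makes everything written before it visible to whoever later observes that variable with acquire ordering. An illustrative C11 sketch of that classic message-passing pattern (user-space stand-in, not kernel code):

```c
#include <stdatomic.h>

static int payload;		/* data published by the writer */
static atomic_int ready;	/* flag carrying the ordering */

static void publish(void)
{
	payload = 42;		/* plain store, ordered before... */
	atomic_store_explicit(&ready, 1,
			      memory_order_release);	/* ...the release */
}

static int consume(void)
{
	/* acquire load pairs with the release store above */
	while (!atomic_load_explicit(&ready, memory_order_acquire))
		;		/* spin until the flag is observed */
	return payload;		/* guaranteed to see 42 */
}
```

This pairing is why the generated kernel-doc distinguishes acquire, release, and full ordering rather than documenting a single variant.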
+ */ static __always_inline int arch_atomic_fetch_xor_release(int i, atomic_t *v) { @@ -1295,6 +1387,14 @@ arch_atomic_xchg_acquire(atomic_t *v, int i) #endif #ifndef arch_atomic_xchg_release +/** + * arch_atomic_xchg_release - Atomic xchg with release ordering + * @v: pointer of type atomic_t + * @i: value to exchange + * + * Atomically exchange @i with @v using release ordering. + * Return old value. + */ static __always_inline int arch_atomic_xchg_release(atomic_t *v, int i) { @@ -1349,6 +1449,18 @@ arch_atomic_cmpxchg_acquire(atomic_t *v, int old, int new) #endif #ifndef arch_atomic_cmpxchg_release +/** + * arch_atomic_cmpxchg_release - Atomic cmpxchg with release ordering + * @v: pointer of type atomic_t + * @old: desired old value to match + * @new: new value to put in + * + * Atomically compares @old to *@v, and if equal, + * stores @new to *@v, providing release ordering. + * Returns the old value of *@v regardless of the result of + * the comparison. Therefore, if the return value is not + * equal to @old, the cmpxchg operation failed. + */ static __always_inline int arch_atomic_cmpxchg_release(atomic_t *v, int old, int new) { @@ -1498,6 +1610,18 @@ arch_atomic_try_cmpxchg_acquire(atomic_t *v, int *old, int new) #endif #ifndef arch_atomic_try_cmpxchg_release +/** + * arch_atomic_try_cmpxchg_release - Atomic try_cmpxchg with release ordering + * @v: pointer of type atomic_t + * @old: desired old value to match + * @new: new value to put in + * + * Atomically compares @old to *@v, and if equal, + * stores @new to *@v, providing release ordering. + * Returns @true if the cmpxchg operation succeeded, + * and @false otherwise. Either way, stores the old + * value of *@v to *@old.
+ */ static __always_inline bool arch_atomic_try_cmpxchg_release(atomic_t *v, int *old, int new) { @@ -1672,6 +1796,15 @@ arch_atomic_add_negative_acquire(int i, atomic_t *v) #endif =20 #ifndef arch_atomic_add_negative_release +/** + * arch_atomic_add_negative_release - Atomic add_negative with release ord= ering + * @i: value to add + * @v: pointer of type atomic_t + * + * Atomically add @i with @v using release ordering. + * Return @true if the result is negative, or @false when + * the result is greater than or equal to zero. + */ static __always_inline bool arch_atomic_add_negative_release(int i, atomic_t *v) { @@ -1906,6 +2039,14 @@ arch_atomic64_add_return_acquire(s64 i, atomic64_t *= v) #endif =20 #ifndef arch_atomic64_add_return_release +/** + * arch_atomic64_add_return_release - Atomic add with release ordering + * @i: value to add + * @v: pointer of type atomic64_t + * + * Atomically add @i to @v using release ordering. + * Return new value. + */ static __always_inline s64 arch_atomic64_add_return_release(s64 i, atomic64_t *v) { @@ -1956,6 +2097,14 @@ arch_atomic64_fetch_add_acquire(s64 i, atomic64_t *v) #endif =20 #ifndef arch_atomic64_fetch_add_release +/** + * arch_atomic64_fetch_add_release - Atomic add with release ordering + * @i: value to add + * @v: pointer of type atomic64_t + * + * Atomically add @i to @v using release ordering. + * Return old value. + */ static __always_inline s64 arch_atomic64_fetch_add_release(s64 i, atomic64_t *v) { @@ -2006,6 +2155,14 @@ arch_atomic64_sub_return_acquire(s64 i, atomic64_t *= v) #endif =20 #ifndef arch_atomic64_sub_return_release +/** + * arch_atomic64_sub_return_release - Atomic sub with release ordering + * @i: value to subtract + * @v: pointer of type atomic64_t + * + * Atomically subtract @i from @v using release ordering. + * Return new value. 
+ */ static __always_inline s64 arch_atomic64_sub_return_release(s64 i, atomic64_t *v) { @@ -2056,6 +2213,14 @@ arch_atomic64_fetch_sub_acquire(s64 i, atomic64_t *v) #endif =20 #ifndef arch_atomic64_fetch_sub_release +/** + * arch_atomic64_fetch_sub_release - Atomic sub with release ordering + * @i: value to subtract + * @v: pointer of type atomic64_t + * + * Atomically subtract @i from @v using release ordering. + * Return old value. + */ static __always_inline s64 arch_atomic64_fetch_sub_release(s64 i, atomic64_t *v) { @@ -2188,6 +2353,13 @@ arch_atomic64_inc_return_acquire(atomic64_t *v) #endif =20 #ifndef arch_atomic64_inc_return_release +/** + * arch_atomic64_inc_return_release - Atomic inc with release ordering + * @v: pointer of type atomic64_t + * + * Atomically increment @v using release ordering. + * Return new value. + */ static __always_inline s64 arch_atomic64_inc_return_release(atomic64_t *v) { @@ -2304,6 +2476,13 @@ arch_atomic64_fetch_inc_acquire(atomic64_t *v) #endif =20 #ifndef arch_atomic64_fetch_inc_release +/** + * arch_atomic64_fetch_inc_release - Atomic inc with release ordering + * @v: pointer of type atomic64_t + * + * Atomically increment @v using release ordering. + * Return old value. + */ static __always_inline s64 arch_atomic64_fetch_inc_release(atomic64_t *v) { @@ -2436,6 +2615,13 @@ arch_atomic64_dec_return_acquire(atomic64_t *v) #endif =20 #ifndef arch_atomic64_dec_return_release +/** + * arch_atomic64_dec_return_release - Atomic dec with release ordering + * @v: pointer of type atomic64_t + * + * Atomically decrement @v using release ordering. + * Return new value. + */ static __always_inline s64 arch_atomic64_dec_return_release(atomic64_t *v) { @@ -2552,6 +2738,13 @@ arch_atomic64_fetch_dec_acquire(atomic64_t *v) #endif =20 #ifndef arch_atomic64_fetch_dec_release +/** + * arch_atomic64_fetch_dec_release - Atomic dec with release ordering + * @v: pointer of type atomic64_t + * + * Atomically decrement @v using release ordering. 
+ * Return old value. + */ static __always_inline s64 arch_atomic64_fetch_dec_release(atomic64_t *v) { @@ -2602,6 +2795,14 @@ arch_atomic64_fetch_and_acquire(s64 i, atomic64_t *v) #endif =20 #ifndef arch_atomic64_fetch_and_release +/** + * arch_atomic64_fetch_and_release - Atomic and with release ordering + * @i: value to AND + * @v: pointer of type atomic64_t + * + * Atomically AND @i with @v using release ordering. + * Return old value. + */ static __always_inline s64 arch_atomic64_fetch_and_release(s64 i, atomic64_t *v) { @@ -2740,6 +2941,14 @@ arch_atomic64_fetch_andnot_acquire(s64 i, atomic64_t= *v) #endif =20 #ifndef arch_atomic64_fetch_andnot_release +/** + * arch_atomic64_fetch_andnot_release - Atomic andnot with release ordering + * @i: value to complement then AND + * @v: pointer of type atomic64_t + * + * Atomically complement then AND @i with @v using release ordering. + * Return old value. + */ static __always_inline s64 arch_atomic64_fetch_andnot_release(s64 i, atomic64_t *v) { @@ -2790,6 +2999,14 @@ arch_atomic64_fetch_or_acquire(s64 i, atomic64_t *v) #endif =20 #ifndef arch_atomic64_fetch_or_release +/** + * arch_atomic64_fetch_or_release - Atomic or with release ordering + * @i: value to OR + * @v: pointer of type atomic64_t + * + * Atomically OR @i with @v using release ordering. + * Return old value. + */ static __always_inline s64 arch_atomic64_fetch_or_release(s64 i, atomic64_t *v) { @@ -2840,6 +3057,14 @@ arch_atomic64_fetch_xor_acquire(s64 i, atomic64_t *v) #endif =20 #ifndef arch_atomic64_fetch_xor_release +/** + * arch_atomic64_fetch_xor_release - Atomic xor with release ordering + * @i: value to XOR + * @v: pointer of type atomic64_t + * + * Atomically XOR @i with @v using release ordering. + * Return old value. 
+ */ static __always_inline s64 arch_atomic64_fetch_xor_release(s64 i, atomic64_t *v) { @@ -2890,6 +3115,14 @@ arch_atomic64_xchg_acquire(atomic64_t *v, s64 i) #endif #ifndef arch_atomic64_xchg_release +/** + * arch_atomic64_xchg_release - Atomic xchg with release ordering + * @v: pointer of type atomic64_t + * @i: value to exchange + * + * Atomically exchange @i with @v using release ordering. + * Return old value. + */ static __always_inline s64 arch_atomic64_xchg_release(atomic64_t *v, s64 i) { @@ -2944,6 +3177,18 @@ arch_atomic64_cmpxchg_acquire(atomic64_t *v, s64 old, s64 new) #endif #ifndef arch_atomic64_cmpxchg_release +/** + * arch_atomic64_cmpxchg_release - Atomic cmpxchg with release ordering + * @v: pointer of type atomic64_t + * @old: desired old value to match + * @new: new value to put in + * + * Atomically compares @old to *@v, and if equal, + * stores @new to *@v, providing release ordering. + * Returns the old value of *@v regardless of the result of + * the comparison. Therefore, if the return value is not + * equal to @old, the cmpxchg operation failed. + */ static __always_inline s64 arch_atomic64_cmpxchg_release(atomic64_t *v, s64 old, s64 new) { @@ -3093,6 +3338,18 @@ arch_atomic64_try_cmpxchg_acquire(atomic64_t *v, s64 *old, s64 new) #endif #ifndef arch_atomic64_try_cmpxchg_release +/** + * arch_atomic64_try_cmpxchg_release - Atomic try_cmpxchg with release ordering + * @v: pointer of type atomic64_t + * @old: desired old value to match + * @new: new value to put in + * + * Atomically compares @old to *@v, and if equal, + * stores @new to *@v, providing release ordering. + * Returns @true if the cmpxchg operation succeeded, + * and @false otherwise. Either way, stores the old + * value of *@v to *@old.
+ */ static __always_inline bool arch_atomic64_try_cmpxchg_release(atomic64_t *v, s64 *old, s64 new) { @@ -3267,6 +3524,15 @@ arch_atomic64_add_negative_acquire(s64 i, atomic64_t= *v) #endif =20 #ifndef arch_atomic64_add_negative_release +/** + * arch_atomic64_add_negative_release - Atomic add_negative with release o= rdering + * @i: value to add + * @v: pointer of type atomic64_t + * + * Atomically add @i with @v using release ordering. + * Return @true if the result is negative, or @false when + * the result is greater than or equal to zero. + */ static __always_inline bool arch_atomic64_add_negative_release(s64 i, atomic64_t *v) { @@ -3426,4 +3692,4 @@ arch_atomic64_dec_if_positive(atomic64_t *v) #endif =20 #endif /* _LINUX_ATOMIC_FALLBACK_H */ -// a7944792460cf5adb72d49025850800d2cd178be +// 2caf9e8360f71f3841789431533b32b620a12c1e diff --git a/scripts/atomic/fallbacks/release b/scripts/atomic/fallbacks/re= lease index b46feb56d69c..bce3a1cbd497 100755 --- a/scripts/atomic/fallbacks/release +++ b/scripts/atomic/fallbacks/release @@ -1,3 +1,5 @@ +acqrel=3Drelease +. 
${ATOMICDIR}/acqrel.sh cat <
From: "Paul E.
McKenney" To: linux-kernel@vger.kernel.org Cc: x86@kernel.org, akiyks@gmail.com, linux-doc@vger.kernel.org, kernel-team@meta.com, "Paul E. McKenney" , Will Deacon , Peter Zijlstra , Boqun Feng , Mark Rutland Subject: [PATCH locking/atomic 16/19] locking/atomic: Add kernel-doc header for arch_${atomic}_${pfx}${name}${sfx} Date: Wed, 10 May 2023 11:17:14 -0700 Message-Id: <20230510181717.2200934-16-paulmck@kernel.org> X-Mailer: git-send-email 2.40.1 In-Reply-To: <19135936-06d7-4705-8bc8-bb31c2a478ca@paulmck-laptop> References: <19135936-06d7-4705-8bc8-bb31c2a478ca@paulmck-laptop> MIME-Version: 1.0 Content-Transfer-Encoding: quoted-printable Precedence: bulk List-ID: X-Mailing-List: linux-kernel@vger.kernel.org Content-Type: text/plain; charset="utf-8" Add kernel-doc header template for arch_${atomic}_${pfx}${name}${sfx} function family with the help of my good friend awk, as encapsulated in acqrel.sh. Signed-off-by: Paul E. McKenney Cc: Will Deacon Cc: Peter Zijlstra Cc: Boqun Feng Cc: Mark Rutland --- include/linux/atomic/atomic-arch-fallback.h | 268 +++++++++++++++++++- scripts/atomic/fallbacks/fence | 2 + 2 files changed, 269 insertions(+), 1 deletion(-) diff --git a/include/linux/atomic/atomic-arch-fallback.h b/include/linux/at= omic/atomic-arch-fallback.h index ec6821b4bbc1..41aa94f0aacd 100644 --- a/include/linux/atomic/atomic-arch-fallback.h +++ b/include/linux/atomic/atomic-arch-fallback.h @@ -329,6 +329,14 @@ arch_atomic_add_return_release(int i, atomic_t *v) #endif =20 #ifndef arch_atomic_add_return +/** + * arch_atomic_add_return - Atomic add with full ordering + * @i: value to add + * @v: pointer of type atomic_t + * + * Atomically add @i to @v using full ordering. + * Return new value. 
+ */ static __always_inline int arch_atomic_add_return(int i, atomic_t *v) { @@ -387,6 +395,14 @@ arch_atomic_fetch_add_release(int i, atomic_t *v) #endif =20 #ifndef arch_atomic_fetch_add +/** + * arch_atomic_fetch_add - Atomic add with full ordering + * @i: value to add + * @v: pointer of type atomic_t + * + * Atomically add @i to @v using full ordering. + * Return old value. + */ static __always_inline int arch_atomic_fetch_add(int i, atomic_t *v) { @@ -445,6 +461,14 @@ arch_atomic_sub_return_release(int i, atomic_t *v) #endif =20 #ifndef arch_atomic_sub_return +/** + * arch_atomic_sub_return - Atomic sub with full ordering + * @i: value to subtract + * @v: pointer of type atomic_t + * + * Atomically subtract @i from @v using full ordering. + * Return new value. + */ static __always_inline int arch_atomic_sub_return(int i, atomic_t *v) { @@ -503,6 +527,14 @@ arch_atomic_fetch_sub_release(int i, atomic_t *v) #endif =20 #ifndef arch_atomic_fetch_sub +/** + * arch_atomic_fetch_sub - Atomic sub with full ordering + * @i: value to subtract + * @v: pointer of type atomic_t + * + * Atomically subtract @i from @v using full ordering. + * Return old value. + */ static __always_inline int arch_atomic_fetch_sub(int i, atomic_t *v) { @@ -642,6 +674,13 @@ arch_atomic_inc_return_release(atomic_t *v) #endif =20 #ifndef arch_atomic_inc_return +/** + * arch_atomic_inc_return - Atomic inc with full ordering + * @v: pointer of type atomic_t + * + * Atomically increment @v using full ordering. + * Return new value. + */ static __always_inline int arch_atomic_inc_return(atomic_t *v) { @@ -765,6 +804,13 @@ arch_atomic_fetch_inc_release(atomic_t *v) #endif =20 #ifndef arch_atomic_fetch_inc +/** + * arch_atomic_fetch_inc - Atomic inc with full ordering + * @v: pointer of type atomic_t + * + * Atomically increment @v using full ordering. + * Return old value. 
+ */ static __always_inline int arch_atomic_fetch_inc(atomic_t *v) { @@ -904,6 +950,13 @@ arch_atomic_dec_return_release(atomic_t *v) #endif =20 #ifndef arch_atomic_dec_return +/** + * arch_atomic_dec_return - Atomic dec with full ordering + * @v: pointer of type atomic_t + * + * Atomically decrement @v using full ordering. + * Return new value. + */ static __always_inline int arch_atomic_dec_return(atomic_t *v) { @@ -1027,6 +1080,13 @@ arch_atomic_fetch_dec_release(atomic_t *v) #endif =20 #ifndef arch_atomic_fetch_dec +/** + * arch_atomic_fetch_dec - Atomic dec with full ordering + * @v: pointer of type atomic_t + * + * Atomically decrement @v using full ordering. + * Return old value. + */ static __always_inline int arch_atomic_fetch_dec(atomic_t *v) { @@ -1085,6 +1145,14 @@ arch_atomic_fetch_and_release(int i, atomic_t *v) #endif =20 #ifndef arch_atomic_fetch_and +/** + * arch_atomic_fetch_and - Atomic and with full ordering + * @i: value to AND + * @v: pointer of type atomic_t + * + * Atomically AND @i with @v using full ordering. + * Return old value. + */ static __always_inline int arch_atomic_fetch_and(int i, atomic_t *v) { @@ -1231,6 +1299,14 @@ arch_atomic_fetch_andnot_release(int i, atomic_t *v) #endif =20 #ifndef arch_atomic_fetch_andnot +/** + * arch_atomic_fetch_andnot - Atomic andnot with full ordering + * @i: value to complement then AND + * @v: pointer of type atomic_t + * + * Atomically complement then AND @i with @v using full ordering. + * Return old value. + */ static __always_inline int arch_atomic_fetch_andnot(int i, atomic_t *v) { @@ -1289,6 +1365,14 @@ arch_atomic_fetch_or_release(int i, atomic_t *v) #endif =20 #ifndef arch_atomic_fetch_or +/** + * arch_atomic_fetch_or - Atomic or with full ordering + * @i: value to OR + * @v: pointer of type atomic_t + * + * Atomically OR @i with @v using full ordering. + * Return old value. 
+ */
 static __always_inline int
 arch_atomic_fetch_or(int i, atomic_t *v)
 {
@@ -1347,6 +1431,14 @@ arch_atomic_fetch_xor_release(int i, atomic_t *v)
 #endif
 
 #ifndef arch_atomic_fetch_xor
+/**
+ * arch_atomic_fetch_xor - Atomic xor with full ordering
+ * @i: value to XOR
+ * @v: pointer of type atomic_t
+ *
+ * Atomically XOR @i with @v using full ordering.
+ * Return old value.
+ */
 static __always_inline int
 arch_atomic_fetch_xor(int i, atomic_t *v)
 {
@@ -1405,6 +1497,14 @@ arch_atomic_xchg_release(atomic_t *v, int i)
 #endif
 
 #ifndef arch_atomic_xchg
+/**
+ * arch_atomic_xchg - Atomic xchg with full ordering
+ * @v: pointer of type atomic_t
+ * @i: value to exchange
+ *
+ * Atomically exchange @i with @v using full ordering.
+ * Return old value.
+ */
 static __always_inline int
 arch_atomic_xchg(atomic_t *v, int i)
 {
@@ -1471,6 +1571,18 @@ arch_atomic_cmpxchg_release(atomic_t *v, int old, int new)
 #endif
 
 #ifndef arch_atomic_cmpxchg
+/**
+ * arch_atomic_cmpxchg - Atomic cmpxchg with full ordering
+ * @v: pointer of type atomic_t
+ * @old: desired old value to match
+ * @new: new value to put in
+ *
+ * Atomically compares @old to *@v, and if equal,
+ * stores @new to *@v, providing full ordering.
+ * Returns the old value of *@v regardless of the result of
+ * the comparison.  Therefore, if the return value is not
+ * equal to @old, the cmpxchg operation failed.
+ */
 static __always_inline int
 arch_atomic_cmpxchg(atomic_t *v, int old, int new)
 {
@@ -1632,6 +1744,18 @@ arch_atomic_try_cmpxchg_release(atomic_t *v, int *old, int new)
 #endif
 
 #ifndef arch_atomic_try_cmpxchg
+/**
+ * arch_atomic_try_cmpxchg - Atomic try_cmpxchg with full ordering
+ * @v: pointer of type atomic_t
+ * @old: desired old value to match
+ * @new: new value to put in
+ *
+ * Atomically compares @old to *@v, and if equal,
+ * stores @new to *@v, providing full ordering.
+ * Returns @true if the cmpxchg operation succeeded,
+ * and false otherwise.  Either way, stores the old
+ * value of *@v to *@old.
+ */
 static __always_inline bool
 arch_atomic_try_cmpxchg(atomic_t *v, int *old, int new)
 {
@@ -1815,6 +1939,15 @@ arch_atomic_add_negative_release(int i, atomic_t *v)
 #endif
 
 #ifndef arch_atomic_add_negative
+/**
+ * arch_atomic_add_negative - Atomic add_negative with full ordering
+ * @i: value to add
+ * @v: pointer of type atomic_t
+ *
+ * Atomically add @i with @v using full ordering.
+ * Return @true if the result is negative, or @false when
+ * the result is greater than or equal to zero.
+ */
 static __always_inline bool
 arch_atomic_add_negative(int i, atomic_t *v)
 {
@@ -2057,6 +2190,14 @@ arch_atomic64_add_return_release(s64 i, atomic64_t *v)
 #endif
 
 #ifndef arch_atomic64_add_return
+/**
+ * arch_atomic64_add_return - Atomic add with full ordering
+ * @i: value to add
+ * @v: pointer of type atomic64_t
+ *
+ * Atomically add @i to @v using full ordering.
+ * Return new value.
+ */
 static __always_inline s64
 arch_atomic64_add_return(s64 i, atomic64_t *v)
 {
@@ -2115,6 +2256,14 @@ arch_atomic64_fetch_add_release(s64 i, atomic64_t *v)
 #endif
 
 #ifndef arch_atomic64_fetch_add
+/**
+ * arch_atomic64_fetch_add - Atomic add with full ordering
+ * @i: value to add
+ * @v: pointer of type atomic64_t
+ *
+ * Atomically add @i to @v using full ordering.
+ * Return old value.
+ */
 static __always_inline s64
 arch_atomic64_fetch_add(s64 i, atomic64_t *v)
 {
@@ -2173,6 +2322,14 @@ arch_atomic64_sub_return_release(s64 i, atomic64_t *v)
 #endif
 
 #ifndef arch_atomic64_sub_return
+/**
+ * arch_atomic64_sub_return - Atomic sub with full ordering
+ * @i: value to subtract
+ * @v: pointer of type atomic64_t
+ *
+ * Atomically subtract @i from @v using full ordering.
+ * Return new value.
+ */ static __always_inline s64 arch_atomic64_sub_return(s64 i, atomic64_t *v) { @@ -2231,6 +2388,14 @@ arch_atomic64_fetch_sub_release(s64 i, atomic64_t *v) #endif =20 #ifndef arch_atomic64_fetch_sub +/** + * arch_atomic64_fetch_sub - Atomic sub with full ordering + * @i: value to subtract + * @v: pointer of type atomic64_t + * + * Atomically subtract @i from @v using full ordering. + * Return old value. + */ static __always_inline s64 arch_atomic64_fetch_sub(s64 i, atomic64_t *v) { @@ -2370,6 +2535,13 @@ arch_atomic64_inc_return_release(atomic64_t *v) #endif =20 #ifndef arch_atomic64_inc_return +/** + * arch_atomic64_inc_return - Atomic inc with full ordering + * @v: pointer of type atomic64_t + * + * Atomically increment @v using full ordering. + * Return new value. + */ static __always_inline s64 arch_atomic64_inc_return(atomic64_t *v) { @@ -2493,6 +2665,13 @@ arch_atomic64_fetch_inc_release(atomic64_t *v) #endif =20 #ifndef arch_atomic64_fetch_inc +/** + * arch_atomic64_fetch_inc - Atomic inc with full ordering + * @v: pointer of type atomic64_t + * + * Atomically increment @v using full ordering. + * Return old value. + */ static __always_inline s64 arch_atomic64_fetch_inc(atomic64_t *v) { @@ -2632,6 +2811,13 @@ arch_atomic64_dec_return_release(atomic64_t *v) #endif =20 #ifndef arch_atomic64_dec_return +/** + * arch_atomic64_dec_return - Atomic dec with full ordering + * @v: pointer of type atomic64_t + * + * Atomically decrement @v using full ordering. + * Return new value. + */ static __always_inline s64 arch_atomic64_dec_return(atomic64_t *v) { @@ -2755,6 +2941,13 @@ arch_atomic64_fetch_dec_release(atomic64_t *v) #endif =20 #ifndef arch_atomic64_fetch_dec +/** + * arch_atomic64_fetch_dec - Atomic dec with full ordering + * @v: pointer of type atomic64_t + * + * Atomically decrement @v using full ordering. + * Return old value. 
+ */ static __always_inline s64 arch_atomic64_fetch_dec(atomic64_t *v) { @@ -2813,6 +3006,14 @@ arch_atomic64_fetch_and_release(s64 i, atomic64_t *v) #endif =20 #ifndef arch_atomic64_fetch_and +/** + * arch_atomic64_fetch_and - Atomic and with full ordering + * @i: value to AND + * @v: pointer of type atomic64_t + * + * Atomically AND @i with @v using full ordering. + * Return old value. + */ static __always_inline s64 arch_atomic64_fetch_and(s64 i, atomic64_t *v) { @@ -2959,6 +3160,14 @@ arch_atomic64_fetch_andnot_release(s64 i, atomic64_t= *v) #endif =20 #ifndef arch_atomic64_fetch_andnot +/** + * arch_atomic64_fetch_andnot - Atomic andnot with full ordering + * @i: value to complement then AND + * @v: pointer of type atomic64_t + * + * Atomically complement then AND @i with @v using full ordering. + * Return old value. + */ static __always_inline s64 arch_atomic64_fetch_andnot(s64 i, atomic64_t *v) { @@ -3017,6 +3226,14 @@ arch_atomic64_fetch_or_release(s64 i, atomic64_t *v) #endif =20 #ifndef arch_atomic64_fetch_or +/** + * arch_atomic64_fetch_or - Atomic or with full ordering + * @i: value to OR + * @v: pointer of type atomic64_t + * + * Atomically OR @i with @v using full ordering. + * Return old value. + */ static __always_inline s64 arch_atomic64_fetch_or(s64 i, atomic64_t *v) { @@ -3075,6 +3292,14 @@ arch_atomic64_fetch_xor_release(s64 i, atomic64_t *v) #endif =20 #ifndef arch_atomic64_fetch_xor +/** + * arch_atomic64_fetch_xor - Atomic xor with full ordering + * @i: value to XOR + * @v: pointer of type atomic64_t + * + * Atomically XOR @i with @v using full ordering. + * Return old value. + */ static __always_inline s64 arch_atomic64_fetch_xor(s64 i, atomic64_t *v) { @@ -3133,6 +3358,14 @@ arch_atomic64_xchg_release(atomic64_t *v, s64 i) #endif =20 #ifndef arch_atomic64_xchg +/** + * arch_atomic64_xchg - Atomic xchg with full ordering + * @v: pointer of type atomic64_t + * @i: value to exchange + * + * Atomically exchange @i with @v using full ordering. 
+ * Return old value.
+ */
 static __always_inline s64
 arch_atomic64_xchg(atomic64_t *v, s64 i)
 {
@@ -3199,6 +3432,18 @@ arch_atomic64_cmpxchg_release(atomic64_t *v, s64 old, s64 new)
 #endif
 
 #ifndef arch_atomic64_cmpxchg
+/**
+ * arch_atomic64_cmpxchg - Atomic cmpxchg with full ordering
+ * @v: pointer of type atomic64_t
+ * @old: desired old value to match
+ * @new: new value to put in
+ *
+ * Atomically compares @old to *@v, and if equal,
+ * stores @new to *@v, providing full ordering.
+ * Returns the old value of *@v regardless of the result of
+ * the comparison.  Therefore, if the return value is not
+ * equal to @old, the cmpxchg operation failed.
+ */
 static __always_inline s64
 arch_atomic64_cmpxchg(atomic64_t *v, s64 old, s64 new)
 {
@@ -3360,6 +3605,18 @@ arch_atomic64_try_cmpxchg_release(atomic64_t *v, s64 *old, s64 new)
 #endif
 
 #ifndef arch_atomic64_try_cmpxchg
+/**
+ * arch_atomic64_try_cmpxchg - Atomic try_cmpxchg with full ordering
+ * @v: pointer of type atomic64_t
+ * @old: desired old value to match
+ * @new: new value to put in
+ *
+ * Atomically compares @old to *@v, and if equal,
+ * stores @new to *@v, providing full ordering.
+ * Returns @true if the cmpxchg operation succeeded,
+ * and false otherwise.  Either way, stores the old
+ * value of *@v to *@old.
+ */
 static __always_inline bool
 arch_atomic64_try_cmpxchg(atomic64_t *v, s64 *old, s64 new)
 {
@@ -3543,6 +3800,15 @@ arch_atomic64_add_negative_release(s64 i, atomic64_t *v)
 #endif
 
 #ifndef arch_atomic64_add_negative
+/**
+ * arch_atomic64_add_negative - Atomic add_negative with full ordering
+ * @i: value to add
+ * @v: pointer of type atomic64_t
+ *
+ * Atomically add @i with @v using full ordering.
+ * Return @true if the result is negative, or @false when
+ * the result is greater than or equal to zero.
+ */
 static __always_inline bool
 arch_atomic64_add_negative(s64 i, atomic64_t *v)
 {
@@ -3692,4 +3958,4 @@ arch_atomic64_dec_if_positive(atomic64_t *v)
 #endif
 
 #endif /* _LINUX_ATOMIC_FALLBACK_H */
-// 2caf9e8360f71f3841789431533b32b620a12c1e
+// 7c2c97cd48cf9c672efc44b9fed5a37b8970dde4
diff --git a/scripts/atomic/fallbacks/fence b/scripts/atomic/fallbacks/fence
index 07757d8e338e..975855dfba25 100755
--- a/scripts/atomic/fallbacks/fence
+++ b/scripts/atomic/fallbacks/fence
@@ -1,3 +1,5 @@
+acqrel=full
+. ${ATOMICDIR}/acqrel.sh
 cat <<EOF

From nobody Tue Feb 10 12:39:33 2026
From: "Paul E. McKenney"
To: linux-kernel@vger.kernel.org
Cc: x86@kernel.org, akiyks@gmail.com, linux-doc@vger.kernel.org, kernel-team@meta.com, "Paul E. McKenney", Will Deacon, Peter Zijlstra, Boqun Feng, Mark Rutland, Thomas Gleixner, Ingo Molnar, Borislav Petkov, Dave Hansen, "H. Peter Anvin"
Subject: [PATCH locking/atomic 17/19] x86/atomic.h: Remove duplicate kernel-doc headers
Date: Wed, 10 May 2023 11:17:15 -0700
Message-Id: <20230510181717.2200934-17-paulmck@kernel.org>
In-Reply-To: <19135936-06d7-4705-8bc8-bb31c2a478ca@paulmck-laptop>
References: <19135936-06d7-4705-8bc8-bb31c2a478ca@paulmck-laptop>

Scripting the kernel-doc headers resulted in a few duplicates.  Remove
the duplicates from the x86-specific files.

Reported-by: Akira Yokosawa
Signed-off-by: Paul E. McKenney
Cc: Will Deacon
Cc: Peter Zijlstra
Cc: Boqun Feng
Cc: Mark Rutland
Cc: Thomas Gleixner
Cc: Ingo Molnar
Cc: Borislav Petkov
Cc: Dave Hansen
Cc: "H.
Peter Anvin" Cc: --- arch/x86/include/asm/atomic.h | 60 ----------------------------------- 1 file changed, 60 deletions(-) diff --git a/arch/x86/include/asm/atomic.h b/arch/x86/include/asm/atomic.h index 5e754e895767..5df979d65fb5 100644 --- a/arch/x86/include/asm/atomic.h +++ b/arch/x86/include/asm/atomic.h @@ -69,27 +69,12 @@ static __always_inline void arch_atomic_sub(int i, atom= ic_t *v) : "ir" (i) : "memory"); } =20 -/** - * arch_atomic_sub_and_test - subtract value from variable and test result - * @i: integer value to subtract - * @v: pointer of type atomic_t - * - * Atomically subtracts @i from @v and returns - * true if the result is zero, or false for all - * other cases. - */ static __always_inline bool arch_atomic_sub_and_test(int i, atomic_t *v) { return GEN_BINARY_RMWcc(LOCK_PREFIX "subl", v->counter, e, "er", i); } #define arch_atomic_sub_and_test arch_atomic_sub_and_test =20 -/** - * arch_atomic_inc - increment atomic variable - * @v: pointer of type atomic_t - * - * Atomically increments @v by 1. - */ static __always_inline void arch_atomic_inc(atomic_t *v) { asm volatile(LOCK_PREFIX "incl %0" @@ -97,12 +82,6 @@ static __always_inline void arch_atomic_inc(atomic_t *v) } #define arch_atomic_inc arch_atomic_inc =20 -/** - * arch_atomic_dec - decrement atomic variable - * @v: pointer of type atomic_t - * - * Atomically decrements @v by 1. - */ static __always_inline void arch_atomic_dec(atomic_t *v) { asm volatile(LOCK_PREFIX "decl %0" @@ -110,69 +89,30 @@ static __always_inline void arch_atomic_dec(atomic_t *= v) } #define arch_atomic_dec arch_atomic_dec =20 -/** - * arch_atomic_dec_and_test - decrement and test - * @v: pointer of type atomic_t - * - * Atomically decrements @v by 1 and - * returns true if the result is 0, or false for all other - * cases. 
- */ static __always_inline bool arch_atomic_dec_and_test(atomic_t *v) { return GEN_UNARY_RMWcc(LOCK_PREFIX "decl", v->counter, e); } #define arch_atomic_dec_and_test arch_atomic_dec_and_test =20 -/** - * arch_atomic_inc_and_test - increment and test - * @v: pointer of type atomic_t - * - * Atomically increments @v by 1 - * and returns true if the result is zero, or false for all - * other cases. - */ static __always_inline bool arch_atomic_inc_and_test(atomic_t *v) { return GEN_UNARY_RMWcc(LOCK_PREFIX "incl", v->counter, e); } #define arch_atomic_inc_and_test arch_atomic_inc_and_test =20 -/** - * arch_atomic_add_negative - add and test if negative - * @i: integer value to add - * @v: pointer of type atomic_t - * - * Atomically adds @i to @v and returns true - * if the result is negative, or false when - * result is greater than or equal to zero. - */ static __always_inline bool arch_atomic_add_negative(int i, atomic_t *v) { return GEN_BINARY_RMWcc(LOCK_PREFIX "addl", v->counter, s, "er", i); } #define arch_atomic_add_negative arch_atomic_add_negative =20 -/** - * arch_atomic_add_return - add integer and return - * @i: integer value to add - * @v: pointer of type atomic_t - * - * Atomically adds @i to @v and returns @i + @v - */ static __always_inline int arch_atomic_add_return(int i, atomic_t *v) { return i + xadd(&v->counter, i); } #define arch_atomic_add_return arch_atomic_add_return =20 -/** - * arch_atomic_sub_return - subtract integer and return - * @v: pointer of type atomic_t - * @i: integer value to subtract - * - * Atomically subtracts @i from @v and returns @v - @i - */ static __always_inline int arch_atomic_sub_return(int i, atomic_t *v) { return arch_atomic_add_return(-i, v); --=20 2.40.1 From nobody Tue Feb 10 12:39:33 2026 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 
DCFB3C7EE22; Wed, 10 May 2023 18:19:42 +0000 (UTC)
From: "Paul E.
McKenney" , Will Deacon , Peter Zijlstra , Boqun Feng , Mark Rutland Subject: [PATCH locking/atomic 18/19] locking/atomic: Refrain from generating duplicate fallback kernel-doc Date: Wed, 10 May 2023 11:17:16 -0700 Message-Id: <20230510181717.2200934-18-paulmck@kernel.org> X-Mailer: git-send-email 2.40.1 In-Reply-To: <19135936-06d7-4705-8bc8-bb31c2a478ca@paulmck-laptop> References: <19135936-06d7-4705-8bc8-bb31c2a478ca@paulmck-laptop> MIME-Version: 1.0 Content-Transfer-Encoding: quoted-printable Precedence: bulk List-ID: X-Mailing-List: linux-kernel@vger.kernel.org Content-Type: text/plain; charset="utf-8" The gen-atomics.sh script currently generates 42 duplicate definitions: arch_atomic64_add_negative arch_atomic64_add_negative_acquire arch_atomic64_add_negative_release arch_atomic64_dec_return arch_atomic64_dec_return_acquire arch_atomic64_dec_return_release arch_atomic64_fetch_andnot arch_atomic64_fetch_andnot_acquire arch_atomic64_fetch_andnot_release arch_atomic64_fetch_dec arch_atomic64_fetch_dec_acquire arch_atomic64_fetch_dec_release arch_atomic64_fetch_inc arch_atomic64_fetch_inc_acquire arch_atomic64_fetch_inc_release arch_atomic64_inc_return arch_atomic64_inc_return_acquire arch_atomic64_inc_return_release arch_atomic64_try_cmpxchg arch_atomic64_try_cmpxchg_acquire arch_atomic64_try_cmpxchg_release arch_atomic_add_negative arch_atomic_add_negative_acquire arch_atomic_add_negative_release arch_atomic_dec_return arch_atomic_dec_return_acquire arch_atomic_dec_return_release arch_atomic_fetch_andnot arch_atomic_fetch_andnot_acquire arch_atomic_fetch_andnot_release arch_atomic_fetch_dec arch_atomic_fetch_dec_acquire arch_atomic_fetch_dec_release arch_atomic_fetch_inc arch_atomic_fetch_inc_acquire arch_atomic_fetch_inc_release arch_atomic_inc_return arch_atomic_inc_return_acquire arch_atomic_inc_return_release arch_atomic_try_cmpxchg arch_atomic_try_cmpxchg_acquire arch_atomic_try_cmpxchg_release These duplicates are presumably to handle different 
architectures generating hand-coded definitions for different subsets of the atomic operations. However, generating duplicate kernel-doc headers is undesirable. Therefore, generate only the first kernel-doc definition in a group of duplicates. A comment indicates the name of the function and the fallback script that generated it. Reported-by: Akira Yokosawa Signed-off-by: Paul E. McKenney Cc: Will Deacon Cc: Peter Zijlstra Cc: Boqun Feng Cc: Mark Rutland --- include/linux/atomic/atomic-arch-fallback.h | 386 +++---------------- scripts/atomic/chkdup.sh | 27 ++ scripts/atomic/fallbacks/acquire | 3 + scripts/atomic/fallbacks/add_negative | 5 + scripts/atomic/fallbacks/add_unless | 5 + scripts/atomic/fallbacks/andnot | 5 + scripts/atomic/fallbacks/dec | 5 + scripts/atomic/fallbacks/dec_and_test | 5 + scripts/atomic/fallbacks/dec_if_positive | 5 + scripts/atomic/fallbacks/dec_unless_positive | 5 + scripts/atomic/fallbacks/fence | 3 + scripts/atomic/fallbacks/fetch_add_unless | 5 + scripts/atomic/fallbacks/inc | 5 + scripts/atomic/fallbacks/inc_and_test | 5 + scripts/atomic/fallbacks/inc_not_zero | 5 + scripts/atomic/fallbacks/inc_unless_negative | 5 + scripts/atomic/fallbacks/read_acquire | 5 + scripts/atomic/fallbacks/release | 3 + scripts/atomic/fallbacks/set_release | 5 + scripts/atomic/fallbacks/sub_and_test | 5 + scripts/atomic/fallbacks/try_cmpxchg | 5 + scripts/atomic/gen-atomics.sh | 4 + 22 files changed, 163 insertions(+), 343 deletions(-) create mode 100644 scripts/atomic/chkdup.sh diff --git a/include/linux/atomic/atomic-arch-fallback.h b/include/linux/at= omic/atomic-arch-fallback.h index 41aa94f0aacd..2d56726f8662 100644 --- a/include/linux/atomic/atomic-arch-fallback.h +++ b/include/linux/atomic/atomic-arch-fallback.h @@ -639,13 +639,7 @@ arch_atomic_inc_return_relaxed(atomic_t *v) #else /* arch_atomic_inc_return_relaxed */ =20 #ifndef arch_atomic_inc_return_acquire -/** - * arch_atomic_inc_return_acquire - Atomic inc with acquire ordering - * @v: pointer 
of type atomic_t - * - * Atomically increment @v using acquire ordering. - * Return new value. - */ +// Fallback acquire omitting duplicate arch_atomic_inc_return_acquire() ke= rnel-doc header. static __always_inline int arch_atomic_inc_return_acquire(atomic_t *v) { @@ -657,13 +651,7 @@ arch_atomic_inc_return_acquire(atomic_t *v) #endif =20 #ifndef arch_atomic_inc_return_release -/** - * arch_atomic_inc_return_release - Atomic inc with release ordering - * @v: pointer of type atomic_t - * - * Atomically increment @v using release ordering. - * Return new value. - */ +// Fallback release omitting duplicate arch_atomic_inc_return_release() ke= rnel-doc header. static __always_inline int arch_atomic_inc_return_release(atomic_t *v) { @@ -674,13 +662,7 @@ arch_atomic_inc_return_release(atomic_t *v) #endif =20 #ifndef arch_atomic_inc_return -/** - * arch_atomic_inc_return - Atomic inc with full ordering - * @v: pointer of type atomic_t - * - * Atomically increment @v using full ordering. - * Return new value. - */ +// Fallback fence omitting duplicate arch_atomic_inc_return() kernel-doc h= eader. static __always_inline int arch_atomic_inc_return(atomic_t *v) { @@ -769,13 +751,7 @@ arch_atomic_fetch_inc_relaxed(atomic_t *v) #else /* arch_atomic_fetch_inc_relaxed */ =20 #ifndef arch_atomic_fetch_inc_acquire -/** - * arch_atomic_fetch_inc_acquire - Atomic inc with acquire ordering - * @v: pointer of type atomic_t - * - * Atomically increment @v using acquire ordering. - * Return old value. - */ +// Fallback acquire omitting duplicate arch_atomic_fetch_inc_acquire() ker= nel-doc header. static __always_inline int arch_atomic_fetch_inc_acquire(atomic_t *v) { @@ -787,13 +763,7 @@ arch_atomic_fetch_inc_acquire(atomic_t *v) #endif =20 #ifndef arch_atomic_fetch_inc_release -/** - * arch_atomic_fetch_inc_release - Atomic inc with release ordering - * @v: pointer of type atomic_t - * - * Atomically increment @v using release ordering. - * Return old value. 
- */ +// Fallback release omitting duplicate arch_atomic_fetch_inc_release() ker= nel-doc header. static __always_inline int arch_atomic_fetch_inc_release(atomic_t *v) { @@ -804,13 +774,7 @@ arch_atomic_fetch_inc_release(atomic_t *v) #endif =20 #ifndef arch_atomic_fetch_inc -/** - * arch_atomic_fetch_inc - Atomic inc with full ordering - * @v: pointer of type atomic_t - * - * Atomically increment @v using full ordering. - * Return old value. - */ +// Fallback fence omitting duplicate arch_atomic_fetch_inc() kernel-doc he= ader. static __always_inline int arch_atomic_fetch_inc(atomic_t *v) { @@ -915,13 +879,7 @@ arch_atomic_dec_return_relaxed(atomic_t *v) #else /* arch_atomic_dec_return_relaxed */ =20 #ifndef arch_atomic_dec_return_acquire -/** - * arch_atomic_dec_return_acquire - Atomic dec with acquire ordering - * @v: pointer of type atomic_t - * - * Atomically decrement @v using acquire ordering. - * Return new value. - */ +// Fallback acquire omitting duplicate arch_atomic_dec_return_acquire() ke= rnel-doc header. static __always_inline int arch_atomic_dec_return_acquire(atomic_t *v) { @@ -933,13 +891,7 @@ arch_atomic_dec_return_acquire(atomic_t *v) #endif =20 #ifndef arch_atomic_dec_return_release -/** - * arch_atomic_dec_return_release - Atomic dec with release ordering - * @v: pointer of type atomic_t - * - * Atomically decrement @v using release ordering. - * Return new value. - */ +// Fallback release omitting duplicate arch_atomic_dec_return_release() ke= rnel-doc header. static __always_inline int arch_atomic_dec_return_release(atomic_t *v) { @@ -950,13 +902,7 @@ arch_atomic_dec_return_release(atomic_t *v) #endif =20 #ifndef arch_atomic_dec_return -/** - * arch_atomic_dec_return - Atomic dec with full ordering - * @v: pointer of type atomic_t - * - * Atomically decrement @v using full ordering. - * Return new value. - */ +// Fallback fence omitting duplicate arch_atomic_dec_return() kernel-doc h= eader. 
static __always_inline int arch_atomic_dec_return(atomic_t *v) { @@ -1045,13 +991,7 @@ arch_atomic_fetch_dec_relaxed(atomic_t *v) #else /* arch_atomic_fetch_dec_relaxed */ =20 #ifndef arch_atomic_fetch_dec_acquire -/** - * arch_atomic_fetch_dec_acquire - Atomic dec with acquire ordering - * @v: pointer of type atomic_t - * - * Atomically decrement @v using acquire ordering. - * Return old value. - */ +// Fallback acquire omitting duplicate arch_atomic_fetch_dec_acquire() ker= nel-doc header. static __always_inline int arch_atomic_fetch_dec_acquire(atomic_t *v) { @@ -1063,13 +1003,7 @@ arch_atomic_fetch_dec_acquire(atomic_t *v) #endif =20 #ifndef arch_atomic_fetch_dec_release -/** - * arch_atomic_fetch_dec_release - Atomic dec with release ordering - * @v: pointer of type atomic_t - * - * Atomically decrement @v using release ordering. - * Return old value. - */ +// Fallback release omitting duplicate arch_atomic_fetch_dec_release() ker= nel-doc header. static __always_inline int arch_atomic_fetch_dec_release(atomic_t *v) { @@ -1080,13 +1014,7 @@ arch_atomic_fetch_dec_release(atomic_t *v) #endif =20 #ifndef arch_atomic_fetch_dec -/** - * arch_atomic_fetch_dec - Atomic dec with full ordering - * @v: pointer of type atomic_t - * - * Atomically decrement @v using full ordering. - * Return old value. - */ +// Fallback fence omitting duplicate arch_atomic_fetch_dec() kernel-doc he= ader. static __always_inline int arch_atomic_fetch_dec(atomic_t *v) { @@ -1262,14 +1190,7 @@ arch_atomic_fetch_andnot_relaxed(int i, atomic_t *v) #else /* arch_atomic_fetch_andnot_relaxed */ =20 #ifndef arch_atomic_fetch_andnot_acquire -/** - * arch_atomic_fetch_andnot_acquire - Atomic andnot with acquire ordering - * @i: value to complement then AND - * @v: pointer of type atomic_t - * - * Atomically complement then AND @i with @v using acquire ordering. - * Return old value. - */ +// Fallback acquire omitting duplicate arch_atomic_fetch_andnot_acquire() = kernel-doc header. 
 static __always_inline int
 arch_atomic_fetch_andnot_acquire(int i, atomic_t *v)
 {
@@ -1281,14 +1202,7 @@ arch_atomic_fetch_andnot_acquire(int i, atomic_t *v)
 #endif
 
 #ifndef arch_atomic_fetch_andnot_release
-/**
- * arch_atomic_fetch_andnot_release - Atomic andnot with release ordering
- * @i: value to complement then AND
- * @v: pointer of type atomic_t
- *
- * Atomically complement then AND @i with @v using release ordering.
- * Return old value.
- */
+// Fallback release omitting duplicate arch_atomic_fetch_andnot_release() kernel-doc header.
 static __always_inline int
 arch_atomic_fetch_andnot_release(int i, atomic_t *v)
 {
@@ -1299,14 +1213,7 @@ arch_atomic_fetch_andnot_release(int i, atomic_t *v)
 #endif
 
 #ifndef arch_atomic_fetch_andnot
-/**
- * arch_atomic_fetch_andnot - Atomic andnot with full ordering
- * @i: value to complement then AND
- * @v: pointer of type atomic_t
- *
- * Atomically complement then AND @i with @v using full ordering.
- * Return old value.
- */
+// Fallback fence omitting duplicate arch_atomic_fetch_andnot() kernel-doc header.
 static __always_inline int
 arch_atomic_fetch_andnot(int i, atomic_t *v)
 {
@@ -1699,18 +1606,7 @@ arch_atomic_try_cmpxchg_relaxed(atomic_t *v, int *old, int new)
 #else /* arch_atomic_try_cmpxchg_relaxed */
 
 #ifndef arch_atomic_try_cmpxchg_acquire
-/**
- * arch_atomic_try_cmpxchg_acquire - Atomic try_cmpxchg with acquire ordering
- * @v: pointer of type atomic_t
- * @old: desired old value to match
- * @new: new value to put in
- *
- * Atomically compares @new to *@v, and if equal,
- * stores @new to *@v, providing acquire ordering.
- * Returns @true if the cmpxchg operation succeeded,
- * and false otherwise.  Either way, stores the old
- * value of *@v to *@old.
- */
+// Fallback acquire omitting duplicate arch_atomic_try_cmpxchg_acquire() kernel-doc header.
 static __always_inline bool
 arch_atomic_try_cmpxchg_acquire(atomic_t *v, int *old, int new)
 {
@@ -1722,18 +1618,7 @@ arch_atomic_try_cmpxchg_acquire(atomic_t *v, int *old, int new)
 #endif
 
 #ifndef arch_atomic_try_cmpxchg_release
-/**
- * arch_atomic_try_cmpxchg_release - Atomic try_cmpxchg with release ordering
- * @v: pointer of type atomic_t
- * @old: desired old value to match
- * @new: new value to put in
- *
- * Atomically compares @new to *@v, and if equal,
- * stores @new to *@v, providing release ordering.
- * Returns @true if the cmpxchg operation succeeded,
- * and false otherwise.  Either way, stores the old
- * value of *@v to *@old.
- */
+// Fallback release omitting duplicate arch_atomic_try_cmpxchg_release() kernel-doc header.
 static __always_inline bool
 arch_atomic_try_cmpxchg_release(atomic_t *v, int *old, int new)
 {
@@ -1744,18 +1629,7 @@ arch_atomic_try_cmpxchg_release(atomic_t *v, int *old, int new)
 #endif
 
 #ifndef arch_atomic_try_cmpxchg
-/**
- * arch_atomic_try_cmpxchg - Atomic try_cmpxchg with full ordering
- * @v: pointer of type atomic_t
- * @old: desired old value to match
- * @new: new value to put in
- *
- * Atomically compares @new to *@v, and if equal,
- * stores @new to *@v, providing full ordering.
- * Returns @true if the cmpxchg operation succeeded,
- * and false otherwise.  Either way, stores the old
- * value of *@v to *@old.
- */
+// Fallback fence omitting duplicate arch_atomic_try_cmpxchg() kernel-doc header.
 static __always_inline bool
 arch_atomic_try_cmpxchg(atomic_t *v, int *old, int new)
 {
@@ -1900,15 +1774,7 @@ arch_atomic_add_negative_relaxed(int i, atomic_t *v)
 #else /* arch_atomic_add_negative_relaxed */
 
 #ifndef arch_atomic_add_negative_acquire
-/**
- * arch_atomic_add_negative_acquire - Atomic add_negative with acquire ordering
- * @i: value to add
- * @v: pointer of type atomic_t
- *
- * Atomically add @i with @v using acquire ordering.
- * Return @true if the result is negative, or @false when
- * the result is greater than or equal to zero.
- */
+// Fallback acquire omitting duplicate arch_atomic_add_negative_acquire() kernel-doc header.
 static __always_inline bool
 arch_atomic_add_negative_acquire(int i, atomic_t *v)
 {
@@ -1920,15 +1786,7 @@ arch_atomic_add_negative_acquire(int i, atomic_t *v)
 #endif
 
 #ifndef arch_atomic_add_negative_release
-/**
- * arch_atomic_add_negative_release - Atomic add_negative with release ordering
- * @i: value to add
- * @v: pointer of type atomic_t
- *
- * Atomically add @i with @v using release ordering.
- * Return @true if the result is negative, or @false when
- * the result is greater than or equal to zero.
- */
+// Fallback release omitting duplicate arch_atomic_add_negative_release() kernel-doc header.
 static __always_inline bool
 arch_atomic_add_negative_release(int i, atomic_t *v)
 {
@@ -1939,15 +1797,7 @@ arch_atomic_add_negative_release(int i, atomic_t *v)
 #endif
 
 #ifndef arch_atomic_add_negative
-/**
- * arch_atomic_add_negative - Atomic add_negative with full ordering
- * @i: value to add
- * @v: pointer of type atomic_t
- *
- * Atomically add @i with @v using full ordering.
- * Return @true if the result is negative, or @false when
- * the result is greater than or equal to zero.
- */
+// Fallback fence omitting duplicate arch_atomic_add_negative() kernel-doc header.
 static __always_inline bool
 arch_atomic_add_negative(int i, atomic_t *v)
 {
@@ -2500,13 +2350,7 @@ arch_atomic64_inc_return_relaxed(atomic64_t *v)
 #else /* arch_atomic64_inc_return_relaxed */
 
 #ifndef arch_atomic64_inc_return_acquire
-/**
- * arch_atomic64_inc_return_acquire - Atomic inc with acquire ordering
- * @v: pointer of type atomic64_t
- *
- * Atomically increment @v using acquire ordering.
- * Return new value.
- */
+// Fallback acquire omitting duplicate arch_atomic64_inc_return_acquire() kernel-doc header.
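The add_negative() headers above describe an add whose return value reports only the sign of the result. A minimal single-threaded model of that contract (hypothetical `model_add_negative()`, no atomicity or ordering implied):

```c
#include <stdbool.h>

/*
 * Illustrative, single-threaded model of add_negative() as described
 * in the headers above: add i to *v and return true iff the result is
 * negative, false when it is greater than or equal to zero.
 */
static bool model_add_negative(int i, int *v)
{
	*v += i;		/* the addition itself */
	return *v < 0;		/* sign of the *result*, not the old value */
}
```

Note that zero counts as non-negative, so a counter hitting exactly zero reports false.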
 static __always_inline s64
 arch_atomic64_inc_return_acquire(atomic64_t *v)
 {
@@ -2518,13 +2362,7 @@ arch_atomic64_inc_return_acquire(atomic64_t *v)
 #endif
 
 #ifndef arch_atomic64_inc_return_release
-/**
- * arch_atomic64_inc_return_release - Atomic inc with release ordering
- * @v: pointer of type atomic64_t
- *
- * Atomically increment @v using release ordering.
- * Return new value.
- */
+// Fallback release omitting duplicate arch_atomic64_inc_return_release() kernel-doc header.
 static __always_inline s64
 arch_atomic64_inc_return_release(atomic64_t *v)
 {
@@ -2535,13 +2373,7 @@ arch_atomic64_inc_return_release(atomic64_t *v)
 #endif
 
 #ifndef arch_atomic64_inc_return
-/**
- * arch_atomic64_inc_return - Atomic inc with full ordering
- * @v: pointer of type atomic64_t
- *
- * Atomically increment @v using full ordering.
- * Return new value.
- */
+// Fallback fence omitting duplicate arch_atomic64_inc_return() kernel-doc header.
 static __always_inline s64
 arch_atomic64_inc_return(atomic64_t *v)
 {
@@ -2630,13 +2462,7 @@ arch_atomic64_fetch_inc_relaxed(atomic64_t *v)
 #else /* arch_atomic64_fetch_inc_relaxed */
 
 #ifndef arch_atomic64_fetch_inc_acquire
-/**
- * arch_atomic64_fetch_inc_acquire - Atomic inc with acquire ordering
- * @v: pointer of type atomic64_t
- *
- * Atomically increment @v using acquire ordering.
- * Return old value.
- */
+// Fallback acquire omitting duplicate arch_atomic64_fetch_inc_acquire() kernel-doc header.
 static __always_inline s64
 arch_atomic64_fetch_inc_acquire(atomic64_t *v)
 {
@@ -2648,13 +2474,7 @@ arch_atomic64_fetch_inc_acquire(atomic64_t *v)
 #endif
 
 #ifndef arch_atomic64_fetch_inc_release
-/**
- * arch_atomic64_fetch_inc_release - Atomic inc with release ordering
- * @v: pointer of type atomic64_t
- *
- * Atomically increment @v using release ordering.
- * Return old value.
- */
+// Fallback release omitting duplicate arch_atomic64_fetch_inc_release() kernel-doc header.
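The two families of removed headers above differ only in their return convention: inc_return() yields the value *after* the increment, while fetch_inc() yields the value *before* it. A single-threaded sketch of that contrast (hypothetical names, not the kernel's atomic implementations):

```c
/*
 * Illustrative contrast between the fetch_ and _return families
 * documented above, modeled single-threaded on a plain long long.
 */
static long long model_fetch_inc(long long *v)
{
	long long old = *v;

	(*v)++;
	return old;		/* fetch_inc(): old value */
}

static long long model_inc_return(long long *v)
{
	(*v)++;
	return *v;		/* inc_return(): new value */
}
```

The same old-versus-new split applies to the dec_return()/fetch_dec() pairs that follow.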
 static __always_inline s64
 arch_atomic64_fetch_inc_release(atomic64_t *v)
 {
@@ -2665,13 +2485,7 @@ arch_atomic64_fetch_inc_release(atomic64_t *v)
 #endif
 
 #ifndef arch_atomic64_fetch_inc
-/**
- * arch_atomic64_fetch_inc - Atomic inc with full ordering
- * @v: pointer of type atomic64_t
- *
- * Atomically increment @v using full ordering.
- * Return old value.
- */
+// Fallback fence omitting duplicate arch_atomic64_fetch_inc() kernel-doc header.
 static __always_inline s64
 arch_atomic64_fetch_inc(atomic64_t *v)
 {
@@ -2776,13 +2590,7 @@ arch_atomic64_dec_return_relaxed(atomic64_t *v)
 #else /* arch_atomic64_dec_return_relaxed */
 
 #ifndef arch_atomic64_dec_return_acquire
-/**
- * arch_atomic64_dec_return_acquire - Atomic dec with acquire ordering
- * @v: pointer of type atomic64_t
- *
- * Atomically decrement @v using acquire ordering.
- * Return new value.
- */
+// Fallback acquire omitting duplicate arch_atomic64_dec_return_acquire() kernel-doc header.
 static __always_inline s64
 arch_atomic64_dec_return_acquire(atomic64_t *v)
 {
@@ -2794,13 +2602,7 @@ arch_atomic64_dec_return_acquire(atomic64_t *v)
 #endif
 
 #ifndef arch_atomic64_dec_return_release
-/**
- * arch_atomic64_dec_return_release - Atomic dec with release ordering
- * @v: pointer of type atomic64_t
- *
- * Atomically decrement @v using release ordering.
- * Return new value.
- */
+// Fallback release omitting duplicate arch_atomic64_dec_return_release() kernel-doc header.
 static __always_inline s64
 arch_atomic64_dec_return_release(atomic64_t *v)
 {
@@ -2811,13 +2613,7 @@ arch_atomic64_dec_return_release(atomic64_t *v)
 #endif
 
 #ifndef arch_atomic64_dec_return
-/**
- * arch_atomic64_dec_return - Atomic dec with full ordering
- * @v: pointer of type atomic64_t
- *
- * Atomically decrement @v using full ordering.
- * Return new value.
- */
+// Fallback fence omitting duplicate arch_atomic64_dec_return() kernel-doc header.
 static __always_inline s64
 arch_atomic64_dec_return(atomic64_t *v)
 {
@@ -2906,13 +2702,7 @@ arch_atomic64_fetch_dec_relaxed(atomic64_t *v)
 #else /* arch_atomic64_fetch_dec_relaxed */
 
 #ifndef arch_atomic64_fetch_dec_acquire
-/**
- * arch_atomic64_fetch_dec_acquire - Atomic dec with acquire ordering
- * @v: pointer of type atomic64_t
- *
- * Atomically decrement @v using acquire ordering.
- * Return old value.
- */
+// Fallback acquire omitting duplicate arch_atomic64_fetch_dec_acquire() kernel-doc header.
 static __always_inline s64
 arch_atomic64_fetch_dec_acquire(atomic64_t *v)
 {
@@ -2924,13 +2714,7 @@ arch_atomic64_fetch_dec_acquire(atomic64_t *v)
 #endif
 
 #ifndef arch_atomic64_fetch_dec_release
-/**
- * arch_atomic64_fetch_dec_release - Atomic dec with release ordering
- * @v: pointer of type atomic64_t
- *
- * Atomically decrement @v using release ordering.
- * Return old value.
- */
+// Fallback release omitting duplicate arch_atomic64_fetch_dec_release() kernel-doc header.
 static __always_inline s64
 arch_atomic64_fetch_dec_release(atomic64_t *v)
 {
@@ -2941,13 +2725,7 @@ arch_atomic64_fetch_dec_release(atomic64_t *v)
 #endif
 
 #ifndef arch_atomic64_fetch_dec
-/**
- * arch_atomic64_fetch_dec - Atomic dec with full ordering
- * @v: pointer of type atomic64_t
- *
- * Atomically decrement @v using full ordering.
- * Return old value.
- */
+// Fallback fence omitting duplicate arch_atomic64_fetch_dec() kernel-doc header.
 static __always_inline s64
 arch_atomic64_fetch_dec(atomic64_t *v)
 {
@@ -3123,14 +2901,7 @@ arch_atomic64_fetch_andnot_relaxed(s64 i, atomic64_t *v)
 #else /* arch_atomic64_fetch_andnot_relaxed */
 
 #ifndef arch_atomic64_fetch_andnot_acquire
-/**
- * arch_atomic64_fetch_andnot_acquire - Atomic andnot with acquire ordering
- * @i: value to complement then AND
- * @v: pointer of type atomic64_t
- *
- * Atomically complement then AND @i with @v using acquire ordering.
- * Return old value.
- */
+// Fallback acquire omitting duplicate arch_atomic64_fetch_andnot_acquire() kernel-doc header.
 static __always_inline s64
 arch_atomic64_fetch_andnot_acquire(s64 i, atomic64_t *v)
 {
@@ -3142,14 +2913,7 @@ arch_atomic64_fetch_andnot_acquire(s64 i, atomic64_t *v)
 #endif
 
 #ifndef arch_atomic64_fetch_andnot_release
-/**
- * arch_atomic64_fetch_andnot_release - Atomic andnot with release ordering
- * @i: value to complement then AND
- * @v: pointer of type atomic64_t
- *
- * Atomically complement then AND @i with @v using release ordering.
- * Return old value.
- */
+// Fallback release omitting duplicate arch_atomic64_fetch_andnot_release() kernel-doc header.
 static __always_inline s64
 arch_atomic64_fetch_andnot_release(s64 i, atomic64_t *v)
 {
@@ -3160,14 +2924,7 @@ arch_atomic64_fetch_andnot_release(s64 i, atomic64_t *v)
 #endif
 
 #ifndef arch_atomic64_fetch_andnot
-/**
- * arch_atomic64_fetch_andnot - Atomic andnot with full ordering
- * @i: value to complement then AND
- * @v: pointer of type atomic64_t
- *
- * Atomically complement then AND @i with @v using full ordering.
- * Return old value.
- */
+// Fallback fence omitting duplicate arch_atomic64_fetch_andnot() kernel-doc header.
 static __always_inline s64
 arch_atomic64_fetch_andnot(s64 i, atomic64_t *v)
 {
@@ -3560,18 +3317,7 @@ arch_atomic64_try_cmpxchg_relaxed(atomic64_t *v, s64 *old, s64 new)
 #else /* arch_atomic64_try_cmpxchg_relaxed */
 
 #ifndef arch_atomic64_try_cmpxchg_acquire
-/**
- * arch_atomic64_try_cmpxchg_acquire - Atomic try_cmpxchg with acquire ordering
- * @v: pointer of type atomic64_t
- * @old: desired old value to match
- * @new: new value to put in
- *
- * Atomically compares @new to *@v, and if equal,
- * stores @new to *@v, providing acquire ordering.
- * Returns @true if the cmpxchg operation succeeded,
- * and false otherwise.  Either way, stores the old
- * value of *@v to *@old.
- */
+// Fallback acquire omitting duplicate arch_atomic64_try_cmpxchg_acquire() kernel-doc header.
 static __always_inline bool
 arch_atomic64_try_cmpxchg_acquire(atomic64_t *v, s64 *old, s64 new)
 {
@@ -3583,18 +3329,7 @@ arch_atomic64_try_cmpxchg_acquire(atomic64_t *v, s64 *old, s64 new)
 #endif
 
 #ifndef arch_atomic64_try_cmpxchg_release
-/**
- * arch_atomic64_try_cmpxchg_release - Atomic try_cmpxchg with release ordering
- * @v: pointer of type atomic64_t
- * @old: desired old value to match
- * @new: new value to put in
- *
- * Atomically compares @new to *@v, and if equal,
- * stores @new to *@v, providing release ordering.
- * Returns @true if the cmpxchg operation succeeded,
- * and false otherwise.  Either way, stores the old
- * value of *@v to *@old.
- */
+// Fallback release omitting duplicate arch_atomic64_try_cmpxchg_release() kernel-doc header.
 static __always_inline bool
 arch_atomic64_try_cmpxchg_release(atomic64_t *v, s64 *old, s64 new)
 {
@@ -3605,18 +3340,7 @@ arch_atomic64_try_cmpxchg_release(atomic64_t *v, s64 *old, s64 new)
 #endif
 
 #ifndef arch_atomic64_try_cmpxchg
-/**
- * arch_atomic64_try_cmpxchg - Atomic try_cmpxchg with full ordering
- * @v: pointer of type atomic64_t
- * @old: desired old value to match
- * @new: new value to put in
- *
- * Atomically compares @new to *@v, and if equal,
- * stores @new to *@v, providing full ordering.
- * Returns @true if the cmpxchg operation succeeded,
- * and false otherwise.  Either way, stores the old
- * value of *@v to *@old.
- */
+// Fallback fence omitting duplicate arch_atomic64_try_cmpxchg() kernel-doc header.
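Among the duplicate headers elided in this file are the fetch_andnot() ones, whose "complement then AND" wording is easy to misread. A single-threaded model of that operation (hypothetical `model_fetch_andnot()`, no atomicity or ordering implied):

```c
/*
 * Illustrative, single-threaded model of fetch_andnot() as described
 * in the removed headers: complement i, AND it into *v, and return
 * the old value of *v.  In effect, clear the bits of *v set in i.
 */
static unsigned int model_fetch_andnot(unsigned int i, unsigned int *v)
{
	unsigned int old = *v;

	*v = old & ~i;		/* clear the bits that are set in i */
	return old;		/* fetch_ variants return the old value */
}
```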
 static __always_inline bool
 arch_atomic64_try_cmpxchg(atomic64_t *v, s64 *old, s64 new)
 {
@@ -3761,15 +3485,7 @@ arch_atomic64_add_negative_relaxed(s64 i, atomic64_t *v)
 #else /* arch_atomic64_add_negative_relaxed */
 
 #ifndef arch_atomic64_add_negative_acquire
-/**
- * arch_atomic64_add_negative_acquire - Atomic add_negative with acquire ordering
- * @i: value to add
- * @v: pointer of type atomic64_t
- *
- * Atomically add @i with @v using acquire ordering.
- * Return @true if the result is negative, or @false when
- * the result is greater than or equal to zero.
- */
+// Fallback acquire omitting duplicate arch_atomic64_add_negative_acquire() kernel-doc header.
 static __always_inline bool
 arch_atomic64_add_negative_acquire(s64 i, atomic64_t *v)
 {
@@ -3781,15 +3497,7 @@ arch_atomic64_add_negative_acquire(s64 i, atomic64_t *v)
 #endif
 
 #ifndef arch_atomic64_add_negative_release
-/**
- * arch_atomic64_add_negative_release - Atomic add_negative with release ordering
- * @i: value to add
- * @v: pointer of type atomic64_t
- *
- * Atomically add @i with @v using release ordering.
- * Return @true if the result is negative, or @false when
- * the result is greater than or equal to zero.
- */
+// Fallback release omitting duplicate arch_atomic64_add_negative_release() kernel-doc header.
 static __always_inline bool
 arch_atomic64_add_negative_release(s64 i, atomic64_t *v)
 {
@@ -3800,15 +3508,7 @@ arch_atomic64_add_negative_release(s64 i, atomic64_t *v)
 #endif
 
 #ifndef arch_atomic64_add_negative
-/**
- * arch_atomic64_add_negative - Atomic add_negative with full ordering
- * @i: value to add
- * @v: pointer of type atomic64_t
- *
- * Atomically add @i with @v using full ordering.
- * Return @true if the result is negative, or @false when
- * the result is greater than or equal to zero.
- */
+// Fallback fence omitting duplicate arch_atomic64_add_negative() kernel-doc header.
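The _acquire/_release/_fence fallbacks that this header generates all follow one pattern: start from the _relaxed primitive and add the required ordering around it. As a rough analogue in portable C11 atomics (illustrative only; the kernel uses its own arch_*_relaxed() operations and barrier macros, and this sketch is not the kernel's code):

```c
#include <stdatomic.h>

/*
 * C11 analogue of building an _acquire fallback from a _relaxed
 * primitive: perform the operation with relaxed ordering, then
 * upgrade it with an acquire fence.
 */
static int fetch_add_acquire_fallback(atomic_int *v, int i)
{
	int old = atomic_fetch_add_explicit(v, i, memory_order_relaxed);

	atomic_thread_fence(memory_order_acquire);	/* upgrade ordering */
	return old;
}
```

A _release fallback inverts the shape (fence before the relaxed operation), and a full-ordering fallback brackets it with fences on both sides.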
 static __always_inline bool
 arch_atomic64_add_negative(s64 i, atomic64_t *v)
 {
@@ -3958,4 +3658,4 @@ arch_atomic64_dec_if_positive(atomic64_t *v)
 #endif
 
 #endif /* _LINUX_ATOMIC_FALLBACK_H */
-// 7c2c97cd48cf9c672efc44b9fed5a37b8970dde4
+// 9bf9febc5288ed9539d1b3cfbbc6e36743b74c3b
diff --git a/scripts/atomic/chkdup.sh b/scripts/atomic/chkdup.sh
new file mode 100644
index 000000000000..04bb4f5c5c34
--- /dev/null
+++ b/scripts/atomic/chkdup.sh
@@ -0,0 +1,24 @@
+#!/bin/sh
+# SPDX-License-Identifier: GPL-2.0
+#
+# Check to see if the specified atomic is already in use.  This is
+# done by keeping filenames in the temporary directory specified by the
+# environment variable T.
+#
+# Usage:
+#	chkdup.sh name fallback
+#
+# The "name" argument is the name of the function to be generated, and
+# the "fallback" argument is the name of the fallback script that is
+# doing the generation.
+#
+# If the function is a duplicate, output a comment saying so and
+# exit with non-zero (error) status.  Otherwise exit successfully.
+
+if test -f ${T}/${1}
+then
+	echo // Fallback ${2} omitting duplicate "${1}()" kernel-doc header.
+	exit 1
+fi
+touch ${T}/${1}
+exit 0
diff --git a/scripts/atomic/fallbacks/acquire b/scripts/atomic/fallbacks/acquire
index 08fc6c30a9ef..a349935ac7fe 100755
--- a/scripts/atomic/fallbacks/acquire
+++ b/scripts/atomic/fallbacks/acquire
@@ -1,5 +1,8 @@
+if /bin/sh ${ATOMICDIR}/chkdup.sh arch_${atomic}_${pfx}${name}${sfx}_acquire acquire
+then
 acqrel=acquire
 . ${ATOMICDIR}/acqrel.sh
+fi
 cat << EOF
 static __always_inline ${ret}
 arch_${atomic}_${pfx}${name}${sfx}_acquire(${params})
diff --git a/scripts/atomic/fallbacks/add_negative b/scripts/atomic/fallbacks/add_negative
index c032e8bec6e2..b105fdfe8fd1 100755
--- a/scripts/atomic/fallbacks/add_negative
+++ b/scripts/atomic/fallbacks/add_negative
@@ -1,3 +1,5 @@
+if /bin/sh ${ATOMICDIR}/chkdup.sh arch_${atomic}_add_negative${order} add_negative
+then
 cat << EOF

From: "Paul E. McKenney"
To: linux-kernel@vger.kernel.org
Cc: x86@kernel.org, akiyks@gmail.com, linux-doc@vger.kernel.org,
 kernel-team@meta.com, "Paul E. McKenney", Jonathan Corbet, Kees Cook,
 Will Deacon, Peter Zijlstra, Boqun Feng, Mark Rutland
Subject: [PATCH locking/atomic 19/19] docs: Add atomic operations to the driver basic API documentation
Date: Wed, 10 May 2023 11:17:17 -0700
Message-Id: <20230510181717.2200934-19-paulmck@kernel.org>
In-Reply-To: <19135936-06d7-4705-8bc8-bb31c2a478ca@paulmck-laptop>
References: <19135936-06d7-4705-8bc8-bb31c2a478ca@paulmck-laptop>

Add the include/linux/atomic/atomic-arch-fallback.h file to
driver-api/basics.rst in order to provide documentation for the
Linux kernel's atomic operations.

Signed-off-by: Paul E. McKenney
Cc: Jonathan Corbet
Cc: Kees Cook
Cc: Akira Yokosawa
Cc: Will Deacon
Cc: Peter Zijlstra
Cc: Boqun Feng
Cc: Mark Rutland
Reviewed-by: Kees Cook
---
 Documentation/driver-api/basics.rst | 3 +++
 1 file changed, 3 insertions(+)

diff --git a/Documentation/driver-api/basics.rst b/Documentation/driver-api/basics.rst
index 4b4d8e28d3be..0ae07f0d8601 100644
--- a/Documentation/driver-api/basics.rst
+++ b/Documentation/driver-api/basics.rst
@@ -87,6 +87,9 @@ Atomics
 .. kernel-doc:: arch/x86/include/asm/atomic.h
    :internal:
 
+.. kernel-doc:: include/linux/atomic/atomic-arch-fallback.h
+   :internal:
+
 Kernel objects manipulation
 ---------------------------
 
-- 
2.40.1