From nobody Sun Nov 24 12:28:09 2024
From: Jonas Oberhauser <jonas.oberhauser@huaweicloud.com>
To: paulmck@kernel.org
Cc: stern@rowland.harvard.edu, parri.andrea@gmail.com, will@kernel.org,
    peterz@infradead.org, boqun.feng@gmail.com, npiggin@gmail.com,
    dhowells@redhat.com, j.alglave@ucl.ac.uk, luc.maranget@inria.fr,
    akiyks@gmail.com, dlustig@nvidia.com, joel@joelfernandes.org,
    urezki@gmail.com, quic_neeraju@quicinc.com, frederic@kernel.org,
    linux-kernel@vger.kernel.org, lkmm@lists.linux.dev,
    hernan.poncedeleon@huaweicloud.com,
    Jonas Oberhauser <jonas.oberhauser@huaweicloud.com>
Subject: [PATCH v5] tools/memory-model: Distinguish between syntactic and semantic tags
Date: Tue, 5 Nov 2024 17:48:58 +0100
Message-Id: <20241105164858.218336-1-jonas.oberhauser@huaweicloud.com>
X-Mailer: git-send-email 2.34.1

Not all annotated accesses provide the semantics their syntactic tags
would imply. For example, an 'acquire tag on a write does not imply
that the write ends up in the Acquire set and provides acquire
ordering.

To distinguish in those cases between the syntactic tags and the
actual sets, we capitalize the former: 'ACQUIRE tags may be present on
both reads and writes, but only reads will appear in the Acquire set.
For tags where the two concepts coincide, we do not use any specific
capitalization to make this distinction.

Reported-by: Boqun Feng <boqun.feng@gmail.com>
Signed-off-by: Jonas Oberhauser <jonas.oberhauser@huaweicloud.com>
Reviewed-by: Boqun Feng <boqun.feng@gmail.com>
---
 .../Documentation/herd-representation.txt |  44 ++--
 tools/memory-model/linux-kernel.bell      |  22 +-
 tools/memory-model/linux-kernel.def       | 198 +++++++++---------
 3 files changed, 132 insertions(+), 132 deletions(-)
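
A note for reviewers (below the fold, so not part of the commit
message): one way to see the semantic side of this distinction is a
litmus test in which an ACQUIRE-tagged read exists syntactically but
is excluded from the Acquire set because the RMW failed. A minimal
sketch, assuming the usual herd7 workflow; the test name and layout
are illustrative only:

C failed-cmpxchg-acquire

{}

P0(int *x, int *y)
{
	WRITE_ONCE(*x, 1);
	smp_store_release(y, 1);
}

P1(int *x, int *y)
{
	int r0;
	int r1;

	/* Always fails: *y is 0 or 1, never 2, so this read is an
	   'ACQUIRE-tagged member of FailedRMW. */
	r0 = cmpxchg_acquire(y, 2, 3);
	r1 = READ_ONCE(*x);
}

exists (1:r0=1 /\ 1:r1=0)

Run through something like "herd7 -conf linux-kernel.cfg
failed-cmpxchg-acquire.litmus", this final state should remain
allowed: the failed cmpxchg_acquire() reads y==1 yet provides no
acquire ordering, precisely because FailedRMW is subtracted from the
Acquire set in linux-kernel.bell below.
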
diff --git a/tools/memory-model/Documentation/herd-representation.txt b/tools/memory-model/Documentation/herd-representation.txt
index 7ae1ff3d3769..4e19b4f2a476 100644
--- a/tools/memory-model/Documentation/herd-representation.txt
+++ b/tools/memory-model/Documentation/herd-representation.txt
@@ -21,7 +21,7 @@
 # Note that the syntactic representation does not always match the sets and
 # relations in linux-kernel.cat, due to redefinitions in linux-kernel.bell and
 # lock.cat. For example, the po link between LKR and LKW is upgraded to an rmw
-# link, and W[acquire] are not included in the Acquire set.
+# link, and W[ACQUIRE] are not included in the Acquire set.
 #
 # Disclaimer. The table includes representations of "add" and "and" operations;
 # corresponding/identical representations of "sub", "inc", "dec" and "or", "xor",
@@ -32,16 +32,16 @@
 ---------------------------------------------------------------------------
 | Non-RMW ops                 |                                           |
 ---------------------------------------------------------------------------
-| READ_ONCE                   | R[once]                                   |
+| READ_ONCE                   | R[ONCE]                                   |
 | atomic_read                 |                                           |
-| WRITE_ONCE                  | W[once]                                   |
+| WRITE_ONCE                  | W[ONCE]                                   |
 | atomic_set                  |                                           |
-| smp_load_acquire            | R[acquire]                                |
+| smp_load_acquire            | R[ACQUIRE]                                |
 | atomic_read_acquire         |                                           |
-| smp_store_release           | W[release]                                |
+| smp_store_release           | W[RELEASE]                                |
 | atomic_set_release          |                                           |
-| smp_store_mb                | W[once] ->po F[mb]                        |
-| smp_mb                      | F[mb]                                     |
+| smp_store_mb                | W[ONCE] ->po F[MB]                        |
+| smp_mb                      | F[MB]                                     |
 | smp_rmb                     | F[rmb]                                    |
 | smp_wmb                     | F[wmb]                                    |
 | smp_mb__before_atomic       | F[before-atomic]                          |
@@ -54,8 +54,8 @@
 | rcu_read_lock               | F[rcu-lock]                               |
 | rcu_read_unlock             | F[rcu-unlock]                             |
 | synchronize_rcu             | F[sync-rcu]                               |
-| rcu_dereference             | R[once]                                   |
-| rcu_assign_pointer          | W[release]                                |
+| rcu_dereference             | R[ONCE]                                   |
+| rcu_assign_pointer          | W[RELEASE]                                |
 | srcu_read_lock              | R[srcu-lock]                              |
 | srcu_down_read              |                                           |
 | srcu_read_unlock            | W[srcu-unlock]                            |
@@ -65,31 +65,31 @@
 ---------------------------------------------------------------------------
 | RMW ops w/o return value    |                                           |
 ---------------------------------------------------------------------------
-| atomic_add                  | R*[noreturn] ->rmw W*[noreturn]           |
+| atomic_add                  | R*[NORETURN] ->rmw W*[NORETURN]           |
 | atomic_and                  |                                           |
 | spin_lock                   | LKR ->po LKW                              |
 ---------------------------------------------------------------------------
 | RMW ops w/ return value     |                                           |
 ---------------------------------------------------------------------------
-| atomic_add_return           | R*[mb] ->rmw W*[mb]                       |
+| atomic_add_return           | R*[MB] ->rmw W*[MB]                       |
 | atomic_fetch_add            |                                           |
 | atomic_fetch_and            |                                           |
 | atomic_xchg                 |                                           |
 | xchg                        |                                           |
 | atomic_add_negative         |                                           |
-| atomic_add_return_relaxed   | R*[once] ->rmw W*[once]                   |
+| atomic_add_return_relaxed   | R*[ONCE] ->rmw W*[ONCE]                   |
 | atomic_fetch_add_relaxed    |                                           |
 | atomic_fetch_and_relaxed    |                                           |
 | atomic_xchg_relaxed         |                                           |
 | xchg_relaxed                |                                           |
 | atomic_add_negative_relaxed |                                           |
-| atomic_add_return_acquire   | R*[acquire] ->rmw W*[acquire]             |
+| atomic_add_return_acquire   | R*[ACQUIRE] ->rmw W*[ACQUIRE]             |
 | atomic_fetch_add_acquire    |                                           |
 | atomic_fetch_and_acquire    |                                           |
 | atomic_xchg_acquire         |                                           |
 | xchg_acquire                |                                           |
 | atomic_add_negative_acquire |                                           |
-| atomic_add_return_release   | R*[release] ->rmw W*[release]             |
+| atomic_add_return_release   | R*[RELEASE] ->rmw W*[RELEASE]             |
 | atomic_fetch_add_release    |                                           |
 | atomic_fetch_and_release    |                                           |
 | atomic_xchg_release         |                                           |
@@ -98,16 +98,16 @@
 ---------------------------------------------------------------------------
 | Conditional RMW ops         |                                           |
 ---------------------------------------------------------------------------
-| atomic_cmpxchg              | On success: R*[mb] ->rmw W*[mb]           |
-|                             | On failure: R*[mb]                        |
+| atomic_cmpxchg              | On success: R*[MB] ->rmw W*[MB]           |
+|                             | On failure: R*[MB]                        |
 | cmpxchg                     |                                           |
 | atomic_add_unless           |                                           |
-| atomic_cmpxchg_relaxed      | On success: R*[once] ->rmw W*[once]       |
-|                             | On failure: R*[once]                      |
-| atomic_cmpxchg_acquire      | On success: R*[acquire] ->rmw W*[acquire] |
-|                             | On failure: R*[acquire]                   |
-| atomic_cmpxchg_release      | On success: R*[release] ->rmw W*[release] |
-|                             | On failure: R*[release]                   |
+| atomic_cmpxchg_relaxed      | On success: R*[ONCE] ->rmw W*[ONCE]       |
+|                             | On failure: R*[ONCE]                      |
+| atomic_cmpxchg_acquire      | On success: R*[ACQUIRE] ->rmw W*[ACQUIRE] |
+|                             | On failure: R*[ACQUIRE]                   |
+| atomic_cmpxchg_release      | On success: R*[RELEASE] ->rmw W*[RELEASE] |
+|                             | On failure: R*[RELEASE]                   |
 | spin_trylock                | On success: LKR ->po LKW                  |
 |                             | On failure: LF                            |
 ---------------------------------------------------------------------------
diff --git a/tools/memory-model/linux-kernel.bell b/tools/memory-model/linux-kernel.bell
index 8ae47545df97..fe65998002b9 100644
--- a/tools/memory-model/linux-kernel.bell
+++ b/tools/memory-model/linux-kernel.bell
@@ -13,18 +13,18 @@
 
 "Linux-kernel memory consistency model"
 
-enum Accesses = 'once (*READ_ONCE,WRITE_ONCE*) ||
-		'release (*smp_store_release*) ||
-		'acquire (*smp_load_acquire*) ||
-		'noreturn (* R of non-return RMW *) ||
-		'mb (*xchg(),cmpxchg(),...*)
+enum Accesses = 'ONCE (*READ_ONCE,WRITE_ONCE*) ||
+		'RELEASE (*smp_store_release*) ||
+		'ACQUIRE (*smp_load_acquire*) ||
+		'NORETURN (* R of non-return RMW *) ||
+		'MB (*xchg(),cmpxchg(),...*)
 instructions R[Accesses]
 instructions W[Accesses]
 instructions RMW[Accesses]
 
 enum Barriers = 'wmb (*smp_wmb*) ||
 		'rmb (*smp_rmb*) ||
-		'mb (*smp_mb*) ||
+		'MB (*smp_mb*) ||
 		'barrier (*barrier*) ||
 		'rcu-lock (*rcu_read_lock*) ||
 		'rcu-unlock (*rcu_read_unlock*) ||
@@ -42,10 +42,10 @@ instructions F[Barriers]
  * semantic ordering, such as Acquire on a store or Mb on a failed RMW.
  *)
 let FailedRMW = RMW \ (domain(rmw) | range(rmw))
-let Acquire = Acquire \ W \ FailedRMW
-let Release = Release \ R \ FailedRMW
-let Mb = Mb \ FailedRMW
-let Noreturn = Noreturn \ W
+let Acquire = ACQUIRE \ W \ FailedRMW
+let Release = RELEASE \ R \ FailedRMW
+let Mb = MB \ FailedRMW
+let Noreturn = NORETURN \ W
 
 (* SRCU *)
 enum SRCU = 'srcu-lock || 'srcu-unlock || 'sync-srcu
@@ -85,7 +85,7 @@ flag ~empty rcu-rscs & (po ; [Sync-srcu] ; po) as invalid-sleep
 flag ~empty different-values(srcu-rscs) as srcu-bad-value-match
 
 (* Compute marked and plain memory accesses *)
-let Marked = (~M) | IW | Once | Release | Acquire | domain(rmw) | range(rmw) |
+let Marked = (~M) | IW | ONCE | RELEASE | ACQUIRE | MB | RMW |
 	LKR | LKW | UL | LF | RL | RU | Srcu-lock | Srcu-unlock
 let Plain = M \ Marked
 
diff --git a/tools/memory-model/linux-kernel.def b/tools/memory-model/linux-kernel.def
index d7279a357cba..49e402782e49 100644
--- a/tools/memory-model/linux-kernel.def
+++ b/tools/memory-model/linux-kernel.def
@@ -6,18 +6,18 @@
 // which appeared in ASPLOS 2018.
 
 // ONCE
-READ_ONCE(X) __load{once}(X)
-WRITE_ONCE(X,V) { __store{once}(X,V); }
+READ_ONCE(X) __load{ONCE}(X)
+WRITE_ONCE(X,V) { __store{ONCE}(X,V); }
 
 // Release Acquire and friends
-smp_store_release(X,V) { __store{release}(*X,V); }
-smp_load_acquire(X) __load{acquire}(*X)
-rcu_assign_pointer(X,V) { __store{release}(X,V); }
-rcu_dereference(X) __load{once}(X)
-smp_store_mb(X,V) { __store{once}(X,V); __fence{mb}; }
+smp_store_release(X,V) { __store{RELEASE}(*X,V); }
+smp_load_acquire(X) __load{ACQUIRE}(*X)
+rcu_assign_pointer(X,V) { __store{RELEASE}(X,V); }
+rcu_dereference(X) __load{ONCE}(X)
+smp_store_mb(X,V) { __store{ONCE}(X,V); __fence{MB}; }
 
 // Fences
-smp_mb() { __fence{mb}; }
+smp_mb() { __fence{MB}; }
 smp_rmb() { __fence{rmb}; }
 smp_wmb() { __fence{wmb}; }
 smp_mb__before_atomic() { __fence{before-atomic}; }
@@ -28,14 +28,14 @@ smp_mb__after_srcu_read_unlock() { __fence{after-srcu-read-unlock}; }
 barrier() { __fence{barrier}; }
 
 // Exchange
-xchg(X,V) __xchg{mb}(X,V)
-xchg_relaxed(X,V) __xchg{once}(X,V)
-xchg_release(X,V) __xchg{release}(X,V)
-xchg_acquire(X,V) __xchg{acquire}(X,V)
-cmpxchg(X,V,W) __cmpxchg{mb}(X,V,W)
-cmpxchg_relaxed(X,V,W) __cmpxchg{once}(X,V,W)
-cmpxchg_acquire(X,V,W) __cmpxchg{acquire}(X,V,W)
-cmpxchg_release(X,V,W) __cmpxchg{release}(X,V,W)
+xchg(X,V) __xchg{MB}(X,V)
+xchg_relaxed(X,V) __xchg{ONCE}(X,V)
+xchg_release(X,V) __xchg{RELEASE}(X,V)
+xchg_acquire(X,V) __xchg{ACQUIRE}(X,V)
+cmpxchg(X,V,W) __cmpxchg{MB}(X,V,W)
+cmpxchg_relaxed(X,V,W) __cmpxchg{ONCE}(X,V,W)
+cmpxchg_acquire(X,V,W) __cmpxchg{ACQUIRE}(X,V,W)
+cmpxchg_release(X,V,W) __cmpxchg{RELEASE}(X,V,W)
 
 // Spinlocks
 spin_lock(X) { __lock(X); }
@@ -63,86 +63,86 @@ atomic_set(X,V) { WRITE_ONCE(*X,V); }
 atomic_read_acquire(X) smp_load_acquire(X)
 atomic_set_release(X,V) { smp_store_release(X,V); }
 
-atomic_add(V,X) { __atomic_op{noreturn}(X,+,V); }
-atomic_sub(V,X) { __atomic_op{noreturn}(X,-,V); }
-atomic_and(V,X) { __atomic_op{noreturn}(X,&,V); }
-atomic_or(V,X) { __atomic_op{noreturn}(X,|,V); }
-atomic_xor(V,X) { __atomic_op{noreturn}(X,^,V); }
-atomic_inc(X) { __atomic_op{noreturn}(X,+,1); }
-atomic_dec(X) { __atomic_op{noreturn}(X,-,1); }
-atomic_andnot(V,X) { __atomic_op{noreturn}(X,&~,V); }
-
-atomic_add_return(V,X) __atomic_op_return{mb}(X,+,V)
-atomic_add_return_relaxed(V,X) __atomic_op_return{once}(X,+,V)
-atomic_add_return_acquire(V,X) __atomic_op_return{acquire}(X,+,V)
-atomic_add_return_release(V,X) __atomic_op_return{release}(X,+,V)
-atomic_fetch_add(V,X) __atomic_fetch_op{mb}(X,+,V)
-atomic_fetch_add_relaxed(V,X) __atomic_fetch_op{once}(X,+,V)
-atomic_fetch_add_acquire(V,X) __atomic_fetch_op{acquire}(X,+,V)
-atomic_fetch_add_release(V,X) __atomic_fetch_op{release}(X,+,V)
-
-atomic_fetch_and(V,X) __atomic_fetch_op{mb}(X,&,V)
-atomic_fetch_and_relaxed(V,X) __atomic_fetch_op{once}(X,&,V)
-atomic_fetch_and_acquire(V,X) __atomic_fetch_op{acquire}(X,&,V)
-atomic_fetch_and_release(V,X) __atomic_fetch_op{release}(X,&,V)
-
-atomic_fetch_or(V,X) __atomic_fetch_op{mb}(X,|,V)
-atomic_fetch_or_relaxed(V,X) __atomic_fetch_op{once}(X,|,V)
-atomic_fetch_or_acquire(V,X) __atomic_fetch_op{acquire}(X,|,V)
-atomic_fetch_or_release(V,X) __atomic_fetch_op{release}(X,|,V)
-
-atomic_fetch_xor(V,X) __atomic_fetch_op{mb}(X,^,V)
-atomic_fetch_xor_relaxed(V,X) __atomic_fetch_op{once}(X,^,V)
-atomic_fetch_xor_acquire(V,X) __atomic_fetch_op{acquire}(X,^,V)
-atomic_fetch_xor_release(V,X) __atomic_fetch_op{release}(X,^,V)
-
-atomic_inc_return(X) __atomic_op_return{mb}(X,+,1)
-atomic_inc_return_relaxed(X) __atomic_op_return{once}(X,+,1)
-atomic_inc_return_acquire(X) __atomic_op_return{acquire}(X,+,1)
-atomic_inc_return_release(X) __atomic_op_return{release}(X,+,1)
-atomic_fetch_inc(X) __atomic_fetch_op{mb}(X,+,1)
-atomic_fetch_inc_relaxed(X) __atomic_fetch_op{once}(X,+,1)
-atomic_fetch_inc_acquire(X) __atomic_fetch_op{acquire}(X,+,1)
-atomic_fetch_inc_release(X) __atomic_fetch_op{release}(X,+,1)
-
-atomic_sub_return(V,X) __atomic_op_return{mb}(X,-,V)
-atomic_sub_return_relaxed(V,X) __atomic_op_return{once}(X,-,V)
-atomic_sub_return_acquire(V,X) __atomic_op_return{acquire}(X,-,V)
-atomic_sub_return_release(V,X) __atomic_op_return{release}(X,-,V)
-atomic_fetch_sub(V,X) __atomic_fetch_op{mb}(X,-,V)
-atomic_fetch_sub_relaxed(V,X) __atomic_fetch_op{once}(X,-,V)
-atomic_fetch_sub_acquire(V,X) __atomic_fetch_op{acquire}(X,-,V)
-atomic_fetch_sub_release(V,X) __atomic_fetch_op{release}(X,-,V)
-
-atomic_dec_return(X) __atomic_op_return{mb}(X,-,1)
-atomic_dec_return_relaxed(X) __atomic_op_return{once}(X,-,1)
-atomic_dec_return_acquire(X) __atomic_op_return{acquire}(X,-,1)
-atomic_dec_return_release(X) __atomic_op_return{release}(X,-,1)
-atomic_fetch_dec(X) __atomic_fetch_op{mb}(X,-,1)
-atomic_fetch_dec_relaxed(X) __atomic_fetch_op{once}(X,-,1)
-atomic_fetch_dec_acquire(X) __atomic_fetch_op{acquire}(X,-,1)
-atomic_fetch_dec_release(X) __atomic_fetch_op{release}(X,-,1)
-
-atomic_xchg(X,V) __xchg{mb}(X,V)
-atomic_xchg_relaxed(X,V) __xchg{once}(X,V)
-atomic_xchg_release(X,V) __xchg{release}(X,V)
-atomic_xchg_acquire(X,V) __xchg{acquire}(X,V)
-atomic_cmpxchg(X,V,W) __cmpxchg{mb}(X,V,W)
-atomic_cmpxchg_relaxed(X,V,W) __cmpxchg{once}(X,V,W)
-atomic_cmpxchg_acquire(X,V,W) __cmpxchg{acquire}(X,V,W)
-atomic_cmpxchg_release(X,V,W) __cmpxchg{release}(X,V,W)
-
-atomic_sub_and_test(V,X) __atomic_op_return{mb}(X,-,V) == 0
-atomic_dec_and_test(X) __atomic_op_return{mb}(X,-,1) == 0
-atomic_inc_and_test(X) __atomic_op_return{mb}(X,+,1) == 0
-atomic_add_negative(V,X) __atomic_op_return{mb}(X,+,V) < 0
-atomic_add_negative_relaxed(V,X) __atomic_op_return{once}(X,+,V) < 0
-atomic_add_negative_acquire(V,X) __atomic_op_return{acquire}(X,+,V) < 0
-atomic_add_negative_release(V,X) __atomic_op_return{release}(X,+,V) < 0
-
-atomic_fetch_andnot(V,X) __atomic_fetch_op{mb}(X,&~,V)
-atomic_fetch_andnot_acquire(V,X) __atomic_fetch_op{acquire}(X,&~,V)
-atomic_fetch_andnot_release(V,X) __atomic_fetch_op{release}(X,&~,V)
-atomic_fetch_andnot_relaxed(V,X) __atomic_fetch_op{once}(X,&~,V)
-
-atomic_add_unless(X,V,W) __atomic_add_unless{mb}(X,V,W)
+atomic_add(V,X) { __atomic_op{NORETURN}(X,+,V); }
+atomic_sub(V,X) { __atomic_op{NORETURN}(X,-,V); }
+atomic_and(V,X) { __atomic_op{NORETURN}(X,&,V); }
+atomic_or(V,X) { __atomic_op{NORETURN}(X,|,V); }
+atomic_xor(V,X) { __atomic_op{NORETURN}(X,^,V); }
+atomic_inc(X) { __atomic_op{NORETURN}(X,+,1); }
+atomic_dec(X) { __atomic_op{NORETURN}(X,-,1); }
+atomic_andnot(V,X) { __atomic_op{NORETURN}(X,&~,V); }
+
+atomic_add_return(V,X) __atomic_op_return{MB}(X,+,V)
+atomic_add_return_relaxed(V,X) __atomic_op_return{ONCE}(X,+,V)
+atomic_add_return_acquire(V,X) __atomic_op_return{ACQUIRE}(X,+,V)
+atomic_add_return_release(V,X) __atomic_op_return{RELEASE}(X,+,V)
+atomic_fetch_add(V,X) __atomic_fetch_op{MB}(X,+,V)
+atomic_fetch_add_relaxed(V,X) __atomic_fetch_op{ONCE}(X,+,V)
+atomic_fetch_add_acquire(V,X) __atomic_fetch_op{ACQUIRE}(X,+,V)
+atomic_fetch_add_release(V,X) __atomic_fetch_op{RELEASE}(X,+,V)
+
+atomic_fetch_and(V,X) __atomic_fetch_op{MB}(X,&,V)
+atomic_fetch_and_relaxed(V,X) __atomic_fetch_op{ONCE}(X,&,V)
+atomic_fetch_and_acquire(V,X) __atomic_fetch_op{ACQUIRE}(X,&,V)
+atomic_fetch_and_release(V,X) __atomic_fetch_op{RELEASE}(X,&,V)
+
+atomic_fetch_or(V,X) __atomic_fetch_op{MB}(X,|,V)
+atomic_fetch_or_relaxed(V,X) __atomic_fetch_op{ONCE}(X,|,V)
+atomic_fetch_or_acquire(V,X) __atomic_fetch_op{ACQUIRE}(X,|,V)
+atomic_fetch_or_release(V,X) __atomic_fetch_op{RELEASE}(X,|,V)
+
+atomic_fetch_xor(V,X) __atomic_fetch_op{MB}(X,^,V)
+atomic_fetch_xor_relaxed(V,X) __atomic_fetch_op{ONCE}(X,^,V)
+atomic_fetch_xor_acquire(V,X) __atomic_fetch_op{ACQUIRE}(X,^,V)
+atomic_fetch_xor_release(V,X) __atomic_fetch_op{RELEASE}(X,^,V)
+
+atomic_inc_return(X) __atomic_op_return{MB}(X,+,1)
+atomic_inc_return_relaxed(X) __atomic_op_return{ONCE}(X,+,1)
+atomic_inc_return_acquire(X) __atomic_op_return{ACQUIRE}(X,+,1)
+atomic_inc_return_release(X) __atomic_op_return{RELEASE}(X,+,1)
+atomic_fetch_inc(X) __atomic_fetch_op{MB}(X,+,1)
+atomic_fetch_inc_relaxed(X) __atomic_fetch_op{ONCE}(X,+,1)
+atomic_fetch_inc_acquire(X) __atomic_fetch_op{ACQUIRE}(X,+,1)
+atomic_fetch_inc_release(X) __atomic_fetch_op{RELEASE}(X,+,1)
+
+atomic_sub_return(V,X) __atomic_op_return{MB}(X,-,V)
+atomic_sub_return_relaxed(V,X) __atomic_op_return{ONCE}(X,-,V)
+atomic_sub_return_acquire(V,X) __atomic_op_return{ACQUIRE}(X,-,V)
+atomic_sub_return_release(V,X) __atomic_op_return{RELEASE}(X,-,V)
+atomic_fetch_sub(V,X) __atomic_fetch_op{MB}(X,-,V)
+atomic_fetch_sub_relaxed(V,X) __atomic_fetch_op{ONCE}(X,-,V)
+atomic_fetch_sub_acquire(V,X) __atomic_fetch_op{ACQUIRE}(X,-,V)
+atomic_fetch_sub_release(V,X) __atomic_fetch_op{RELEASE}(X,-,V)
+
+atomic_dec_return(X) __atomic_op_return{MB}(X,-,1)
+atomic_dec_return_relaxed(X) __atomic_op_return{ONCE}(X,-,1)
+atomic_dec_return_acquire(X) __atomic_op_return{ACQUIRE}(X,-,1)
+atomic_dec_return_release(X) __atomic_op_return{RELEASE}(X,-,1)
+atomic_fetch_dec(X) __atomic_fetch_op{MB}(X,-,1)
+atomic_fetch_dec_relaxed(X) __atomic_fetch_op{ONCE}(X,-,1)
+atomic_fetch_dec_acquire(X) __atomic_fetch_op{ACQUIRE}(X,-,1)
+atomic_fetch_dec_release(X) __atomic_fetch_op{RELEASE}(X,-,1)
+
+atomic_xchg(X,V) __xchg{MB}(X,V)
+atomic_xchg_relaxed(X,V) __xchg{ONCE}(X,V)
+atomic_xchg_release(X,V) __xchg{RELEASE}(X,V)
+atomic_xchg_acquire(X,V) __xchg{ACQUIRE}(X,V)
+atomic_cmpxchg(X,V,W) __cmpxchg{MB}(X,V,W)
+atomic_cmpxchg_relaxed(X,V,W) __cmpxchg{ONCE}(X,V,W)
+atomic_cmpxchg_acquire(X,V,W) __cmpxchg{ACQUIRE}(X,V,W)
+atomic_cmpxchg_release(X,V,W) __cmpxchg{RELEASE}(X,V,W)
+
+atomic_sub_and_test(V,X) __atomic_op_return{MB}(X,-,V) == 0
+atomic_dec_and_test(X) __atomic_op_return{MB}(X,-,1) == 0
+atomic_inc_and_test(X) __atomic_op_return{MB}(X,+,1) == 0
+atomic_add_negative(V,X) __atomic_op_return{MB}(X,+,V) < 0
+atomic_add_negative_relaxed(V,X) __atomic_op_return{ONCE}(X,+,V) < 0
+atomic_add_negative_acquire(V,X) __atomic_op_return{ACQUIRE}(X,+,V) < 0
+atomic_add_negative_release(V,X) __atomic_op_return{RELEASE}(X,+,V) < 0
+
+atomic_fetch_andnot(V,X) __atomic_fetch_op{MB}(X,&~,V)
+atomic_fetch_andnot_acquire(V,X) __atomic_fetch_op{ACQUIRE}(X,&~,V)
+atomic_fetch_andnot_release(V,X) __atomic_fetch_op{RELEASE}(X,&~,V)
+atomic_fetch_andnot_relaxed(V,X) __atomic_fetch_op{ONCE}(X,&~,V)
+
+atomic_add_unless(X,V,W) __atomic_add_unless{MB}(X,V,W)
-- 
2.34.1