From nobody Thu Nov 28 15:31:46 2024
From: Jonas Oberhauser
To: paulmck@kernel.org
Cc: stern@rowland.harvard.edu, parri.andrea@gmail.com, will@kernel.org,
	peterz@infradead.org, boqun.feng@gmail.com, npiggin@gmail.com,
	dhowells@redhat.com, j.alglave@ucl.ac.uk, luc.maranget@inria.fr,
	akiyks@gmail.com, dlustig@nvidia.com, joel@joelfernandes.org,
	urezki@gmail.com, quic_neeraju@quicinc.com, frederic@kernel.org,
	linux-kernel@vger.kernel.org, lkmm@lists.linux.dev,
	hernan.poncedeleon@huaweicloud.com, Jonas Oberhauser
Subject: [PATCH v4 1/5] tools/memory-model: Legitimize current use of tags in LKMM macros
Date: Mon, 30 Sep 2024 12:57:06 +0200
Message-Id: <20240930105710.383284-2-jonas.oberhauser@huaweicloud.com>
In-Reply-To: <20240930105710.383284-1-jonas.oberhauser@huaweicloud.com>
References: <20240930105710.383284-1-jonas.oberhauser@huaweicloud.com>

The current macros in linux-kernel.def reference instructions such as
__xchg{mb} or __cmpxchg{acquire}, which are invalid combinations of
tags and instructions according to the declarations in
linux-kernel.bell. This works with current herd7 only because herd7
removes these tags anyway and does not actually enforce the validity of
the combinations at all. If a future herd7 version no longer applies
these hardcoded transformations, then all currently invalid
combinations will actually appear on some instruction.

We therefore adjust the declarations to make the resulting combinations
valid, by adding the 'mb tag to the set of Accesses and by allowing all
Accesses to appear on all read, write, and RMW instructions.

Signed-off-by: Jonas Oberhauser
Reviewed-by: Boqun Feng
---
 tools/memory-model/linux-kernel.bell | 9 +++++----
 1 file changed, 5 insertions(+), 4 deletions(-)

diff --git a/tools/memory-model/linux-kernel.bell b/tools/memory-model/linux-kernel.bell
index ce068700939c..dba6b5b6dee0 100644
--- a/tools/memory-model/linux-kernel.bell
+++ b/tools/memory-model/linux-kernel.bell
@@ -16,10 +16,11 @@
 enum Accesses = 'once (*READ_ONCE,WRITE_ONCE*) ||
 		'release (*smp_store_release*) ||
 		'acquire (*smp_load_acquire*) ||
-		'noreturn (* R of non-return RMW *)
-instructions R[{'once,'acquire,'noreturn}]
-instructions W[{'once,'release}]
-instructions RMW[{'once,'acquire,'release}]
+		'noreturn (* R of non-return RMW *) ||
+		'mb (*xchg(),cmpxchg(),...*)
+instructions R[Accesses]
+instructions W[Accesses]
+instructions RMW[Accesses]
 
 enum Barriers = 'wmb (*smp_wmb*) ||
 		'rmb (*smp_rmb*) ||
-- 
2.34.1
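As an illustration of the 'mb tag that this patch legitimizes, a
minimal store-buffering litmus test along the following lines exercises
the __xchg{mb} macro from linux-kernel.def. The test name and exact
shape are illustrative, not taken from the patch, but under LKMM the
"exists" clause should be unreachable, because a fully-ordered xchg()
is as strong as smp_mb() on either side of it:

C SB+xchgmb+mb

(*
 * Illustrative only: a store-buffering test in which P0's write to x
 * is performed by a full-barrier xchg(), which expands to __xchg{mb}
 * in linux-kernel.def.  Expected result: Never.
 *)

{}

P0(int *x, int *y)
{
	int r0;
	int r1;

	r0 = xchg(x, 1);
	r1 = READ_ONCE(*y);
}

P1(int *x, int *y)
{
	int r1;

	WRITE_ONCE(*y, 1);
	smp_mb();
	r1 = READ_ONCE(*x);
}

exists (0:r1=0 /\ 1:r1=0)

Running such a test with "herd7 -conf linux-kernel.cfg" should report
Never both before and after this series.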
From nobody Thu Nov 28 15:31:46 2024
From: Jonas Oberhauser
To: paulmck@kernel.org
Cc: stern@rowland.harvard.edu, parri.andrea@gmail.com, will@kernel.org,
	peterz@infradead.org, boqun.feng@gmail.com, npiggin@gmail.com,
	dhowells@redhat.com, j.alglave@ucl.ac.uk, luc.maranget@inria.fr,
	akiyks@gmail.com, dlustig@nvidia.com, joel@joelfernandes.org,
	urezki@gmail.com, quic_neeraju@quicinc.com, frederic@kernel.org,
	linux-kernel@vger.kernel.org, lkmm@lists.linux.dev,
	hernan.poncedeleon@huaweicloud.com, Jonas Oberhauser
Subject: [PATCH v4 2/5] tools/memory-model: Define applicable tags on operation in tools/...
Date: Mon, 30 Sep 2024 12:57:07 +0200
Message-Id: <20240930105710.383284-3-jonas.oberhauser@huaweicloud.com>
In-Reply-To: <20240930105710.383284-1-jonas.oberhauser@huaweicloud.com>
References: <20240930105710.383284-1-jonas.oberhauser@huaweicloud.com>

Herd7 transforms reads, writes, and read-modify-writes by eliminating
'acquire tags from writes, 'release tags from reads, and 'acquire,
'release, and 'mb tags from failed read-modify-writes. We emulate this
behavior by redefining the Acquire, Release, and Mb sets in
linux-kernel.bell to explicitly exclude those combinations.

Herd7 furthermore adds the 'noreturn tag to certain reads. Currently
herd7 does not allow specifying the 'noreturn tag manually, but such a
manual declaration (e.g., through a syntax such as
__atomic_op{noreturn}) would add invalid 'noreturn tags to writes; in
preparation, we exclude this combination as well.

Signed-off-by: Jonas Oberhauser
Reviewed-by: Boqun Feng
---
 tools/memory-model/linux-kernel.bell | 11 +++++++++++
 1 file changed, 11 insertions(+)

diff --git a/tools/memory-model/linux-kernel.bell b/tools/memory-model/linux-kernel.bell
index dba6b5b6dee0..7c9ae48b9437 100644
--- a/tools/memory-model/linux-kernel.bell
+++ b/tools/memory-model/linux-kernel.bell
@@ -36,6 +36,17 @@ enum Barriers = 'wmb (*smp_wmb*) ||
 		'after-srcu-read-unlock (*smp_mb__after_srcu_read_unlock*)
 instructions F[Barriers]
 
+
+(*
+ * Filter out syntactic annotations that do not provide the corresponding
+ * semantic ordering, such as Acquire on a store or Mb on a failed RMW.
+ *)
+let FailedRMW = RMW \ (domain(rmw) | range(rmw))
+let Acquire = Acquire \ W \ FailedRMW
+let Release = Release \ R \ FailedRMW
+let Mb = Mb \ FailedRMW
+let Noreturn = Noreturn \ W
+
 (* SRCU *)
 enum SRCU = 'srcu-lock || 'srcu-unlock || 'sync-srcu
 instructions SRCU[SRCU]
-- 
2.34.1

From nobody Thu Nov 28 15:31:46 2024
From: Jonas Oberhauser
To: paulmck@kernel.org
Cc: stern@rowland.harvard.edu, parri.andrea@gmail.com, will@kernel.org,
	peterz@infradead.org, boqun.feng@gmail.com, npiggin@gmail.com,
	dhowells@redhat.com, j.alglave@ucl.ac.uk, luc.maranget@inria.fr,
	akiyks@gmail.com, dlustig@nvidia.com, joel@joelfernandes.org,
	urezki@gmail.com, quic_neeraju@quicinc.com, frederic@kernel.org,
	linux-kernel@vger.kernel.org, lkmm@lists.linux.dev,
	hernan.poncedeleon@huaweicloud.com, Jonas Oberhauser,
	Viktor Vafeiadis
Subject: [PATCH v4 3/5] tools/memory-model: Define effect of Mb tags on RMWs in tools/...
Date: Mon, 30 Sep 2024 12:57:08 +0200
Message-Id: <20240930105710.383284-4-jonas.oberhauser@huaweicloud.com>
In-Reply-To: <20240930105710.383284-1-jonas.oberhauser@huaweicloud.com>
References: <20240930105710.383284-1-jonas.oberhauser@huaweicloud.com>

Herd7 transforms successful RMWs with Mb tags by inserting smp_mb()
fences around them. We emulate this by extending the smp_mb() ordering
rule, which currently applies only to the real po edges that would
surround an actually inserted smp_mb(), to the po edges into the
Mb-tagged read and out of the Mb-tagged write of such an RMW, as though
the fences had really been inserted.

Reported-by: Viktor Vafeiadis
Suggested-by: Alan Stern
Signed-off-by: Jonas Oberhauser
Reviewed-by: Boqun Feng
---
 tools/memory-model/linux-kernel.cat | 10 ++++++++++
 1 file changed, 10 insertions(+)

diff --git a/tools/memory-model/linux-kernel.cat b/tools/memory-model/linux-kernel.cat
index adf3c4f41229..d7e7bf13c831 100644
--- a/tools/memory-model/linux-kernel.cat
+++ b/tools/memory-model/linux-kernel.cat
@@ -34,6 +34,16 @@ let R4rmb = R \ Noreturn (* Reads for which rmb works *)
 let rmb = [R4rmb] ; fencerel(Rmb) ; [R4rmb]
 let wmb = [W] ; fencerel(Wmb) ; [W]
 let mb = ([M] ; fencerel(Mb) ; [M]) |
+	(*
+	 * full-barrier RMWs (successful cmpxchg(), xchg(), etc.) act as
+	 * though they were enclosed by smp_mb().
+	 * The effect of these virtual smp_mb() is formalized by adding
+	 * Mb tags to the read and write of the operation, and providing
+	 * the same ordering as though there were additional po edges
+	 * between the Mb tag and the read resp. write.
+	 *)
+	([M] ; po ; [Mb & R]) |
+	([Mb & W] ; po ; [M]) |
 	([M] ; fencerel(Before-atomic) ; [RMW] ; po? ; [M]) |
 	([M] ; po? ; [RMW] ; fencerel(After-atomic) ; [M]) |
 	([M] ; po? ; [LKW] ; fencerel(After-spinlock) ; [M]) |
-- 
2.34.1
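To see the rule added above at work, consider a sketch of a litmus test
(again illustrative, not part of the patch) in which the only full
barrier on P0 is a successful cmpxchg() placed between two other
accesses. With the po edges described above, the WRITE_ONCE() before
the RMW is ordered against the READ_ONCE() after it exactly as if
smp_mb() had been inserted around the cmpxchg(), so the "exists" clause
should again be unreachable:

C SB+successful-cmpxchg+mb

(*
 * Illustrative only.  The cmpxchg() always succeeds (z starts at 0 and
 * nobody else writes it), so its read and write carry the Mb tag and
 * order the WRITE_ONCE() before it against the READ_ONCE() after it.
 * Expected result: Never.
 *)

{}

P0(int *x, int *y, int *z)
{
	int r0;
	int r1;

	WRITE_ONCE(*x, 1);
	r0 = cmpxchg(z, 0, 1);
	r1 = READ_ONCE(*y);
}

P1(int *x, int *y)
{
	int r1;

	WRITE_ONCE(*y, 1);
	smp_mb();
	r1 = READ_ONCE(*x);
}

exists (0:r1=0 /\ 1:r1=0)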
From nobody Thu Nov 28 15:31:46 2024
From: Jonas Oberhauser
To: paulmck@kernel.org
Cc: stern@rowland.harvard.edu, parri.andrea@gmail.com, will@kernel.org,
	peterz@infradead.org, boqun.feng@gmail.com, npiggin@gmail.com,
	dhowells@redhat.com, j.alglave@ucl.ac.uk, luc.maranget@inria.fr,
	akiyks@gmail.com, dlustig@nvidia.com, joel@joelfernandes.org,
	urezki@gmail.com, quic_neeraju@quicinc.com, frederic@kernel.org,
	linux-kernel@vger.kernel.org, lkmm@lists.linux.dev,
	hernan.poncedeleon@huaweicloud.com, Jonas Oberhauser
Subject: [PATCH v4 4/5] tools/memory-model: Switch to softcoded herd7 tags
Date: Mon, 30 Sep 2024 12:57:09 +0200
Message-Id: <20240930105710.383284-5-jonas.oberhauser@huaweicloud.com>
In-Reply-To: <20240930105710.383284-1-jonas.oberhauser@huaweicloud.com>
References: <20240930105710.383284-1-jonas.oberhauser@huaweicloud.com>

A new version of herd7 provides a -lkmmv2 switch which overrides the
old herd7 behavior of simply ignoring any softcoded tags in the .def
and .bell files. We port LKMM to this version of herd7 by providing the
switch in linux-kernel.cfg and reporting an error if the LKMM is used
without this switch.

To preserve the semantics of LKMM, we also softcode the Noreturn tag on
atomic RMWs that do not return a value and define atomic_add_unless
with an Mb tag in linux-kernel.def.

We update the herd-representation.txt accordingly and clarify some of
the resulting combinations.

(To be) Signed-off-by: Hernan Ponce de Leon
Signed-off-by: Jonas Oberhauser
Reviewed-by: Boqun Feng
---
 .../Documentation/herd-representation.txt | 27 ++++++++++---------
 tools/memory-model/README                 |  2 +-
 tools/memory-model/linux-kernel.bell      |  3 +++
 tools/memory-model/linux-kernel.cfg       |  1 +
 tools/memory-model/linux-kernel.def       | 18 +++++++------
 5 files changed, 30 insertions(+), 21 deletions(-)

diff --git a/tools/memory-model/Documentation/herd-representation.txt b/tools/memory-model/Documentation/herd-representation.txt
index ed988906f2b7..7ae1ff3d3769 100644
--- a/tools/memory-model/Documentation/herd-representation.txt
+++ b/tools/memory-model/Documentation/herd-representation.txt
@@ -18,6 +18,11 @@
 #
 # By convention, a blank line in a cell means "same as the preceding line".
 #
+# Note that the syntactic representation does not always match the sets and
+# relations in linux-kernel.cat, due to redefinitions in linux-kernel.bell and
+# lock.cat. For example, the po link between LKR and LKW is upgraded to an rmw
+# link, and W[acquire] are not included in the Acquire set.
+#
 # Disclaimer. The table includes representations of "add" and "and" operations;
 # corresponding/identical representations of "sub", "inc", "dec" and "or", "xor",
 # "andnot" operations are omitted.
@@ -60,14 +65,13 @@ ----------------------------------------------------------------------= -------- | RMW ops w/o return value | = | ----------------------------------------------------------------------= -------- - | atomic_add | R*[noreturn] ->rmw W*[once] = | + | atomic_add | R*[noreturn] ->rmw W*[noreturn] = | | atomic_and | = | | spin_lock | LKR ->po LKW = | ----------------------------------------------------------------------= -------- | RMW ops w/ return value | = | ----------------------------------------------------------------------= -------- - | atomic_add_return | F[mb] ->po R*[once] = | - | | ->rmw W*[once] ->po F[mb] = | + | atomic_add_return | R*[mb] ->rmw W*[mb] = | | atomic_fetch_add | = | | atomic_fetch_and | = | | atomic_xchg | = | @@ -79,13 +83,13 @@ | atomic_xchg_relaxed | = | | xchg_relaxed | = | | atomic_add_negative_relaxed | = | - | atomic_add_return_acquire | R*[acquire] ->rmw W*[once] = | + | atomic_add_return_acquire | R*[acquire] ->rmw W*[acquire] = | | atomic_fetch_add_acquire | = | | atomic_fetch_and_acquire | = | | atomic_xchg_acquire | = | | xchg_acquire | = | | atomic_add_negative_acquire | = | - | atomic_add_return_release | R*[once] ->rmw W*[release] = | + | atomic_add_return_release | R*[release] ->rmw W*[release] = | | atomic_fetch_add_release | = | | atomic_fetch_and_release | = | | atomic_xchg_release | = | @@ -94,17 +98,16 @@ ----------------------------------------------------------------------= -------- | Conditional RMW ops | = | ----------------------------------------------------------------------= -------- - | atomic_cmpxchg | On success: F[mb] ->po R*[once] = | - | | ->rmw W*[once] ->po= F[mb] | - | | On failure: R*[once] = | + | atomic_cmpxchg | On success: R*[mb] ->rmw W*[mb] = | + | | On failure: R*[mb] = | | cmpxchg | = | | atomic_add_unless | = | | atomic_cmpxchg_relaxed | On success: R*[once] ->rmw W*[once]= | | | On failure: R*[once] = | - | atomic_cmpxchg_acquire | On success: R*[acquire] ->rmw W*[on= ce] | - | | On failure: R*[once] = | - | atomic_cmpxchg_release | On success: R*[once] ->rmw W*[relea= se] | - | | On failure: R*[once] = | + | atomic_cmpxchg_acquire | On success: R*[acquire] ->rmw W*[ac= quire] | + | | On failure: R*[acquire] = | + | atomic_cmpxchg_release | On success: R*[release] ->rmw W*[re= lease] | + | | On failure: R*[release] = | | spin_trylock | On success: LKR ->po LKW = | | | On failure: LF = | ----------------------------------------------------------------------= -------- diff --git a/tools/memory-model/README b/tools/memory-model/README index dab38904206a..59bc15edeb8a 100644 --- a/tools/memory-model/README +++ b/tools/memory-model/README @@ -20,7 +20,7 @@ that litmus test to be exercised within the Linux kernel. 
REQUIREMENTS =3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D =20 -Version 7.52 or higher of the "herd7" and "klitmus7" tools must be +Version 7.58 or higher of the "herd7" and "klitmus7" tools must be downloaded separately: =20 https://github.com/herd/herdtools7 diff --git a/tools/memory-model/linux-kernel.bell b/tools/memory-model/linu= x-kernel.bell index 7c9ae48b9437..8ae47545df97 100644 --- a/tools/memory-model/linux-kernel.bell +++ b/tools/memory-model/linux-kernel.bell @@ -94,3 +94,6 @@ let carry-dep =3D (data ; [~ Srcu-unlock] ; rfi)* let addr =3D carry-dep ; addr let ctrl =3D carry-dep ; ctrl let data =3D carry-dep ; data + +flag ~empty (if "lkmmv2" then 0 else _) + as this-model-requires-variant-higher-than-lkmmv1 diff --git a/tools/memory-model/linux-kernel.cfg b/tools/memory-model/linux= -kernel.cfg index 3c8098e99f41..69b04f3aad73 100644 --- a/tools/memory-model/linux-kernel.cfg +++ b/tools/memory-model/linux-kernel.cfg @@ -1,6 +1,7 @@ macros linux-kernel.def bell linux-kernel.bell model linux-kernel.cat +variant lkmmv2 graph columns squished true showevents noregs diff --git a/tools/memory-model/linux-kernel.def b/tools/memory-model/linux= -kernel.def index a12b96c547b7..d7279a357cba 100644 --- a/tools/memory-model/linux-kernel.def +++ b/tools/memory-model/linux-kernel.def @@ -63,14 +63,14 @@ atomic_set(X,V) { WRITE_ONCE(*X,V); } atomic_read_acquire(X) smp_load_acquire(X) atomic_set_release(X,V) { smp_store_release(X,V); } =20 -atomic_add(V,X) { __atomic_op(X,+,V); } -atomic_sub(V,X) { __atomic_op(X,-,V); } -atomic_and(V,X) { __atomic_op(X,&,V); } -atomic_or(V,X) { __atomic_op(X,|,V); } -atomic_xor(V,X) { __atomic_op(X,^,V); } -atomic_inc(X) { __atomic_op(X,+,1); } -atomic_dec(X) { __atomic_op(X,-,1); } -atomic_andnot(V,X) { __atomic_op(X,&~,V); } +atomic_add(V,X) { __atomic_op{noreturn}(X,+,V); } +atomic_sub(V,X) { __atomic_op{noreturn}(X,-,V); } +atomic_and(V,X) { __atomic_op{noreturn}(X,&,V); } +atomic_or(V,X) { __atomic_op{noreturn}(X,|,V); } +atomic_xor(V,X) { __atomic_op{noreturn}(X,^,V); } +atomic_inc(X) { __atomic_op{noreturn}(X,+,1); } +atomic_dec(X) { __atomic_op{noreturn}(X,-,1); } +atomic_andnot(V,X) { __atomic_op{noreturn}(X,&~,V); } =20 atomic_add_return(V,X) __atomic_op_return{mb}(X,+,V) atomic_add_return_relaxed(V,X) __atomic_op_return{once}(X,+,V) @@ -144,3 +144,5 @@ atomic_fetch_andnot(V,X) __atomic_fetch_op{mb}(X,&~,V) atomic_fetch_andnot_acquire(V,X) __atomic_fetch_op{acquire}(X,&~,V) atomic_fetch_andnot_release(V,X) __atomic_fetch_op{release}(X,&~,V) atomic_fetch_andnot_relaxed(V,X) __atomic_fetch_op{once}(X,&~,V) + +atomic_add_unless(X,V,W) __atomic_add_unless{mb}(X,V,W) --=20 2.34.1 From nobody Thu Nov 28 15:31:46 2024 Received: from frasgout11.his.huawei.com (frasgout11.his.huawei.com [14.137.139.23]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id 5E9F318DF6B for ; Mon, 30 Sep 2024 10:58:23 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; arc=none smtp.client-ip=14.137.139.23 ARC-Seal: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1727693905; cv=none; b=X83M8rrUZsrjUhLtjf+CVwn8SyxFcrtxSUvB9t8/qywwxyVnawPpGz/O7+lYxWbow95N0Jhhw76zt3+qV5OSr2//g5ykdFAEbU8HxIecs6oIEuiNli/p85QI//FU8t7lE4s39JyQoZhRHwlb42WKS2eR6dzjiCajDwZAoUwAQOQ= ARC-Message-Signature: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1727693905; c=relaxed/simple; bh=ywaq1cvwH9YuVV8/XgZd3Gg+CIoEMFRjuxUza3xxwbs=; 
h=From:To:Cc:Subject:Date:Message-Id:In-Reply-To:References: MIME-Version; b=jmwkensRIBvJsIRjj/9UAZx9quG2UyshtZ8zb/pWPdKG/adh3k6sWzuR6QqsVsvp15rI0mT70DCzeI9eWGf3D8PwBeCJr9rcdW1deGDXeZAxaRdOrcJR3E8nAADlFpIP+ebTEu+TFG+sa4d4HmAcycCyqBjx4Z1ZgdQem2kdfi4= ARC-Authentication-Results: i=1; smtp.subspace.kernel.org; dmarc=none (p=none dis=none) header.from=huaweicloud.com; spf=pass smtp.mailfrom=huaweicloud.com; arc=none smtp.client-ip=14.137.139.23 Authentication-Results: smtp.subspace.kernel.org; dmarc=none (p=none dis=none) header.from=huaweicloud.com Authentication-Results: smtp.subspace.kernel.org; spf=pass smtp.mailfrom=huaweicloud.com Received: from mail.maildlp.com (unknown [172.18.186.51]) by frasgout11.his.huawei.com (SkyGuard) with ESMTP id 4XHHbm3ZsGz9v7Hq for ; Mon, 30 Sep 2024 18:38:28 +0800 (CST) Received: from mail02.huawei.com (unknown [7.182.16.47]) by mail.maildlp.com (Postfix) with ESMTP id CDC50140A26 for ; Mon, 30 Sep 2024 18:58:20 +0800 (CST) Received: from huaweicloud.com (unknown [10.206.133.88]) by APP1 (Coremail) with SMTP id LxC2BwAHmC8OhPpmdITtAQ--.60189S7; Mon, 30 Sep 2024 11:58:20 +0100 (CET) From: Jonas Oberhauser To: paulmck@kernel.org Cc: stern@rowland.harvard.edu, parri.andrea@gmail.com, will@kernel.org, peterz@infradead.org, boqun.feng@gmail.com, npiggin@gmail.com, dhowells@redhat.com, j.alglave@ucl.ac.uk, luc.maranget@inria.fr, akiyks@gmail.com, dlustig@nvidia.com, joel@joelfernandes.org, urezki@gmail.com, quic_neeraju@quicinc.com, frederic@kernel.org, linux-kernel@vger.kernel.org, lkmm@lists.linux.dev, hernan.poncedeleon@huaweicloud.com, Jonas Oberhauser Subject: [PATCH v4 5/5] tools/memory-model: Distinguish between syntactic and semantic tags Date: Mon, 30 Sep 2024 12:57:10 +0200 Message-Id: <20240930105710.383284-6-jonas.oberhauser@huaweicloud.com> X-Mailer: git-send-email 2.34.1 In-Reply-To: <20240930105710.383284-1-jonas.oberhauser@huaweicloud.com> References: <20240930105710.383284-1-jonas.oberhauser@huaweicloud.com> Precedence: bulk X-Mailing-List: linux-kernel@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 Content-Transfer-Encoding: quoted-printable X-CM-TRANSID: LxC2BwAHmC8OhPpmdITtAQ--.60189S7 X-Coremail-Antispam: 1UD129KBjvAXoWfGr1kJw18Aw4ftrWDAF13Jwb_yoW8XrWxZo WrGryft3W8XryDWan8Kw1xJrWDWw4Iq3Z0gry8Gw1jvFy7ZayrXrnrG3WUt3yxJFy5Awn8 WrZ7Z3sxXay7Ar1kn29KB7ZKAUJUUUU8529EdanIXcx71UUUUU7v73VFW2AGmfu7bjvjm3 AaLaJ3UjIYCTnIWjp_UUUOa7AC8VAFwI0_Wr0E3s1l1xkIjI8I6I8E6xAIw20EY4v20xva j40_Wr0E3s1l1IIY67AEw4v_Jr0_Jr4l82xGYIkIc2x26280x7IE14v26r126s0DM28Irc Ia0xkI8VCY1x0267AKxVW5JVCq3wA2ocxC64kIII0Yj41l84x0c7CEw4AK67xGY2AK021l 84ACjcxK6xIIjxv20xvE14v26r1I6r4UM28EF7xvwVC0I7IYx2IY6xkF7I0E14v26r4UJV WxJr1l84ACjcxK6I8E87Iv67AKxVW8JVWxJwA2z4x0Y4vEx4A2jsIEc7CjxVAFwI0_Gr1j 6F4UJwAS0I0E0xvYzxvE52x082IY62kv0487Mc02F40EFcxC0VAKzVAqx4xG6I80ewAv7V C0I7IYx2IY67AKxVWUJVWUGwAv7VC2z280aVAFwI0_Jr0_Gr1lOx8S6xCaFVCjc4AY6r1j 6r4UM4x0Y48IcxkI7VAKI48JM4x0x7Aq67IIx4CEVc8vx2IErcIFxwACI402YVCY1x0262 8vn2kIc2xKxwCY1x0262kKe7AKxVW8ZVWrXwCF04k20xvY0x0EwIxGrwCFx2IqxVCFs4IE 7xkEbVWUJVW8JwC20s026c02F40E14v26r1j6r18MI8I3I0E7480Y4vE14v26r106r1rMI 8E67AF67kF1VAFwI0_GFv_WrylIxkGc2Ij64vIr41lIxAIcVC0I7IYx2IY67AKxVWUCVW8 JwCI42IY6xIIjxv20xvEc7CjxVAFwI0_Gr1j6F4UJwCI42IY6xAIw20EY4v20xvaj40_Jr 0_JF4lIxAIcVC2z280aVAFwI0_Jr0_Gr1lIxAIcVC2z280aVCY1x0267AKxVW8Jr0_Cr1U YxBIdaVFxhVjvjDU0xZFpf9x0pRQJ5wUUUUU= X-CM-SenderInfo: 5mrqt2oorev25kdx2v3u6k3tpzhluzxrxghudrp/ Content-Type: text/plain; charset="utf-8" Not all tags that are always there syntactically also provide semantic 
membership in the corresponding set. For example, an 'acquire tag on a write does not imply that the write is finally in the Acquire set and provides acquire ordering. To distinguish in those cases between the syntactic tags and actual sets, we capitalize the former, so 'ACQUIRE tags may be present on both reads and writes, but only reads will appear in the Acquire set. For tags where the two concepts are the same we do not use specific capitalization to make this distinction. Reported-by: Boqun Feng Signed-off-by: Jonas Oberhauser Reviewed-by: Boqun Feng Tested-by: Boqun Feng reviewed-by of the previous patch? --- tools/memory-model/linux-kernel.bell | 22 +-- tools/memory-model/linux-kernel.def | 198 +++++++++++++-------------- 2 files changed, 110 insertions(+), 110 deletions(-) diff --git a/tools/memory-model/linux-kernel.bell b/tools/memory-model/linu= x-kernel.bell index 8ae47545df97..fe65998002b9 100644 --- a/tools/memory-model/linux-kernel.bell +++ b/tools/memory-model/linux-kernel.bell @@ -13,18 +13,18 @@ =20 "Linux-kernel memory consistency model" =20 -enum Accesses =3D 'once (*READ_ONCE,WRITE_ONCE*) || - 'release (*smp_store_release*) || - 'acquire (*smp_load_acquire*) || - 'noreturn (* R of non-return RMW *) || - 'mb (*xchg(),cmpxchg(),...*) +enum Accesses =3D 'ONCE (*READ_ONCE,WRITE_ONCE*) || + 'RELEASE (*smp_store_release*) || + 'ACQUIRE (*smp_load_acquire*) || + 'NORETURN (* R of non-return RMW *) || + 'MB (*xchg(),cmpxchg(),...*) instructions R[Accesses] instructions W[Accesses] instructions RMW[Accesses] =20 enum Barriers =3D 'wmb (*smp_wmb*) || 'rmb (*smp_rmb*) || - 'mb (*smp_mb*) || + 'MB (*smp_mb*) || 'barrier (*barrier*) || 'rcu-lock (*rcu_read_lock*) || 'rcu-unlock (*rcu_read_unlock*) || @@ -42,10 +42,10 @@ instructions F[Barriers] * semantic ordering, such as Acquire on a store or Mb on a failed RMW. *) let FailedRMW =3D RMW \ (domain(rmw) | range(rmw)) -let Acquire =3D Acquire \ W \ FailedRMW -let Release =3D Release \ R \ FailedRMW -let Mb =3D Mb \ FailedRMW -let Noreturn =3D Noreturn \ W +let Acquire =3D ACQUIRE \ W \ FailedRMW +let Release =3D RELEASE \ R \ FailedRMW +let Mb =3D MB \ FailedRMW +let Noreturn =3D NORETURN \ W =20 (* SRCU *) enum SRCU =3D 'srcu-lock || 'srcu-unlock || 'sync-srcu @@ -85,7 +85,7 @@ flag ~empty rcu-rscs & (po ; [Sync-srcu] ; po) as invalid= -sleep flag ~empty different-values(srcu-rscs) as srcu-bad-value-match =20 (* Compute marked and plain memory accesses *) -let Marked =3D (~M) | IW | Once | Release | Acquire | domain(rmw) | range(= rmw) | +let Marked =3D (~M) | IW | ONCE | RELEASE | ACQUIRE | MB | RMW | LKR | LKW | UL | LF | RL | RU | Srcu-lock | Srcu-unlock let Plain =3D M \ Marked =20 diff --git a/tools/memory-model/linux-kernel.def b/tools/memory-model/linux= -kernel.def index d7279a357cba..49e402782e49 100644 --- a/tools/memory-model/linux-kernel.def +++ b/tools/memory-model/linux-kernel.def @@ -6,18 +6,18 @@ // which appeared in ASPLOS 2018. 
=20 // ONCE -READ_ONCE(X) __load{once}(X) -WRITE_ONCE(X,V) { __store{once}(X,V); } +READ_ONCE(X) __load{ONCE}(X) +WRITE_ONCE(X,V) { __store{ONCE}(X,V); } =20 // Release Acquire and friends -smp_store_release(X,V) { __store{release}(*X,V); } -smp_load_acquire(X) __load{acquire}(*X) -rcu_assign_pointer(X,V) { __store{release}(X,V); } -rcu_dereference(X) __load{once}(X) -smp_store_mb(X,V) { __store{once}(X,V); __fence{mb}; } +smp_store_release(X,V) { __store{RELEASE}(*X,V); } +smp_load_acquire(X) __load{ACQUIRE}(*X) +rcu_assign_pointer(X,V) { __store{RELEASE}(X,V); } +rcu_dereference(X) __load{ONCE}(X) +smp_store_mb(X,V) { __store{ONCE}(X,V); __fence{MB}; } =20 // Fences -smp_mb() { __fence{mb}; } +smp_mb() { __fence{MB}; } smp_rmb() { __fence{rmb}; } smp_wmb() { __fence{wmb}; } smp_mb__before_atomic() { __fence{before-atomic}; } @@ -28,14 +28,14 @@ smp_mb__after_srcu_read_unlock() { __fence{after-srcu-r= ead-unlock}; } barrier() { __fence{barrier}; } =20 // Exchange -xchg(X,V) __xchg{mb}(X,V) -xchg_relaxed(X,V) __xchg{once}(X,V) -xchg_release(X,V) __xchg{release}(X,V) -xchg_acquire(X,V) __xchg{acquire}(X,V) -cmpxchg(X,V,W) __cmpxchg{mb}(X,V,W) -cmpxchg_relaxed(X,V,W) __cmpxchg{once}(X,V,W) -cmpxchg_acquire(X,V,W) __cmpxchg{acquire}(X,V,W) -cmpxchg_release(X,V,W) __cmpxchg{release}(X,V,W) +xchg(X,V) __xchg{MB}(X,V) +xchg_relaxed(X,V) __xchg{ONCE}(X,V) +xchg_release(X,V) __xchg{RELEASE}(X,V) +xchg_acquire(X,V) __xchg{ACQUIRE}(X,V) +cmpxchg(X,V,W) __cmpxchg{MB}(X,V,W) +cmpxchg_relaxed(X,V,W) __cmpxchg{ONCE}(X,V,W) +cmpxchg_acquire(X,V,W) __cmpxchg{ACQUIRE}(X,V,W) +cmpxchg_release(X,V,W) __cmpxchg{RELEASE}(X,V,W) =20 // Spinlocks spin_lock(X) { __lock(X); } @@ -63,86 +63,86 @@ atomic_set(X,V) { WRITE_ONCE(*X,V); } atomic_read_acquire(X) smp_load_acquire(X) atomic_set_release(X,V) { smp_store_release(X,V); } =20 -atomic_add(V,X) { __atomic_op{noreturn}(X,+,V); } -atomic_sub(V,X) { __atomic_op{noreturn}(X,-,V); } -atomic_and(V,X) { __atomic_op{noreturn}(X,&,V); } -atomic_or(V,X) { __atomic_op{noreturn}(X,|,V); } -atomic_xor(V,X) { __atomic_op{noreturn}(X,^,V); } -atomic_inc(X) { __atomic_op{noreturn}(X,+,1); } -atomic_dec(X) { __atomic_op{noreturn}(X,-,1); } -atomic_andnot(V,X) { __atomic_op{noreturn}(X,&~,V); } - -atomic_add_return(V,X) __atomic_op_return{mb}(X,+,V) -atomic_add_return_relaxed(V,X) __atomic_op_return{once}(X,+,V) -atomic_add_return_acquire(V,X) __atomic_op_return{acquire}(X,+,V) -atomic_add_return_release(V,X) __atomic_op_return{release}(X,+,V) -atomic_fetch_add(V,X) __atomic_fetch_op{mb}(X,+,V) -atomic_fetch_add_relaxed(V,X) __atomic_fetch_op{once}(X,+,V) -atomic_fetch_add_acquire(V,X) __atomic_fetch_op{acquire}(X,+,V) -atomic_fetch_add_release(V,X) __atomic_fetch_op{release}(X,+,V) - -atomic_fetch_and(V,X) __atomic_fetch_op{mb}(X,&,V) -atomic_fetch_and_relaxed(V,X) __atomic_fetch_op{once}(X,&,V) -atomic_fetch_and_acquire(V,X) __atomic_fetch_op{acquire}(X,&,V) -atomic_fetch_and_release(V,X) __atomic_fetch_op{release}(X,&,V) - -atomic_fetch_or(V,X) __atomic_fetch_op{mb}(X,|,V) -atomic_fetch_or_relaxed(V,X) __atomic_fetch_op{once}(X,|,V) -atomic_fetch_or_acquire(V,X) __atomic_fetch_op{acquire}(X,|,V) -atomic_fetch_or_release(V,X) __atomic_fetch_op{release}(X,|,V) - -atomic_fetch_xor(V,X) __atomic_fetch_op{mb}(X,^,V) -atomic_fetch_xor_relaxed(V,X) __atomic_fetch_op{once}(X,^,V) -atomic_fetch_xor_acquire(V,X) __atomic_fetch_op{acquire}(X,^,V) -atomic_fetch_xor_release(V,X) __atomic_fetch_op{release}(X,^,V) - -atomic_inc_return(X) __atomic_op_return{mb}(X,+,1) 
-atomic_inc_return_relaxed(X) __atomic_op_return{once}(X,+,1) -atomic_inc_return_acquire(X) __atomic_op_return{acquire}(X,+,1) -atomic_inc_return_release(X) __atomic_op_return{release}(X,+,1) -atomic_fetch_inc(X) __atomic_fetch_op{mb}(X,+,1) -atomic_fetch_inc_relaxed(X) __atomic_fetch_op{once}(X,+,1) -atomic_fetch_inc_acquire(X) __atomic_fetch_op{acquire}(X,+,1) -atomic_fetch_inc_release(X) __atomic_fetch_op{release}(X,+,1) - -atomic_sub_return(V,X) __atomic_op_return{mb}(X,-,V) -atomic_sub_return_relaxed(V,X) __atomic_op_return{once}(X,-,V) -atomic_sub_return_acquire(V,X) __atomic_op_return{acquire}(X,-,V) -atomic_sub_return_release(V,X) __atomic_op_return{release}(X,-,V) -atomic_fetch_sub(V,X) __atomic_fetch_op{mb}(X,-,V) -atomic_fetch_sub_relaxed(V,X) __atomic_fetch_op{once}(X,-,V) -atomic_fetch_sub_acquire(V,X) __atomic_fetch_op{acquire}(X,-,V) -atomic_fetch_sub_release(V,X) __atomic_fetch_op{release}(X,-,V) - -atomic_dec_return(X) __atomic_op_return{mb}(X,-,1) -atomic_dec_return_relaxed(X) __atomic_op_return{once}(X,-,1) -atomic_dec_return_acquire(X) __atomic_op_return{acquire}(X,-,1) -atomic_dec_return_release(X) __atomic_op_return{release}(X,-,1) -atomic_fetch_dec(X) __atomic_fetch_op{mb}(X,-,1) -atomic_fetch_dec_relaxed(X) __atomic_fetch_op{once}(X,-,1) -atomic_fetch_dec_acquire(X) __atomic_fetch_op{acquire}(X,-,1) -atomic_fetch_dec_release(X) __atomic_fetch_op{release}(X,-,1) - -atomic_xchg(X,V) __xchg{mb}(X,V) -atomic_xchg_relaxed(X,V) __xchg{once}(X,V) -atomic_xchg_release(X,V) __xchg{release}(X,V) -atomic_xchg_acquire(X,V) __xchg{acquire}(X,V) -atomic_cmpxchg(X,V,W) __cmpxchg{mb}(X,V,W) -atomic_cmpxchg_relaxed(X,V,W) __cmpxchg{once}(X,V,W) -atomic_cmpxchg_acquire(X,V,W) __cmpxchg{acquire}(X,V,W) -atomic_cmpxchg_release(X,V,W) __cmpxchg{release}(X,V,W) - -atomic_sub_and_test(V,X) __atomic_op_return{mb}(X,-,V) =3D=3D 0 -atomic_dec_and_test(X) __atomic_op_return{mb}(X,-,1) =3D=3D 0 -atomic_inc_and_test(X) __atomic_op_return{mb}(X,+,1) =3D=3D 0 -atomic_add_negative(V,X) __atomic_op_return{mb}(X,+,V) < 0 -atomic_add_negative_relaxed(V,X) __atomic_op_return{once}(X,+,V) < 0 -atomic_add_negative_acquire(V,X) __atomic_op_return{acquire}(X,+,V) < 0 -atomic_add_negative_release(V,X) __atomic_op_return{release}(X,+,V) < 0 - -atomic_fetch_andnot(V,X) __atomic_fetch_op{mb}(X,&~,V) -atomic_fetch_andnot_acquire(V,X) __atomic_fetch_op{acquire}(X,&~,V) -atomic_fetch_andnot_release(V,X) __atomic_fetch_op{release}(X,&~,V) -atomic_fetch_andnot_relaxed(V,X) __atomic_fetch_op{once}(X,&~,V) - -atomic_add_unless(X,V,W) __atomic_add_unless{mb}(X,V,W) +atomic_add(V,X) { __atomic_op{NORETURN}(X,+,V); } +atomic_sub(V,X) { __atomic_op{NORETURN}(X,-,V); } +atomic_and(V,X) { __atomic_op{NORETURN}(X,&,V); } +atomic_or(V,X) { __atomic_op{NORETURN}(X,|,V); } +atomic_xor(V,X) { __atomic_op{NORETURN}(X,^,V); } +atomic_inc(X) { __atomic_op{NORETURN}(X,+,1); } +atomic_dec(X) { __atomic_op{NORETURN}(X,-,1); } +atomic_andnot(V,X) { __atomic_op{NORETURN}(X,&~,V); } + +atomic_add_return(V,X) __atomic_op_return{MB}(X,+,V) +atomic_add_return_relaxed(V,X) __atomic_op_return{ONCE}(X,+,V) +atomic_add_return_acquire(V,X) __atomic_op_return{ACQUIRE}(X,+,V) +atomic_add_return_release(V,X) __atomic_op_return{RELEASE}(X,+,V) +atomic_fetch_add(V,X) __atomic_fetch_op{MB}(X,+,V) +atomic_fetch_add_relaxed(V,X) __atomic_fetch_op{ONCE}(X,+,V) +atomic_fetch_add_acquire(V,X) __atomic_fetch_op{ACQUIRE}(X,+,V) +atomic_fetch_add_release(V,X) __atomic_fetch_op{RELEASE}(X,+,V) + +atomic_fetch_and(V,X) __atomic_fetch_op{MB}(X,&,V) 
+atomic_fetch_and_relaxed(V,X) __atomic_fetch_op{ONCE}(X,&,V) +atomic_fetch_and_acquire(V,X) __atomic_fetch_op{ACQUIRE}(X,&,V) +atomic_fetch_and_release(V,X) __atomic_fetch_op{RELEASE}(X,&,V) + +atomic_fetch_or(V,X) __atomic_fetch_op{MB}(X,|,V) +atomic_fetch_or_relaxed(V,X) __atomic_fetch_op{ONCE}(X,|,V) +atomic_fetch_or_acquire(V,X) __atomic_fetch_op{ACQUIRE}(X,|,V) +atomic_fetch_or_release(V,X) __atomic_fetch_op{RELEASE}(X,|,V) + +atomic_fetch_xor(V,X) __atomic_fetch_op{MB}(X,^,V) +atomic_fetch_xor_relaxed(V,X) __atomic_fetch_op{ONCE}(X,^,V) +atomic_fetch_xor_acquire(V,X) __atomic_fetch_op{ACQUIRE}(X,^,V) +atomic_fetch_xor_release(V,X) __atomic_fetch_op{RELEASE}(X,^,V) + +atomic_inc_return(X) __atomic_op_return{MB}(X,+,1) +atomic_inc_return_relaxed(X) __atomic_op_return{ONCE}(X,+,1) +atomic_inc_return_acquire(X) __atomic_op_return{ACQUIRE}(X,+,1) +atomic_inc_return_release(X) __atomic_op_return{RELEASE}(X,+,1) +atomic_fetch_inc(X) __atomic_fetch_op{MB}(X,+,1) +atomic_fetch_inc_relaxed(X) __atomic_fetch_op{ONCE}(X,+,1) +atomic_fetch_inc_acquire(X) __atomic_fetch_op{ACQUIRE}(X,+,1) +atomic_fetch_inc_release(X) __atomic_fetch_op{RELEASE}(X,+,1) + +atomic_sub_return(V,X) __atomic_op_return{MB}(X,-,V) +atomic_sub_return_relaxed(V,X) __atomic_op_return{ONCE}(X,-,V) +atomic_sub_return_acquire(V,X) __atomic_op_return{ACQUIRE}(X,-,V) +atomic_sub_return_release(V,X) __atomic_op_return{RELEASE}(X,-,V) +atomic_fetch_sub(V,X) __atomic_fetch_op{MB}(X,-,V) +atomic_fetch_sub_relaxed(V,X) __atomic_fetch_op{ONCE}(X,-,V) +atomic_fetch_sub_acquire(V,X) __atomic_fetch_op{ACQUIRE}(X,-,V) +atomic_fetch_sub_release(V,X) __atomic_fetch_op{RELEASE}(X,-,V) + +atomic_dec_return(X) __atomic_op_return{MB}(X,-,1) +atomic_dec_return_relaxed(X) __atomic_op_return{ONCE}(X,-,1) +atomic_dec_return_acquire(X) __atomic_op_return{ACQUIRE}(X,-,1) +atomic_dec_return_release(X) __atomic_op_return{RELEASE}(X,-,1) +atomic_fetch_dec(X) __atomic_fetch_op{MB}(X,-,1) +atomic_fetch_dec_relaxed(X) __atomic_fetch_op{ONCE}(X,-,1) +atomic_fetch_dec_acquire(X) __atomic_fetch_op{ACQUIRE}(X,-,1) +atomic_fetch_dec_release(X) __atomic_fetch_op{RELEASE}(X,-,1) + +atomic_xchg(X,V) __xchg{MB}(X,V) +atomic_xchg_relaxed(X,V) __xchg{ONCE}(X,V) +atomic_xchg_release(X,V) __xchg{RELEASE}(X,V) +atomic_xchg_acquire(X,V) __xchg{ACQUIRE}(X,V) +atomic_cmpxchg(X,V,W) __cmpxchg{MB}(X,V,W) +atomic_cmpxchg_relaxed(X,V,W) __cmpxchg{ONCE}(X,V,W) +atomic_cmpxchg_acquire(X,V,W) __cmpxchg{ACQUIRE}(X,V,W) +atomic_cmpxchg_release(X,V,W) __cmpxchg{RELEASE}(X,V,W) + +atomic_sub_and_test(V,X) __atomic_op_return{MB}(X,-,V) =3D=3D 0 +atomic_dec_and_test(X) __atomic_op_return{MB}(X,-,1) =3D=3D 0 +atomic_inc_and_test(X) __atomic_op_return{MB}(X,+,1) =3D=3D 0 +atomic_add_negative(V,X) __atomic_op_return{MB}(X,+,V) < 0 +atomic_add_negative_relaxed(V,X) __atomic_op_return{ONCE}(X,+,V) < 0 +atomic_add_negative_acquire(V,X) __atomic_op_return{ACQUIRE}(X,+,V) < 0 +atomic_add_negative_release(V,X) __atomic_op_return{RELEASE}(X,+,V) < 0 + +atomic_fetch_andnot(V,X) __atomic_fetch_op{MB}(X,&~,V) +atomic_fetch_andnot_acquire(V,X) __atomic_fetch_op{ACQUIRE}(X,&~,V) +atomic_fetch_andnot_release(V,X) __atomic_fetch_op{RELEASE}(X,&~,V) +atomic_fetch_andnot_relaxed(V,X) __atomic_fetch_op{ONCE}(X,&~,V) + +atomic_add_unless(X,V,W) __atomic_add_unless{MB}(X,V,W) --=20 2.34.1
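As a closing illustration of the syntactic/semantic distinction this
patch introduces, here is a sketch of a message-passing litmus test
(illustrative, not part of the series) whose reading side uses a
cmpxchg_acquire() that always fails. The failed RMW's read carries the
syntactic 'ACQUIRE tag, but failed RMWs are filtered out of the
semantic Acquire set by the bell definitions above, so LKMM should
allow the weak outcome, in line with the kernel's documented rule that
a failed cmpxchg() provides no ordering:

C MP+porel+failed-cmpxchg-acquire

(*
 * Illustrative only.  The cmpxchg_acquire() compares against 2, but y
 * only ever holds 0 or 1, so the RMW always fails.  Its read is tagged
 * 'ACQUIRE syntactically, yet it is not in the semantic Acquire set,
 * so nothing orders the subsequent READ_ONCE().
 * Expected result: Sometimes.
 *)

{}

P0(int *x, int *y)
{
	WRITE_ONCE(*x, 1);
	smp_store_release(y, 1);
}

P1(int *x, int *y)
{
	int r0;
	int r1;

	r0 = cmpxchg_acquire(y, 2, 3);
	r1 = READ_ONCE(*x);
}

exists (1:r0=1 /\ 1:r1=0)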