From: Richard Henderson <richard.henderson@linaro.org>
To: qemu-devel@nongnu.org
Cc: peter.maydell@linaro.org, Alex Bennée
Subject: [PULL v2 01/14] util: Add interval-tree.c
Date: Tue, 20 Dec 2022 21:03:00 -0800
Message-Id: <20221221050313.2950701-2-richard.henderson@linaro.org>
In-Reply-To: <20221221050313.2950701-1-richard.henderson@linaro.org>
References: <20221221050313.2950701-1-richard.henderson@linaro.org>

Copy and simplify the Linux kernel's interval_tree_generic.h,
instantiating for uint64_t.
Reviewed-by: Alex Bennée
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
---
 include/qemu/interval-tree.h    |  99 ++++
 tests/unit/test-interval-tree.c | 209 ++++++++
 util/interval-tree.c            | 882 ++++++++++++++++++++++++++++++++
 tests/unit/meson.build          |   1 +
 util/meson.build                |   1 +
 5 files changed, 1192 insertions(+)
 create mode 100644 include/qemu/interval-tree.h
 create mode 100644 tests/unit/test-interval-tree.c
 create mode 100644 util/interval-tree.c

diff --git a/include/qemu/interval-tree.h b/include/qemu/interval-tree.h
new file mode 100644
index 0000000000..25006debe8
--- /dev/null
+++ b/include/qemu/interval-tree.h
@@ -0,0 +1,99 @@
+/* SPDX-License-Identifier: GPL-2.0-or-later */
+/*
+ * Interval trees.
+ *
+ * Derived from include/linux/interval_tree.h and its dependencies.
+ */
+
+#ifndef QEMU_INTERVAL_TREE_H
+#define QEMU_INTERVAL_TREE_H
+
+/*
+ * For now, don't expose Linux Red-Black Trees separately, but retain the
+ * separate type definitions to keep the implementation sane, and allow
+ * the possibility of disentangling them later.
+ */
+typedef struct RBNode
+{
+    /* Encodes parent with color in the lsb. */
+    uintptr_t rb_parent_color;
+    struct RBNode *rb_right;
+    struct RBNode *rb_left;
+} RBNode;
+
+typedef struct RBRoot
+{
+    RBNode *rb_node;
+} RBRoot;
+
+typedef struct RBRootLeftCached {
+    RBRoot rb_root;
+    RBNode *rb_leftmost;
+} RBRootLeftCached;
+
+typedef struct IntervalTreeNode
+{
+    RBNode rb;
+
+    uint64_t start;    /* Start of interval */
+    uint64_t last;     /* Last location _in_ interval */
+    uint64_t subtree_last;
+} IntervalTreeNode;
+
+typedef RBRootLeftCached IntervalTreeRoot;
+
+/**
+ * interval_tree_is_empty
+ * @root: root of the tree.
+ *
+ * Returns true if the tree contains no nodes.
+ */
+static inline bool interval_tree_is_empty(const IntervalTreeRoot *root)
+{
+    return root->rb_root.rb_node == NULL;
+}
+
+/**
+ * interval_tree_insert
+ * @node: node to insert,
+ * @root: root of the tree.
+ *
+ * Insert @node into @root, and rebalance.
+ */
+void interval_tree_insert(IntervalTreeNode *node, IntervalTreeRoot *root);
+
+/**
+ * interval_tree_remove
+ * @node: node to remove,
+ * @root: root of the tree.
+ *
+ * Remove @node from @root, and rebalance.
+ */
+void interval_tree_remove(IntervalTreeNode *node, IntervalTreeRoot *root);
+
+/**
+ * interval_tree_iter_first:
+ * @root: root of the tree,
+ * @start, @last: the inclusive interval [start, last].
+ *
+ * Locate the "first" of a set of nodes within the tree at @root
+ * that overlap the interval, where "first" is sorted by start.
+ * Returns NULL if no overlap found.
+ */
+IntervalTreeNode *interval_tree_iter_first(IntervalTreeRoot *root,
+                                           uint64_t start, uint64_t last);
+
+/**
+ * interval_tree_iter_next:
+ * @node: previous search result
+ * @start, @last: the inclusive interval [start, last].
+ *
+ * Locate the "next" of a set of nodes within the tree that overlap the
+ * interval; @next is the result of a previous call to
+ * interval_tree_iter_{first,next}.  Returns NULL if @next was the last
+ * node in the set.
+ */
+IntervalTreeNode *interval_tree_iter_next(IntervalTreeNode *node,
+                                          uint64_t start, uint64_t last);
+
+#endif /* QEMU_INTERVAL_TREE_H */
diff --git a/tests/unit/test-interval-tree.c b/tests/unit/test-interval-tree.c
new file mode 100644
index 0000000000..119817a019
--- /dev/null
+++ b/tests/unit/test-interval-tree.c
@@ -0,0 +1,209 @@
+/*
+ * Test interval trees
+ *
+ * This work is licensed under the terms of the GNU LGPL, version 2 or later.
+ * See the COPYING.LIB file in the top-level directory.
+ *
+ */
+
+#include "qemu/osdep.h"
+#include "qemu/interval-tree.h"
+
+static IntervalTreeNode nodes[20];
+static IntervalTreeRoot root;
+
+static void rand_interval(IntervalTreeNode *n, uint64_t start, uint64_t last)
+{
+    gint32 s_ofs, l_ofs, l_max;
+
+    if (last - start > INT32_MAX) {
+        l_max = INT32_MAX;
+    } else {
+        l_max = last - start;
+    }
+    s_ofs = g_test_rand_int_range(0, l_max);
+    l_ofs = g_test_rand_int_range(s_ofs, l_max);
+
+    n->start = start + s_ofs;
+    n->last = start + l_ofs;
+}
+
+static void test_empty(void)
+{
+    g_assert(root.rb_root.rb_node == NULL);
+    g_assert(root.rb_leftmost == NULL);
+    g_assert(interval_tree_iter_first(&root, 0, UINT64_MAX) == NULL);
+}
+
+static void test_find_one_point(void)
+{
+    /* Create a tree of a single node, which is the point [1,1]. */
+    nodes[0].start = 1;
+    nodes[0].last = 1;
+
+    interval_tree_insert(&nodes[0], &root);
+
+    g_assert(interval_tree_iter_first(&root, 0, 9) == &nodes[0]);
+    g_assert(interval_tree_iter_next(&nodes[0], 0, 9) == NULL);
+    g_assert(interval_tree_iter_first(&root, 0, 0) == NULL);
+    g_assert(interval_tree_iter_next(&nodes[0], 0, 0) == NULL);
+    g_assert(interval_tree_iter_first(&root, 0, 1) == &nodes[0]);
+    g_assert(interval_tree_iter_first(&root, 1, 1) == &nodes[0]);
+    g_assert(interval_tree_iter_first(&root, 1, 2) == &nodes[0]);
+    g_assert(interval_tree_iter_first(&root, 2, 2) == NULL);
+
+    interval_tree_remove(&nodes[0], &root);
+    g_assert(root.rb_root.rb_node == NULL);
+    g_assert(root.rb_leftmost == NULL);
+}
+
+static void test_find_two_point(void)
+{
+    IntervalTreeNode *find0, *find1;
+
+    /* Create a tree of two nodes, which are both the point [1,1]. */
+    nodes[0].start = 1;
+    nodes[0].last = 1;
+    nodes[1] = nodes[0];
+
+    interval_tree_insert(&nodes[0], &root);
+    interval_tree_insert(&nodes[1], &root);
+
+    find0 = interval_tree_iter_first(&root, 0, 9);
+    g_assert(find0 == &nodes[0] || find0 == &nodes[1]);
+
+    find1 = interval_tree_iter_next(find0, 0, 9);
+    g_assert(find1 == &nodes[0] || find1 == &nodes[1]);
+    g_assert(find0 != find1);
+
+    interval_tree_remove(&nodes[1], &root);
+
+    g_assert(interval_tree_iter_first(&root, 0, 9) == &nodes[0]);
+    g_assert(interval_tree_iter_next(&nodes[0], 0, 9) == NULL);
+
+    interval_tree_remove(&nodes[0], &root);
+}
+
+static void test_find_one_range(void)
+{
+    /* Create a tree of a single node, which is the range [1,8]. */
+    nodes[0].start = 1;
+    nodes[0].last = 8;
+
+    interval_tree_insert(&nodes[0], &root);
+
+    g_assert(interval_tree_iter_first(&root, 0, 9) == &nodes[0]);
+    g_assert(interval_tree_iter_next(&nodes[0], 0, 9) == NULL);
+    g_assert(interval_tree_iter_first(&root, 0, 0) == NULL);
+    g_assert(interval_tree_iter_first(&root, 0, 1) == &nodes[0]);
+    g_assert(interval_tree_iter_first(&root, 1, 1) == &nodes[0]);
+    g_assert(interval_tree_iter_first(&root, 4, 6) == &nodes[0]);
+    g_assert(interval_tree_iter_first(&root, 8, 8) == &nodes[0]);
+    g_assert(interval_tree_iter_first(&root, 9, 9) == NULL);
+
+    interval_tree_remove(&nodes[0], &root);
+}
+
+static void test_find_one_range_many(void)
+{
+    int i;
+
+    /*
+     * Create a tree of many nodes in [0,99] and [200,299],
+     * but only one node with exactly [110,190].
+     */
+    nodes[0].start = 110;
+    nodes[0].last = 190;
+
+    for (i = 1; i < ARRAY_SIZE(nodes) / 2; ++i) {
+        rand_interval(&nodes[i], 0, 99);
+    }
+    for (; i < ARRAY_SIZE(nodes); ++i) {
+        rand_interval(&nodes[i], 200, 299);
+    }
+
+    for (i = 0; i < ARRAY_SIZE(nodes); ++i) {
+        interval_tree_insert(&nodes[i], &root);
+    }
+
+    /* Test that we find exactly the one node. */
+    g_assert(interval_tree_iter_first(&root, 100, 199) == &nodes[0]);
+    g_assert(interval_tree_iter_next(&nodes[0], 100, 199) == NULL);
+    g_assert(interval_tree_iter_first(&root, 100, 109) == NULL);
+    g_assert(interval_tree_iter_first(&root, 100, 110) == &nodes[0]);
+    g_assert(interval_tree_iter_first(&root, 111, 120) == &nodes[0]);
+    g_assert(interval_tree_iter_first(&root, 111, 199) == &nodes[0]);
+    g_assert(interval_tree_iter_first(&root, 190, 199) == &nodes[0]);
+    g_assert(interval_tree_iter_first(&root, 192, 199) == NULL);
+
+    /*
+     * Test that if there are multiple matches, we return the one
+     * with the minimal start.
+     */
+    g_assert(interval_tree_iter_first(&root, 100, 300) == &nodes[0]);
+
+    /* Test that we don't find it after it is removed. */
+    interval_tree_remove(&nodes[0], &root);
+    g_assert(interval_tree_iter_first(&root, 100, 199) == NULL);
+
+    for (i = 1; i < ARRAY_SIZE(nodes); ++i) {
+        interval_tree_remove(&nodes[i], &root);
+    }
+}
+
+static void test_find_many_range(void)
+{
+    IntervalTreeNode *find;
+    int i, n;
+
+    n = g_test_rand_int_range(ARRAY_SIZE(nodes) / 3, ARRAY_SIZE(nodes) / 2);
+
+    /*
+     * Create a fair few nodes in [2000,2999], with the others
+     * distributed around.
+     */
+    for (i = 0; i < n; ++i) {
+        rand_interval(&nodes[i], 2000, 2999);
+    }
+    for (; i < ARRAY_SIZE(nodes) * 2 / 3; ++i) {
+        rand_interval(&nodes[i], 1000, 1899);
+    }
+    for (; i < ARRAY_SIZE(nodes); ++i) {
+        rand_interval(&nodes[i], 3100, 3999);
+    }
+
+    for (i = 0; i < ARRAY_SIZE(nodes); ++i) {
+        interval_tree_insert(&nodes[i], &root);
+    }
+
+    /* Test that we find all of the nodes. */
+    find = interval_tree_iter_first(&root, 2000, 2999);
+    for (i = 0; find != NULL; i++) {
+        find = interval_tree_iter_next(find, 2000, 2999);
+    }
+    g_assert_cmpint(i, ==, n);
+
+    g_assert(interval_tree_iter_first(&root, 0, 999) == NULL);
+    g_assert(interval_tree_iter_first(&root, 1900, 1999) == NULL);
+    g_assert(interval_tree_iter_first(&root, 3000, 3099) == NULL);
+    g_assert(interval_tree_iter_first(&root, 4000, UINT64_MAX) == NULL);
+
+    for (i = 0; i < ARRAY_SIZE(nodes); ++i) {
+        interval_tree_remove(&nodes[i], &root);
+    }
+}
+
+int main(int argc, char **argv)
+{
+    g_test_init(&argc, &argv, NULL);
+
+    g_test_add_func("/interval-tree/empty", test_empty);
+    g_test_add_func("/interval-tree/find-one-point", test_find_one_point);
+    g_test_add_func("/interval-tree/find-two-point", test_find_two_point);
+    g_test_add_func("/interval-tree/find-one-range", test_find_one_range);
+    g_test_add_func("/interval-tree/find-one-range-many",
+                    test_find_one_range_many);
+    g_test_add_func("/interval-tree/find-many-range", test_find_many_range);
+
+    return g_test_run();
+}
diff --git a/util/interval-tree.c b/util/interval-tree.c
new file mode 100644
index 0000000000..4c0baf108f
--- /dev/null
+++ b/util/interval-tree.c
@@ -0,0 +1,882 @@
+/* SPDX-License-Identifier: GPL-2.0-or-later */
+
+#include "qemu/osdep.h"
+#include "qemu/interval-tree.h"
+#include "qemu/atomic.h"
+
+/*
+ * Red Black Trees.
+ *
+ * For now, don't expose Linux Red-Black Trees separately, but retain the
+ * separate type definitions to keep the implementation sane, and allow
+ * the possibility of separating them later.
+ *
+ * Derived from include/linux/rbtree_augmented.h and its dependencies.
+ */
+
+/*
+ * red-black trees properties:  https://en.wikipedia.org/wiki/Rbtree
+ *
+ *  1) A node is either red or black
+ *  2) The root is black
+ *  3) All leaves (NULL) are black
+ *  4) Both children of every red node are black
+ *  5) Every simple path from root to leaves contains the same number
+ *     of black nodes.
+ *
+ *  4 and 5 give the O(log n) guarantee, since 4 implies you cannot have two
+ *  consecutive red nodes in a path and every red node is therefore followed by
+ *  a black.  So if B is the number of black nodes on every simple path (as per
+ *  5), then the longest possible path due to 4 is 2B.
+ *
+ *  We shall indicate color with case, where black nodes are uppercase and red
+ *  nodes will be lowercase.  Unknown color nodes shall be drawn as red within
+ *  parentheses and have some accompanying text comment.
+ *
+ * Notes on lockless lookups:
+ *
+ * All stores to the tree structure (rb_left and rb_right) must be done using
+ * WRITE_ONCE [qatomic_set for QEMU].  And we must not inadvertently cause
+ * (temporary) loops in the tree structure as seen in program order.
+ *
+ * These two requirements will allow lockless iteration of the tree -- not
+ * correct iteration mind you, tree rotations are not atomic so a lookup might
+ * miss entire subtrees.
+ *
+ * But they do guarantee that any such traversal will only see valid elements
+ * and that it will indeed complete -- does not get stuck in a loop.
+ *
+ * It also guarantees that if the lookup returns an element it is the 'correct'
+ * one.  But not returning an element does _NOT_ mean it's not present.
+ *
+ * NOTE:
+ *
+ * Stores to __rb_parent_color are not important for simple lookups so those
+ * are left undone as of now.  Nor did I check for loops involving parent
+ * pointers.
+ */
+
+typedef enum RBColor
+{
+    RB_RED,
+    RB_BLACK,
+} RBColor;
+
+typedef struct RBAugmentCallbacks {
+    void (*propagate)(RBNode *node, RBNode *stop);
+    void (*copy)(RBNode *old, RBNode *new);
+    void (*rotate)(RBNode *old, RBNode *new);
+} RBAugmentCallbacks;
+
+static inline RBNode *rb_parent(const RBNode *n)
+{
+    return (RBNode *)(n->rb_parent_color & ~1);
+}
+
+static inline RBNode *rb_red_parent(const RBNode *n)
+{
+    return (RBNode *)n->rb_parent_color;
+}
+
+static inline RBColor pc_color(uintptr_t pc)
+{
+    return (RBColor)(pc & 1);
+}
+
+static inline bool pc_is_red(uintptr_t pc)
+{
+    return pc_color(pc) == RB_RED;
+}
+
+static inline bool pc_is_black(uintptr_t pc)
+{
+    return !pc_is_red(pc);
+}
+
+static inline RBColor rb_color(const RBNode *n)
+{
+    return pc_color(n->rb_parent_color);
+}
+
+static inline bool rb_is_red(const RBNode *n)
+{
+    return pc_is_red(n->rb_parent_color);
+}
+
+static inline bool rb_is_black(const RBNode *n)
+{
+    return pc_is_black(n->rb_parent_color);
+}
+
+static inline void rb_set_black(RBNode *n)
+{
+    n->rb_parent_color |= RB_BLACK;
+}
+
+static inline void rb_set_parent_color(RBNode *n, RBNode *p, RBColor color)
+{
+    n->rb_parent_color = (uintptr_t)p | color;
+}
+
+static inline void rb_set_parent(RBNode *n, RBNode *p)
+{
+    rb_set_parent_color(n, p, rb_color(n));
+}
+
+static inline void rb_link_node(RBNode *node, RBNode *parent, RBNode **rb_link)
+{
+    node->rb_parent_color = (uintptr_t)parent;
+    node->rb_left = node->rb_right = NULL;
+
+    qatomic_set(rb_link, node);
+}
+
+static RBNode *rb_next(RBNode *node)
+{
+    RBNode *parent;
+
+    /* OMIT: if empty node, return null. */
+
+    /*
+     * If we have a right-hand child, go down and then left as far as we can.
+     */
+    if (node->rb_right) {
+        node = node->rb_right;
+        while (node->rb_left) {
+            node = node->rb_left;
+        }
+        return node;
+    }
+
+    /*
+     * No right-hand children.  Everything down and left is smaller than us,
+     * so any 'next' node must be in the general direction of our parent.
+     * Go up the tree; any time the ancestor is a right-hand child of its
+     * parent, keep going up.  First time it's a left-hand child of its
+     * parent, said parent is our 'next' node.
+     */
+    while ((parent = rb_parent(node)) && node == parent->rb_right) {
+        node = parent;
+    }
+
+    return parent;
+}
+
+static inline void rb_change_child(RBNode *old, RBNode *new,
+                                   RBNode *parent, RBRoot *root)
+{
+    if (!parent) {
+        qatomic_set(&root->rb_node, new);
+    } else if (parent->rb_left == old) {
+        qatomic_set(&parent->rb_left, new);
+    } else {
+        qatomic_set(&parent->rb_right, new);
+    }
+}
+
+static inline void rb_rotate_set_parents(RBNode *old, RBNode *new,
+                                         RBRoot *root, RBColor color)
+{
+    RBNode *parent = rb_parent(old);
+
+    new->rb_parent_color = old->rb_parent_color;
+    rb_set_parent_color(old, new, color);
+    rb_change_child(old, new, parent, root);
+}
+
+static void rb_insert_augmented(RBNode *node, RBRoot *root,
+                                const RBAugmentCallbacks *augment)
+{
+    RBNode *parent = rb_red_parent(node), *gparent, *tmp;
+
+    while (true) {
+        /*
+         * Loop invariant: node is red.
+         */
+        if (unlikely(!parent)) {
+            /*
+             * The inserted node is root. Either this is the first node, or
+             * we recursed at Case 1 below and are no longer violating 4).
+             */
+            rb_set_parent_color(node, NULL, RB_BLACK);
+            break;
+        }
+
+        /*
+         * If there is a black parent, we are done.  Otherwise, take some
+         * corrective action as, per 4), we don't want a red root or two
+         * consecutive red nodes.
+         */
+        if (rb_is_black(parent)) {
+            break;
+        }
+
+        gparent = rb_red_parent(parent);
+
+        tmp = gparent->rb_right;
+        if (parent != tmp) {    /* parent == gparent->rb_left */
+            if (tmp && rb_is_red(tmp)) {
+                /*
+                 * Case 1 - node's uncle is red (color flips).
+                 *
+                 *       G            g
+                 *      / \          / \
+                 *     p   u  -->   P   U
+                 *    /            /
+                 *   n            n
+                 *
+                 * However, since g's parent might be red, and 4) does not
+                 * allow this, we need to recurse at g.
+                 */
+                rb_set_parent_color(tmp, gparent, RB_BLACK);
+                rb_set_parent_color(parent, gparent, RB_BLACK);
+                node = gparent;
+                parent = rb_parent(node);
+                rb_set_parent_color(node, parent, RB_RED);
+                continue;
+            }
+
+            tmp = parent->rb_right;
+            if (node == tmp) {
+                /*
+                 * Case 2 - node's uncle is black and node is
+                 * the parent's right child (left rotate at parent).
+                 *
+                 *      G             G
+                 *     / \           / \
+                 *    p   U  -->    n   U
+                 *     \           /
+                 *      n         p
+                 *
+                 * This still leaves us in violation of 4), the
+                 * continuation into Case 3 will fix that.
+                 */
+                tmp = node->rb_left;
+                qatomic_set(&parent->rb_right, tmp);
+                qatomic_set(&node->rb_left, parent);
+                if (tmp) {
+                    rb_set_parent_color(tmp, parent, RB_BLACK);
+                }
+                rb_set_parent_color(parent, node, RB_RED);
+                augment->rotate(parent, node);
+                parent = node;
+                tmp = node->rb_right;
+            }
+
+            /*
+             * Case 3 - node's uncle is black and node is
+             * the parent's left child (right rotate at gparent).
+             *
+             *        G           P
+             *       / \         / \
+             *      p   U  -->  n   g
+             *     /                 \
+             *    n                   U
+             */
+            qatomic_set(&gparent->rb_left, tmp); /* == parent->rb_right */
+            qatomic_set(&parent->rb_right, gparent);
+            if (tmp) {
+                rb_set_parent_color(tmp, gparent, RB_BLACK);
+            }
+            rb_rotate_set_parents(gparent, parent, root, RB_RED);
+            augment->rotate(gparent, parent);
+            break;
+        } else {
+            tmp = gparent->rb_left;
+            if (tmp && rb_is_red(tmp)) {
+                /* Case 1 - color flips */
+                rb_set_parent_color(tmp, gparent, RB_BLACK);
+                rb_set_parent_color(parent, gparent, RB_BLACK);
+                node = gparent;
+                parent = rb_parent(node);
+                rb_set_parent_color(node, parent, RB_RED);
+                continue;
+            }
+
+            tmp = parent->rb_left;
+            if (node == tmp) {
+                /* Case 2 - right rotate at parent */
+                tmp = node->rb_right;
+                qatomic_set(&parent->rb_left, tmp);
+                qatomic_set(&node->rb_right, parent);
+                if (tmp) {
+                    rb_set_parent_color(tmp, parent, RB_BLACK);
+                }
+                rb_set_parent_color(parent, node, RB_RED);
+                augment->rotate(parent, node);
+                parent = node;
+                tmp = node->rb_left;
+            }
+
+            /* Case 3 - left rotate at gparent */
+            qatomic_set(&gparent->rb_right, tmp); /* == parent->rb_left */
+            qatomic_set(&parent->rb_left, gparent);
+            if (tmp) {
+                rb_set_parent_color(tmp, gparent, RB_BLACK);
+            }
+            rb_rotate_set_parents(gparent, parent, root, RB_RED);
+            augment->rotate(gparent, parent);
+            break;
+        }
+    }
+}
+
+static void rb_insert_augmented_cached(RBNode *node,
+                                       RBRootLeftCached *root, bool newleft,
+                                       const RBAugmentCallbacks *augment)
+{
+    if (newleft) {
+        root->rb_leftmost = node;
+    }
+    rb_insert_augmented(node, &root->rb_root, augment);
+}
+
+static void rb_erase_color(RBNode *parent, RBRoot *root,
+                           const RBAugmentCallbacks *augment)
+{
+    RBNode *node = NULL, *sibling, *tmp1, *tmp2;
+
+    while (true) {
+        /*
+         * Loop invariants:
+         * - node is black (or NULL on first iteration)
+         * - node is not the root (parent is not NULL)
+         * - All leaf paths going through parent and node have a
+         *   black node count that is 1 lower than other leaf paths.
+         */
+        sibling = parent->rb_right;
+        if (node != sibling) {    /* node == parent->rb_left */
+            if (rb_is_red(sibling)) {
+                /*
+                 * Case 1 - left rotate at parent
+                 *
+                 *     P               S
+                 *    / \             / \
+                 *   N   s    -->    p   Sr
+                 *      / \         / \
+                 *     Sl  Sr      N   Sl
+                 */
+                tmp1 = sibling->rb_left;
+                qatomic_set(&parent->rb_right, tmp1);
+                qatomic_set(&sibling->rb_left, parent);
+                rb_set_parent_color(tmp1, parent, RB_BLACK);
+                rb_rotate_set_parents(parent, sibling, root, RB_RED);
+                augment->rotate(parent, sibling);
+                sibling = tmp1;
+            }
+            tmp1 = sibling->rb_right;
+            if (!tmp1 || rb_is_black(tmp1)) {
+                tmp2 = sibling->rb_left;
+                if (!tmp2 || rb_is_black(tmp2)) {
+                    /*
+                     * Case 2 - sibling color flip
+                     * (p could be either color here)
+                     *
+                     *    (p)           (p)
+                     *    / \           / \
+                     *   N   S    -->  N   s
+                     *      / \           / \
+                     *     Sl  Sr        Sl  Sr
+                     *
+                     * This leaves us violating 5) which
+                     * can be fixed by flipping p to black
+                     * if it was red, or by recursing at p.
+                     * p is red when coming from Case 1.
+                     */
+                    rb_set_parent_color(sibling, parent, RB_RED);
+                    if (rb_is_red(parent)) {
+                        rb_set_black(parent);
+                    } else {
+                        node = parent;
+                        parent = rb_parent(node);
+                        if (parent) {
+                            continue;
+                        }
+                    }
+                    break;
+                }
+                /*
+                 * Case 3 - right rotate at sibling
+                 * (p could be either color here)
+                 *
+                 *   (p)           (p)
+                 *   / \           / \
+                 *  N   S    -->  N   sl
+                 *     / \             \
+                 *    sl  Sr            S
+                 *                       \
+                 *                        Sr
+                 *
+                 * Note: p might be red, and then both
+                 * p and sl are red after rotation (which
+                 * breaks property 4).  This is fixed in
+                 * Case 4 (in rb_rotate_set_parents()
+                 *         which set sl the color of p
+                 *         and set p RB_BLACK)
+                 *
+                 *   (p)            (sl)
+                 *   / \            /  \
+                 *  N   sl   -->   P    S
+                 *       \        /      \
+                 *        S      N        Sr
+                 *         \
+                 *          Sr
+                 */
+                tmp1 = tmp2->rb_right;
+                qatomic_set(&sibling->rb_left, tmp1);
+                qatomic_set(&tmp2->rb_right, sibling);
+                qatomic_set(&parent->rb_right, tmp2);
+                if (tmp1) {
+                    rb_set_parent_color(tmp1, sibling, RB_BLACK);
+                }
+                augment->rotate(sibling, tmp2);
+                tmp1 = sibling;
+                sibling = tmp2;
+            }
+            /*
+             * Case 4 - left rotate at parent + color flips
+             * (p and sl could be either color here.
+             *  After rotation, p becomes black, s acquires
+             *  p's color, and sl keeps its color)
+             *
+             *      (p)             (s)
+             *      / \             / \
+             *     N   S     -->   P   Sr
+             *        / \         / \
+             *      (sl) sr      N  (sl)
+             */
+            tmp2 = sibling->rb_left;
+            qatomic_set(&parent->rb_right, tmp2);
+            qatomic_set(&sibling->rb_left, parent);
+            rb_set_parent_color(tmp1, sibling, RB_BLACK);
+            if (tmp2) {
+                rb_set_parent(tmp2, parent);
+            }
+            rb_rotate_set_parents(parent, sibling, root, RB_BLACK);
+            augment->rotate(parent, sibling);
+            break;
+        } else {
+            sibling = parent->rb_left;
+            if (rb_is_red(sibling)) {
+                /* Case 1 - right rotate at parent */
+                tmp1 = sibling->rb_right;
+                qatomic_set(&parent->rb_left, tmp1);
+                qatomic_set(&sibling->rb_right, parent);
+                rb_set_parent_color(tmp1, parent, RB_BLACK);
+                rb_rotate_set_parents(parent, sibling, root, RB_RED);
+                augment->rotate(parent, sibling);
+                sibling = tmp1;
+            }
+            tmp1 = sibling->rb_left;
+            if (!tmp1 || rb_is_black(tmp1)) {
+                tmp2 = sibling->rb_right;
+                if (!tmp2 || rb_is_black(tmp2)) {
+                    /* Case 2 - sibling color flip */
+                    rb_set_parent_color(sibling, parent, RB_RED);
+                    if (rb_is_red(parent)) {
+                        rb_set_black(parent);
+                    } else {
+                        node = parent;
+                        parent = rb_parent(node);
+                        if (parent) {
+                            continue;
+                        }
+                    }
+                    break;
+                }
+                /* Case 3 - left rotate at sibling */
+                tmp1 = tmp2->rb_left;
+                qatomic_set(&sibling->rb_right, tmp1);
+                qatomic_set(&tmp2->rb_left, sibling);
+                qatomic_set(&parent->rb_left, tmp2);
+                if (tmp1) {
+                    rb_set_parent_color(tmp1, sibling, RB_BLACK);
+                }
+                augment->rotate(sibling, tmp2);
+                tmp1 = sibling;
+                sibling = tmp2;
+            }
+            /* Case 4 - right rotate at parent + color flips */
+            tmp2 = sibling->rb_right;
+            qatomic_set(&parent->rb_left, tmp2);
+            qatomic_set(&sibling->rb_right, parent);
+            rb_set_parent_color(tmp1, sibling, RB_BLACK);
+            if (tmp2) {
+                rb_set_parent(tmp2, parent);
+            }
+            rb_rotate_set_parents(parent, sibling, root, RB_BLACK);
+            augment->rotate(parent, sibling);
+            break;
+        }
+    }
+}
+
+static void rb_erase_augmented(RBNode *node, RBRoot *root,
+                               const RBAugmentCallbacks *augment)
+{
+    RBNode *child = node->rb_right;
+    RBNode *tmp = node->rb_left;
+    RBNode *parent, *rebalance;
+    uintptr_t pc;
+
+    if (!tmp) {
+        /*
+         * Case 1: node to erase has no more than 1 child (easy!)
+         *
+         * Note that if there is one child it must be red due to 5)
+         * and node must be black due to 4).  We adjust colors locally
+         * so as to bypass rb_erase_color() later on.
+         */
+        pc = node->rb_parent_color;
+        parent = rb_parent(node);
+        rb_change_child(node, child, parent, root);
+        if (child) {
+            child->rb_parent_color = pc;
+            rebalance = NULL;
+        } else {
+            rebalance = pc_is_black(pc) ? parent : NULL;
+        }
+        tmp = parent;
+    } else if (!child) {
+        /* Still case 1, but this time the child is node->rb_left */
+        pc = node->rb_parent_color;
+        parent = rb_parent(node);
+        tmp->rb_parent_color = pc;
+        rb_change_child(node, tmp, parent, root);
+        rebalance = NULL;
+        tmp = parent;
+    } else {
+        RBNode *successor = child, *child2;
+        tmp = child->rb_left;
+        if (!tmp) {
+            /*
+             * Case 2: node's successor is its right child
+             *
+             *    (n)          (s)
+             *    / \          / \
+             *  (x) (s)  ->  (x) (c)
+             *        \
+             *        (c)
+             */
+            parent = successor;
+            child2 = successor->rb_right;
+
+            augment->copy(node, successor);
+        } else {
+            /*
+             * Case 3: node's successor is leftmost under
+             * node's right child subtree
+             *
+             *    (n)          (s)
+             *    / \          / \
+             *  (x) (y)  ->  (x) (y)
+             *      /            /
+             *    (p)          (p)
+             *    /            /
+             *  (s)          (c)
+             *    \
+             *    (c)
+             */
+            do {
+                parent = successor;
+                successor = tmp;
+                tmp = tmp->rb_left;
+            } while (tmp);
+            child2 = successor->rb_right;
+            qatomic_set(&parent->rb_left, child2);
+            qatomic_set(&successor->rb_right, child);
+            rb_set_parent(child, successor);
+
+            augment->copy(node, successor);
+            augment->propagate(parent, successor);
+        }
+
+        tmp = node->rb_left;
+        qatomic_set(&successor->rb_left, tmp);
+        rb_set_parent(tmp, successor);
+
+        pc = node->rb_parent_color;
+        tmp = rb_parent(node);
+        rb_change_child(node, successor, tmp, root);
+
+        if (child2) {
+            rb_set_parent_color(child2, parent, RB_BLACK);
+            rebalance = NULL;
+        } else {
+            rebalance = rb_is_black(successor) ? parent : NULL;
+        }
+        successor->rb_parent_color = pc;
+        tmp = successor;
+    }
+
+    augment->propagate(tmp, NULL);
+
+    if (rebalance) {
+        rb_erase_color(rebalance, root, augment);
+    }
+}
+
+static void rb_erase_augmented_cached(RBNode *node, RBRootLeftCached *root,
+                                      const RBAugmentCallbacks *augment)
+{
+    if (root->rb_leftmost == node) {
+        root->rb_leftmost = rb_next(node);
+    }
+    rb_erase_augmented(node, &root->rb_root, augment);
+}
+
+
+/*
+ * Interval trees.
+ * + * Derived from lib/interval_tree.c and its dependencies, + * especially include/linux/interval_tree_generic.h. + */ + +#define rb_to_itree(N) container_of(N, IntervalTreeNode, rb) + +static bool interval_tree_compute_max(IntervalTreeNode *node, bool exit) +{ + IntervalTreeNode *child; + uint64_t max =3D node->last; + + if (node->rb.rb_left) { + child =3D rb_to_itree(node->rb.rb_left); + if (child->subtree_last > max) { + max =3D child->subtree_last; + } + } + if (node->rb.rb_right) { + child =3D rb_to_itree(node->rb.rb_right); + if (child->subtree_last > max) { + max =3D child->subtree_last; + } + } + if (exit && node->subtree_last =3D=3D max) { + return true; + } + node->subtree_last =3D max; + return false; +} + +static void interval_tree_propagate(RBNode *rb, RBNode *stop) +{ + while (rb !=3D stop) { + IntervalTreeNode *node =3D rb_to_itree(rb); + if (interval_tree_compute_max(node, true)) { + break; + } + rb =3D rb_parent(&node->rb); + } +} + +static void interval_tree_copy(RBNode *rb_old, RBNode *rb_new) +{ + IntervalTreeNode *old =3D rb_to_itree(rb_old); + IntervalTreeNode *new =3D rb_to_itree(rb_new); + + new->subtree_last =3D old->subtree_last; +} + +static void interval_tree_rotate(RBNode *rb_old, RBNode *rb_new) +{ + IntervalTreeNode *old =3D rb_to_itree(rb_old); + IntervalTreeNode *new =3D rb_to_itree(rb_new); + + new->subtree_last =3D old->subtree_last; + interval_tree_compute_max(old, false); +} + +static const RBAugmentCallbacks interval_tree_augment =3D { + .propagate =3D interval_tree_propagate, + .copy =3D interval_tree_copy, + .rotate =3D interval_tree_rotate, +}; + +/* Insert / remove interval nodes from the tree */ +void interval_tree_insert(IntervalTreeNode *node, IntervalTreeRoot *root) +{ + RBNode **link =3D &root->rb_root.rb_node, *rb_parent =3D NULL; + uint64_t start =3D node->start, last =3D node->last; + IntervalTreeNode *parent; + bool leftmost =3D true; + + while (*link) { + rb_parent =3D *link; + parent =3D 
rb_to_itree(rb_parent);
+
+        if (parent->subtree_last < last) {
+            parent->subtree_last = last;
+        }
+        if (start < parent->start) {
+            link = &parent->rb.rb_left;
+        } else {
+            link = &parent->rb.rb_right;
+            leftmost = false;
+        }
+    }
+
+    node->subtree_last = last;
+    rb_link_node(&node->rb, rb_parent, link);
+    rb_insert_augmented_cached(&node->rb, root, leftmost,
+                               &interval_tree_augment);
+}
+
+void interval_tree_remove(IntervalTreeNode *node, IntervalTreeRoot *root)
+{
+    rb_erase_augmented_cached(&node->rb, root, &interval_tree_augment);
+}
+
+/*
+ * Iterate over intervals intersecting [start;last]
+ *
+ * Note that a node's interval intersects [start;last] iff:
+ *   Cond1: node->start <= last
+ * and
+ *   Cond2: start <= node->last
+ */
+
+static IntervalTreeNode *interval_tree_subtree_search(IntervalTreeNode *node,
+                                                      uint64_t start,
+                                                      uint64_t last)
+{
+    while (true) {
+        /*
+         * Loop invariant: start <= node->subtree_last
+         * (Cond2 is satisfied by one of the subtree nodes)
+         */
+        if (node->rb.rb_left) {
+            IntervalTreeNode *left = rb_to_itree(node->rb.rb_left);
+
+            if (start <= left->subtree_last) {
+                /*
+                 * Some nodes in left subtree satisfy Cond2.
+                 * Iterate to find the leftmost such node N.
+                 * If it also satisfies Cond1, that's the
+                 * match we are looking for. Otherwise, there
+                 * is no matching interval as nodes to the
+                 * right of N can't satisfy Cond1 either.
+                 */
+                node = left;
+                continue;
+            }
+        }
+        if (node->start <= last) {          /* Cond1 */
+            if (start <= node->last) {      /* Cond2 */
+                return node;                /* node is leftmost match */
+            }
+            if (node->rb.rb_right) {
+                node = rb_to_itree(node->rb.rb_right);
+                if (start <= node->subtree_last) {
+                    continue;
+                }
+            }
+        }
+        return NULL;    /* no match */
+    }
+}
+
+IntervalTreeNode *interval_tree_iter_first(IntervalTreeRoot *root,
+                                           uint64_t start, uint64_t last)
+{
+    IntervalTreeNode *node, *leftmost;
+
+    if (!root->rb_root.rb_node) {
+        return NULL;
+    }
+
+    /*
+     * Fastpath range intersection/overlap between A: [a0, a1] and
+     * B: [b0, b1] is given by:
+     *
+     *     a0 <= b1 && b0 <= a1
+     *
+     * ... where A holds the queried range and B holds the smallest
+     * 'start' and largest 'last' in the tree. For the latter, we
+     * rely on the root node, which by the augmented interval tree
+     * property holds the largest value in its last-in-subtree.
+     * This allows mitigating some of the tree walk overhead for
+     * non-intersecting ranges, maintained and consulted in O(1).
+     */
+    node = rb_to_itree(root->rb_root.rb_node);
+    if (node->subtree_last < start) {
+        return NULL;
+    }
+
+    leftmost = rb_to_itree(root->rb_leftmost);
+    if (leftmost->start > last) {
+        return NULL;
+    }
+
+    return interval_tree_subtree_search(node, start, last);
+}
+
+IntervalTreeNode *interval_tree_iter_next(IntervalTreeNode *node,
+                                          uint64_t start, uint64_t last)
+{
+    RBNode *rb = node->rb.rb_right, *prev;
+
+    while (true) {
+        /*
+         * Loop invariants:
+         *   Cond1: node->start <= last
+         *   rb == node->rb.rb_right
+         *
+         * First, search right subtree if suitable
+         */
+        if (rb) {
+            IntervalTreeNode *right = rb_to_itree(rb);
+
+            if (start <= right->subtree_last) {
+                return interval_tree_subtree_search(right, start, last);
+            }
+        }
+
+        /* Move up the tree until we come from a node's left child */
+        do {
+            rb = rb_parent(&node->rb);
+            if (!rb) {
+                return NULL;
+            }
+            prev = &node->rb;
+            node = rb_to_itree(rb);
+            rb = node->rb.rb_right;
+        } while (prev == rb);
+
+        /* Check if the node intersects [start;last] */
+        if (last < node->start) {    /* !Cond1 */
+            return NULL;
+        }
+        if (start <= node->last) {   /* Cond2 */
+            return node;
+        }
+    }
+}
+
+/* Occasionally useful for calling from within the debugger. */
+#if 0
+static void debug_interval_tree_int(IntervalTreeNode *node,
+                                    const char *dir, int level)
+{
+    printf("%4d %*s %s [%" PRIu64 ",%" PRIu64 "] subtree_last:%" PRIu64 "\n",
+           level, level + 1, dir, rb_is_red(&node->rb) ?
"r" : "b", + node->start, node->last, node->subtree_last); + + if (node->rb.rb_left) { + debug_interval_tree_int(rb_to_itree(node->rb.rb_left), "<", level = + 1); + } + if (node->rb.rb_right) { + debug_interval_tree_int(rb_to_itree(node->rb.rb_right), ">", level= + 1); + } +} + +void debug_interval_tree(IntervalTreeNode *node); +void debug_interval_tree(IntervalTreeNode *node) +{ + if (node) { + debug_interval_tree_int(node, "*", 0); + } else { + printf("null\n"); + } +} +#endif diff --git a/tests/unit/meson.build b/tests/unit/meson.build index b497a41378..ffa444f432 100644 --- a/tests/unit/meson.build +++ b/tests/unit/meson.build @@ -47,6 +47,7 @@ tests =3D { 'ptimer-test': ['ptimer-test-stubs.c', meson.project_source_root() / 'hw= /core/ptimer.c'], 'test-qapi-util': [], 'test-smp-parse': [qom, meson.project_source_root() / 'hw/core/machine-s= mp.c'], + 'test-interval-tree': [], } =20 if have_system or have_tools diff --git a/util/meson.build b/util/meson.build index 25b9b61f98..d8d109ff84 100644 --- a/util/meson.build +++ b/util/meson.build @@ -57,6 +57,7 @@ util_ss.add(files('guest-random.c')) util_ss.add(files('yank.c')) util_ss.add(files('int128.c')) util_ss.add(files('memalign.c')) +util_ss.add(files('interval-tree.c')) =20 if have_user util_ss.add(files('selfmap.c')) --=20 2.34.1 From nobody Mon May 20 19:51:38 2024 Delivered-To: importer@patchew.org Authentication-Results: mx.zohomail.com; dkim=pass; spf=pass (zohomail.com: domain of gnu.org designates 209.51.188.17 as permitted sender) smtp.mailfrom=qemu-devel-bounces+importer=patchew.org@nongnu.org; dmarc=pass(p=none dis=none) header.from=linaro.org ARC-Seal: i=1; a=rsa-sha256; t=1671599048; cv=none; d=zohomail.com; s=zohoarc; b=lrHAS2d9xtZReggC6O75/igLwoW5Byrv+ohpLxy5WnlynoRU2NgkSKDq9GDBjhO2zvBNei6ZrEPxiqmaTGXbo2q8layCQfYkbNlbSS54Ecss2JSdfsuFcA729nHMQbkOKway4CEmOfe75C5CLGmysNRBcq4KVtSBECM5lwGlulo= ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=zohomail.com; s=zohoarc; t=1671599048; 
From: Richard Henderson
To: qemu-devel@nongnu.org
Cc: peter.maydell@linaro.org,
    Alex Bennée, Philippe Mathieu-Daudé
Subject: [PULL v2 02/14] accel/tcg: Rename page_flush_tb
Date: Tue, 20 Dec 2022 21:03:01 -0800
Message-Id: <20221221050313.2950701-3-richard.henderson@linaro.org>
In-Reply-To: <20221221050313.2950701-1-richard.henderson@linaro.org>
References: <20221221050313.2950701-1-richard.henderson@linaro.org>

Rename to tb_remove_all, to remove the PageDesc "page" from the name,
and to avoid suggesting a "flush" in the icache sense.
Reviewed-by: Alex Bennée
Reviewed-by: Philippe Mathieu-Daudé
Signed-off-by: Richard Henderson
---
 accel/tcg/tb-maint.c | 10 +++++-----
 1 file changed, 5 insertions(+), 5 deletions(-)

diff --git a/accel/tcg/tb-maint.c b/accel/tcg/tb-maint.c
index 0cdb35548c..b5b90347ae 100644
--- a/accel/tcg/tb-maint.c
+++ b/accel/tcg/tb-maint.c
@@ -51,7 +51,7 @@ void tb_htable_init(void)
 }
 
 /* Set to NULL all the 'first_tb' fields in all PageDescs. */
-static void page_flush_tb_1(int level, void **lp)
+static void tb_remove_all_1(int level, void **lp)
 {
     int i;
 
@@ -70,17 +70,17 @@ static void page_flush_tb_1(int level, void **lp)
         void **pp = *lp;
 
         for (i = 0; i < V_L2_SIZE; ++i) {
-            page_flush_tb_1(level - 1, pp + i);
+            tb_remove_all_1(level - 1, pp + i);
         }
     }
 }
 
-static void page_flush_tb(void)
+static void tb_remove_all(void)
 {
     int i, l1_sz = v_l1_size;
 
     for (i = 0; i < l1_sz; i++) {
-        page_flush_tb_1(v_l2_levels, l1_map + i);
+        tb_remove_all_1(v_l2_levels, l1_map + i);
     }
 }
 
@@ -101,7 +101,7 @@ static void do_tb_flush(CPUState *cpu, run_on_cpu_data tb_flush_count)
     }
 
     qht_reset_size(&tb_ctx.htable, CODE_GEN_HTABLE_SIZE);
-    page_flush_tb();
+    tb_remove_all();
 
     tcg_region_reset_all();
     /* XXX: flush processor icache at this point if cache flush is expensive */
-- 
2.34.1

From nobody Mon May 20 19:51:38 2024
From: Richard Henderson
To: qemu-devel@nongnu.org
Cc: peter.maydell@linaro.org,
    Alex Bennée
Subject: [PULL v2 03/14] accel/tcg: Use interval tree for TBs in user-only mode
Date: Tue, 20 Dec 2022 21:03:02 -0800
Message-Id: <20221221050313.2950701-4-richard.henderson@linaro.org>
In-Reply-To: <20221221050313.2950701-1-richard.henderson@linaro.org>
References: <20221221050313.2950701-1-richard.henderson@linaro.org>

Begin weaning user-only away from PageDesc.

Since, for user-only, all TB (and page) manipulation is done with a
single mutex, and there is no virtual/physical discontinuity to split
a TB across discontinuous pages, place all of the TBs into a single
IntervalTree. This makes it trivial to find all of the TBs
intersecting a range.

Retain the existing PageDesc + linked list implementation for
system mode. Move the portion of the implementation that overlaps
the new user-only code behind the common ifdef.
Reviewed-by: Alex Bennée
Signed-off-by: Richard Henderson
---
 accel/tcg/internal.h      |  16 +-
 include/exec/exec-all.h   |  43 ++++-
 accel/tcg/tb-maint.c      | 387 ++++++++++++++++++++++----------------
 accel/tcg/translate-all.c |   4 +-
 4 files changed, 279 insertions(+), 171 deletions(-)

diff --git a/accel/tcg/internal.h b/accel/tcg/internal.h
index cb13bade4f..bf1bf62e2a 100644
--- a/accel/tcg/internal.h
+++ b/accel/tcg/internal.h
@@ -24,14 +24,13 @@
 #endif
 
 typedef struct PageDesc {
-    /* list of TBs intersecting this ram page */
-    uintptr_t first_tb;
 #ifdef CONFIG_USER_ONLY
     unsigned long flags;
     void *target_data;
-#endif
-#ifdef CONFIG_SOFTMMU
+#else
     QemuSpin lock;
+    /* list of TBs intersecting this ram page */
+    uintptr_t first_tb;
 #endif
 } PageDesc;
 
@@ -69,9 +68,6 @@ static inline PageDesc *page_find(tb_page_addr_t index)
          tb; tb = (TranslationBlock *)tb->field[n], n = (uintptr_t)tb & 1, \
              tb = (TranslationBlock *)((uintptr_t)tb & ~1))
 
-#define PAGE_FOR_EACH_TB(pagedesc, tb, n) \
-    TB_FOR_EACH_TAGGED((pagedesc)->first_tb, tb, n, page_next)
-
 #define TB_FOR_EACH_JMP(head_tb, tb, n) \
     TB_FOR_EACH_TAGGED((head_tb)->jmp_list_head, tb, n, jmp_list_next)
 
@@ -89,6 +85,12 @@ void do_assert_page_locked(const PageDesc *pd, const char *file, int line);
 #endif
 void page_lock(PageDesc *pd);
 void page_unlock(PageDesc *pd);
+
+/* TODO: For now, still shared with translate-all.c for system mode.
 */
+typedef int PageForEachNext;
+#define PAGE_FOR_EACH_TB(start, end, pagedesc, tb, n) \
+    TB_FOR_EACH_TAGGED((pagedesc)->first_tb, tb, n, page_next)
+
 #endif
 #if !defined(CONFIG_USER_ONLY) && defined(CONFIG_DEBUG_TCG)
 void assert_no_pages_locked(void);
diff --git a/include/exec/exec-all.h b/include/exec/exec-all.h
index 9b7bfbf09a..25e11b0a8d 100644
--- a/include/exec/exec-all.h
+++ b/include/exec/exec-all.h
@@ -24,6 +24,7 @@
 #ifdef CONFIG_TCG
 #include "exec/cpu_ldst.h"
 #endif
+#include "qemu/interval-tree.h"
 
 /* allow to see translation results - the slowdown should be negligible, so we leave it */
 #define DEBUG_DISAS
@@ -559,11 +560,20 @@ struct TranslationBlock {
 
     struct tb_tc tc;
 
-    /* first and second physical page containing code. The lower bit
-       of the pointer tells the index in page_next[].
-       The list is protected by the TB's page('s) lock(s) */
+    /*
+     * Track tb_page_addr_t intervals that intersect this TB.
+     * For user-only, the virtual addresses are always contiguous,
+     * and we use a unified interval tree. For system, we use a
+     * linked list headed in each PageDesc. Within the list, the lsb
+     * of the previous pointer tells the index of page_next[], and the
+     * list is protected by the PageDesc lock(s).
+     */
+#ifdef CONFIG_USER_ONLY
+    IntervalTreeNode itree;
+#else
     uintptr_t page_next[2];
     tb_page_addr_t page_addr[2];
+#endif
 
     /* jmp_lock placed here to fill a 4-byte hole. Its documentation is below */
     QemuSpin jmp_lock;
@@ -619,24 +629,51 @@ static inline uint32_t tb_cflags(const TranslationBlock *tb)
 
 static inline tb_page_addr_t tb_page_addr0(const TranslationBlock *tb)
 {
+#ifdef CONFIG_USER_ONLY
+    return tb->itree.start;
+#else
     return tb->page_addr[0];
+#endif
 }
 
 static inline tb_page_addr_t tb_page_addr1(const TranslationBlock *tb)
 {
+#ifdef CONFIG_USER_ONLY
+    tb_page_addr_t next = tb->itree.last & TARGET_PAGE_MASK;
+    return next == (tb->itree.start & TARGET_PAGE_MASK) ?
-1 : next;
+#else
     return tb->page_addr[1];
+#endif
 }
 
 static inline void tb_set_page_addr0(TranslationBlock *tb,
                                      tb_page_addr_t addr)
 {
+#ifdef CONFIG_USER_ONLY
+    tb->itree.start = addr;
+    /*
+     * To begin, we record an interval of one byte. When the translation
+     * loop encounters a second page, the interval will be extended to
+     * include the first byte of the second page, which is sufficient to
+     * allow tb_page_addr1() above to work properly. The final corrected
+     * interval will be set by tb_page_add() from tb->size before the
+     * node is added to the interval tree.
+     */
+    tb->itree.last = addr;
+#else
     tb->page_addr[0] = addr;
+#endif
 }
 
 static inline void tb_set_page_addr1(TranslationBlock *tb,
                                      tb_page_addr_t addr)
 {
+#ifdef CONFIG_USER_ONLY
+    /* Extend the interval to the first byte of the second page. See above. */
+    tb->itree.last = addr;
+#else
     tb->page_addr[1] = addr;
+#endif
 }
 
 /* current cflags for hashing/comparison */
diff --git a/accel/tcg/tb-maint.c b/accel/tcg/tb-maint.c
index b5b90347ae..8da2c64d87 100644
--- a/accel/tcg/tb-maint.c
+++ b/accel/tcg/tb-maint.c
@@ -18,6 +18,7 @@
  */
 
 #include "qemu/osdep.h"
+#include "qemu/interval-tree.h"
 #include "exec/cputlb.h"
 #include "exec/log.h"
 #include "exec/exec-all.h"
@@ -50,6 +51,75 @@ void tb_htable_init(void)
     qht_init(&tb_ctx.htable, tb_cmp, CODE_GEN_HTABLE_SIZE, mode);
 }
 
+#ifdef CONFIG_USER_ONLY
+/*
+ * For user-only, since we are protecting all of memory with a single lock,
+ * and because the two pages of a TranslationBlock are always contiguous,
+ * use a single data structure to record all TranslationBlocks.
+ */
+static IntervalTreeRoot tb_root;
+
+static void tb_remove_all(void)
+{
+    assert_memory_lock();
+    memset(&tb_root, 0, sizeof(tb_root));
+}
+
+/* Call with mmap_lock held.
 */
+static void tb_record(TranslationBlock *tb, PageDesc *p1, PageDesc *p2)
+{
+    /* translator_loop() must have made all TB pages non-writable */
+    assert(!(p1->flags & PAGE_WRITE));
+    if (p2) {
+        assert(!(p2->flags & PAGE_WRITE));
+    }
+
+    assert_memory_lock();
+
+    tb->itree.last = tb->itree.start + tb->size - 1;
+    interval_tree_insert(&tb->itree, &tb_root);
+}
+
+/* Call with mmap_lock held. */
+static void tb_remove(TranslationBlock *tb)
+{
+    assert_memory_lock();
+    interval_tree_remove(&tb->itree, &tb_root);
+}
+
+/* TODO: For now, still shared with translate-all.c for system mode. */
+#define PAGE_FOR_EACH_TB(start, end, pagedesc, T, N) \
+    for (T = foreach_tb_first(start, end),           \
+         N = foreach_tb_next(T, start, end);         \
+         T != NULL;                                  \
+         T = N, N = foreach_tb_next(N, start, end))
+
+typedef TranslationBlock *PageForEachNext;
+
+static PageForEachNext foreach_tb_first(tb_page_addr_t start,
+                                        tb_page_addr_t end)
+{
+    IntervalTreeNode *n = interval_tree_iter_first(&tb_root, start, end - 1);
+    return n ? container_of(n, TranslationBlock, itree) : NULL;
+}
+
+static PageForEachNext foreach_tb_next(PageForEachNext tb,
+                                       tb_page_addr_t start,
+                                       tb_page_addr_t end)
+{
+    IntervalTreeNode *n;
+
+    if (tb) {
+        n = interval_tree_iter_next(&tb->itree, start, end - 1);
+        if (n) {
+            return container_of(n, TranslationBlock, itree);
+        }
+    }
+    return NULL;
+}
+
+#else
+
 /* Set to NULL all the 'first_tb' fields in all PageDescs. */
 static void tb_remove_all_1(int level, void **lp)
 {
@@ -84,6 +154,70 @@ static void tb_remove_all(void)
     }
 }
 
+/*
+ * Add the tb in the target page and protect it if necessary.
+ * Called with @p->lock held.
+ */
+static inline void tb_page_add(PageDesc *p, TranslationBlock *tb,
+                               unsigned int n)
+{
+    bool page_already_protected;
+
+    assert_page_locked(p);
+
+    tb->page_next[n] = p->first_tb;
+    page_already_protected = p->first_tb != 0;
+    p->first_tb = (uintptr_t)tb | n;
+
+    /*
+     * If some code is already present, then the pages are already
+     * protected. So we handle the case where only the first TB is
+     * allocated in a physical page.
+     */
+    if (!page_already_protected) {
+        tlb_protect_code(tb->page_addr[n] & TARGET_PAGE_MASK);
+    }
+}
+
+static void tb_record(TranslationBlock *tb, PageDesc *p1, PageDesc *p2)
+{
+    tb_page_add(p1, tb, 0);
+    if (unlikely(p2)) {
+        tb_page_add(p2, tb, 1);
+    }
+}
+
+static inline void tb_page_remove(PageDesc *pd, TranslationBlock *tb)
+{
+    TranslationBlock *tb1;
+    uintptr_t *pprev;
+    PageForEachNext n1;
+
+    assert_page_locked(pd);
+    pprev = &pd->first_tb;
+    PAGE_FOR_EACH_TB(unused, unused, pd, tb1, n1) {
+        if (tb1 == tb) {
+            *pprev = tb1->page_next[n1];
+            return;
+        }
+        pprev = &tb1->page_next[n1];
+    }
+    g_assert_not_reached();
+}
+
+static void tb_remove(TranslationBlock *tb)
+{
+    PageDesc *pd;
+
+    pd = page_find(tb->page_addr[0] >> TARGET_PAGE_BITS);
+    tb_page_remove(pd, tb);
+    if (unlikely(tb->page_addr[1] != -1)) {
+        pd = page_find(tb->page_addr[1] >> TARGET_PAGE_BITS);
+        tb_page_remove(pd, tb);
+    }
+}
+#endif /* CONFIG_USER_ONLY */
+
 /* flush all the translation blocks */
 static void do_tb_flush(CPUState *cpu, run_on_cpu_data tb_flush_count)
 {
@@ -128,28 +262,6 @@ void tb_flush(CPUState *cpu)
     }
 }
 
-/*
- * user-mode: call with mmap_lock held
- * !user-mode: call with @pd->lock held
- */
-static inline void tb_page_remove(PageDesc *pd, TranslationBlock *tb)
-{
-    TranslationBlock *tb1;
-    uintptr_t *pprev;
-    unsigned int n1;
-
-    assert_page_locked(pd);
-    pprev = &pd->first_tb;
-    PAGE_FOR_EACH_TB(pd, tb1, n1) {
-        if (tb1 == tb) {
-            *pprev = tb1->page_next[n1];
-            return;
-        }
-        pprev = &tb1->page_next[n1];
-    }
-    g_assert_not_reached();
-}
-
 /* remove @orig from its @n_orig-th jump list */
 static inline void tb_remove_from_jmp_list(TranslationBlock *orig, int n_orig)
 {
@@ -255,7 +367,6 @@ static void tb_jmp_cache_inval_tb(TranslationBlock *tb)
  */
 static void do_tb_phys_invalidate(TranslationBlock *tb, bool rm_from_page_list)
 {
-    PageDesc *p;
     uint32_t h;
     tb_page_addr_t phys_pc;
     uint32_t orig_cflags = tb_cflags(tb);
@@ -277,13 +388,7 @@ static void do_tb_phys_invalidate(TranslationBlock *tb, bool rm_from_page_list)
 
     /* remove the TB from the page list */
     if (rm_from_page_list) {
-        p = page_find(phys_pc >> TARGET_PAGE_BITS);
-        tb_page_remove(p, tb);
-        phys_pc = tb_page_addr1(tb);
-        if (phys_pc != -1) {
-            p = page_find(phys_pc >> TARGET_PAGE_BITS);
-            tb_page_remove(p, tb);
-        }
+        tb_remove(tb);
     }
 
     /* remove the TB from the hash list */
@@ -387,41 +492,6 @@ void tb_phys_invalidate(TranslationBlock *tb, tb_page_addr_t page_addr)
     }
 }
 
-/*
- * Add the tb in the target page and protect it if necessary.
- * Called with mmap_lock held for user-mode emulation.
- * Called with @p->lock held in !user-mode.
- */
-static inline void tb_page_add(PageDesc *p, TranslationBlock *tb,
-                               unsigned int n, tb_page_addr_t page_addr)
-{
-#ifndef CONFIG_USER_ONLY
-    bool page_already_protected;
-#endif
-
-    assert_page_locked(p);
-
-    tb->page_next[n] = p->first_tb;
-#ifndef CONFIG_USER_ONLY
-    page_already_protected = p->first_tb != (uintptr_t)NULL;
-#endif
-    p->first_tb = (uintptr_t)tb | n;
-
-#if defined(CONFIG_USER_ONLY)
-    /* translator_loop() must have made all TB pages non-writable */
-    assert(!(p->flags & PAGE_WRITE));
-#else
-    /*
-     * If some code is already present, then the pages are already
-     * protected. So we handle the case where only the first TB is
-     * allocated in a physical page.
-     */
-    if (!page_already_protected) {
-        tlb_protect_code(page_addr);
-    }
-#endif
-}
-
 /*
  * Add a new TB and link it to the physical page tables.
phys_page2 is
 * (-1) to indicate that only one page contains the TB.
@@ -453,10 +523,7 @@ TranslationBlock *tb_link_page(TranslationBlock *tb, tb_page_addr_t phys_pc,
      * we can only insert TBs that are fully initialized.
      */
     page_lock_pair(&p, phys_pc, &p2, phys_page2, true);
-    tb_page_add(p, tb, 0, phys_pc);
-    if (p2) {
-        tb_page_add(p2, tb, 1, phys_page2);
-    }
+    tb_record(tb, p, p2);
 
     /* add in the hash table */
     h = tb_hash_func(phys_pc, (TARGET_TB_PCREL ? 0 : tb_pc(tb)),
@@ -465,10 +532,7 @@ TranslationBlock *tb_link_page(TranslationBlock *tb, tb_page_addr_t phys_pc,
 
     /* remove TB from the page(s) if we couldn't insert it */
     if (unlikely(existing_tb)) {
-        tb_page_remove(p, tb);
-        if (p2) {
-            tb_page_remove(p2, tb);
-        }
+        tb_remove(tb);
         tb = existing_tb;
     }
 
@@ -479,6 +543,87 @@ TranslationBlock *tb_link_page(TranslationBlock *tb, tb_page_addr_t phys_pc,
     return tb;
 }
 
+#ifdef CONFIG_USER_ONLY
+/*
+ * Invalidate all TBs which intersect with the target address range.
+ * Called with mmap_lock held for user-mode emulation.
+ * NOTE: this function must not be called while a TB is running.
+ */
+void tb_invalidate_phys_range(tb_page_addr_t start, tb_page_addr_t end)
+{
+    TranslationBlock *tb;
+    PageForEachNext n;
+
+    assert_memory_lock();
+
+    PAGE_FOR_EACH_TB(start, end, unused, tb, n) {
+        tb_phys_invalidate__locked(tb);
+    }
+}
+
+/*
+ * Invalidate all TBs which intersect with the target address page @addr.
+ * Called with mmap_lock held for user-mode emulation
+ * NOTE: this function must not be called while a TB is running.
+ */
+void tb_invalidate_phys_page(tb_page_addr_t addr)
+{
+    tb_page_addr_t start, end;
+
+    start = addr & TARGET_PAGE_MASK;
+    end = start + TARGET_PAGE_SIZE;
+    tb_invalidate_phys_range(start, end);
+}
+
+/*
+ * Called with mmap_lock held. If pc is not 0 then it indicates the
+ * host PC of the faulting store instruction that caused this invalidate.
+ * Returns true if the caller needs to abort execution of the current
+ * TB (because it was modified by this store and the guest CPU has
+ * precise-SMC semantics).
+ */
+bool tb_invalidate_phys_page_unwind(tb_page_addr_t addr, uintptr_t pc)
+{
+    assert(pc != 0);
+#ifdef TARGET_HAS_PRECISE_SMC
+    assert_memory_lock();
+    {
+        TranslationBlock *current_tb = tcg_tb_lookup(pc);
+        bool current_tb_modified = false;
+        TranslationBlock *tb;
+        PageForEachNext n;
+
+        addr &= TARGET_PAGE_MASK;
+
+        PAGE_FOR_EACH_TB(addr, addr + TARGET_PAGE_SIZE, unused, tb, n) {
+            if (current_tb == tb &&
+                (tb_cflags(current_tb) & CF_COUNT_MASK) != 1) {
+                /*
+                 * If we are modifying the current TB, we must stop its
+                 * execution. We could be more precise by checking that
+                 * the modification is after the current PC, but it would
+                 * require a specialized function to partially restore
+                 * the CPU state.
+                 */
+                current_tb_modified = true;
+                cpu_restore_state_from_tb(current_cpu, current_tb, pc);
+            }
+            tb_phys_invalidate__locked(tb);
+        }
+
+        if (current_tb_modified) {
+            /* Force execution of one insn next time. */
+            CPUState *cpu = current_cpu;
+            cpu->cflags_next_tb = 1 | CF_NOIRQ | curr_cflags(current_cpu);
+            return true;
+        }
+    }
+#else
+    tb_invalidate_phys_page(addr);
+#endif /* TARGET_HAS_PRECISE_SMC */
+    return false;
+}
+#else
 /*
  * @p must be non-NULL.
  * user-mode: call with mmap_lock held.
  * !user-mode: call with @pd->lock held.
  */
@@ -492,22 +637,17 @@ tb_invalidate_phys_page_range__locked(struct page_collection *pages,
 {
     TranslationBlock *tb;
     tb_page_addr_t tb_start, tb_end;
-    int n;
+    PageForEachNext n;
 #ifdef TARGET_HAS_PRECISE_SMC
-    CPUState *cpu = current_cpu;
-    bool current_tb_not_found = retaddr != 0;
     bool current_tb_modified = false;
-    TranslationBlock *current_tb = NULL;
+    TranslationBlock *current_tb = retaddr ? tcg_tb_lookup(retaddr) : NULL;
 #endif /* TARGET_HAS_PRECISE_SMC */
 
-    assert_page_locked(p);
-
     /*
      * We remove all the TBs in the range [start, end[.
      * XXX: see if in some cases it could be faster to invalidate all the code
      */
-    PAGE_FOR_EACH_TB(p, tb, n) {
-        assert_page_locked(p);
+    PAGE_FOR_EACH_TB(start, end, p, tb, n) {
         /* NOTE: this is subtle as a TB may span two physical pages */
         if (n == 0) {
             /* NOTE: tb_end may be after the end of the page, but
@@ -521,11 +661,6 @@ tb_invalidate_phys_page_range__locked(struct page_collection *pages,
         }
         if (!(tb_end <= start || tb_start >= end)) {
 #ifdef TARGET_HAS_PRECISE_SMC
-            if (current_tb_not_found) {
-                current_tb_not_found = false;
-                /* now we have a real cpu fault */
-                current_tb = tcg_tb_lookup(retaddr);
-            }
             if (current_tb == tb &&
                 (tb_cflags(current_tb) & CF_COUNT_MASK) != 1) {
                 /*
@@ -536,25 +671,25 @@ tb_invalidate_phys_page_range__locked(struct page_collection *pages,
                  * restore the CPU state.
                  */
                 current_tb_modified = true;
-                cpu_restore_state_from_tb(cpu, current_tb, retaddr);
+                cpu_restore_state_from_tb(current_cpu, current_tb, retaddr);
             }
 #endif /* TARGET_HAS_PRECISE_SMC */
             tb_phys_invalidate__locked(tb);
         }
     }
-#if !defined(CONFIG_USER_ONLY)
+
     /* if no code remaining, no need to continue to use slow writes */
     if (!p->first_tb) {
         tlb_unprotect_code(start);
     }
-#endif
+
 #ifdef TARGET_HAS_PRECISE_SMC
     if (current_tb_modified) {
         page_collection_unlock(pages);
         /* Force execution of one insn next time.
         */
-        cpu->cflags_next_tb = 1 | CF_NOIRQ | curr_cflags(cpu);
+        current_cpu->cflags_next_tb = 1 | CF_NOIRQ | curr_cflags(current_cpu);
         mmap_unlock();
-        cpu_loop_exit_noexc(cpu);
+        cpu_loop_exit_noexc(current_cpu);
     }
 #endif
 }
@@ -571,8 +706,6 @@ void tb_invalidate_phys_page(tb_page_addr_t addr)
     tb_page_addr_t start, end;
     PageDesc *p;
 
-    assert_memory_lock();
-
     p = page_find(addr >> TARGET_PAGE_BITS);
     if (p == NULL) {
         return;
@@ -599,8 +732,6 @@ void tb_invalidate_phys_range(tb_page_addr_t start, tb_page_addr_t end)
     struct page_collection *pages;
     tb_page_addr_t next;
 
-    assert_memory_lock();
-
     pages = page_collection_lock(start, end);
     for (next = (start & TARGET_PAGE_MASK) + TARGET_PAGE_SIZE;
          start < end;
@@ -611,12 +742,12 @@ void tb_invalidate_phys_range(tb_page_addr_t start, tb_page_addr_t end)
         if (pd == NULL) {
             continue;
         }
+        assert_page_locked(pd);
         tb_invalidate_phys_page_range__locked(pages, pd, start, bound, 0);
     }
     page_collection_unlock(pages);
 }
 
-#ifdef CONFIG_SOFTMMU
 /*
  * len must be <= 8 and start must be a multiple of len.
  * Called via softmmu_template.h when code areas are written to with
@@ -630,8 +761,6 @@ void tb_invalidate_phys_page_fast(struct page_collection *pages,
 {
     PageDesc *p;
 
-    assert_memory_lock();
-
     p = page_find(start >> TARGET_PAGE_BITS);
     if (!p) {
         return;
@@ -641,64 +770,4 @@ void tb_invalidate_phys_page_fast(struct page_collection *pages,
     tb_invalidate_phys_page_range__locked(pages, p, start, start + len,
                                           retaddr);
 }
-#else
-/*
- * Called with mmap_lock held. If pc is not 0 then it indicates the
- * host PC of the faulting store instruction that caused this invalidate.
- * Returns true if the caller needs to abort execution of the current
- * TB (because it was modified by this store and the guest CPU has
- * precise-SMC semantics).
- */
-bool tb_invalidate_phys_page_unwind(tb_page_addr_t addr, uintptr_t pc)
-{
-    TranslationBlock *tb;
-    PageDesc *p;
-    int n;
-#ifdef TARGET_HAS_PRECISE_SMC
-    TranslationBlock *current_tb = NULL;
-    CPUState *cpu = current_cpu;
-    bool current_tb_modified = false;
-#endif
-
-    assert_memory_lock();
-
-    addr &= TARGET_PAGE_MASK;
-    p = page_find(addr >> TARGET_PAGE_BITS);
-    if (!p) {
-        return false;
-    }
-
-#ifdef TARGET_HAS_PRECISE_SMC
-    if (p->first_tb && pc != 0) {
-        current_tb = tcg_tb_lookup(pc);
-    }
-#endif
-    assert_page_locked(p);
-    PAGE_FOR_EACH_TB(p, tb, n) {
-#ifdef TARGET_HAS_PRECISE_SMC
-        if (current_tb == tb &&
-            (tb_cflags(current_tb) & CF_COUNT_MASK) != 1) {
-            /*
-             * If we are modifying the current TB, we must stop its execution.
-             * We could be more precise by checking that the modification is
-             * after the current PC, but it would require a specialized
-             * function to partially restore the CPU state.
-             */
-            current_tb_modified = true;
-            cpu_restore_state_from_tb(cpu, current_tb, pc);
-        }
-#endif /* TARGET_HAS_PRECISE_SMC */
-        tb_phys_invalidate(tb, addr);
-    }
-    p->first_tb = (uintptr_t)NULL;
-#ifdef TARGET_HAS_PRECISE_SMC
-    if (current_tb_modified) {
-        /* Force execution of one insn next time.
-         */
-        cpu->cflags_next_tb = 1 | CF_NOIRQ | curr_cflags(cpu);
-        return true;
-    }
-#endif
-
-    return false;
-}
-#endif
+#endif /* CONFIG_USER_ONLY */
diff --git a/accel/tcg/translate-all.c b/accel/tcg/translate-all.c
index ac3ee3740c..b964ea44d7 100644
--- a/accel/tcg/translate-all.c
+++ b/accel/tcg/translate-all.c
@@ -709,7 +709,7 @@ page_collection_lock(tb_page_addr_t start, tb_page_addr_t end)
 
     for (index = start; index <= end; index++) {
         TranslationBlock *tb;
-        int n;
+        PageForEachNext n;
 
         pd = page_find(index);
         if (pd == NULL) {
@@ -720,7 +720,7 @@ page_collection_lock(tb_page_addr_t start, tb_page_addr_t end)
             goto retry;
         }
         assert_page_locked(pd);
-        PAGE_FOR_EACH_TB(pd, tb, n) {
+        PAGE_FOR_EACH_TB(unused, unused, pd, tb, n) {
             if (page_trylock_add(set, tb_page_addr0(tb)) ||
                 (tb_page_addr1(tb) != -1 &&
                  page_trylock_add(set, tb_page_addr1(tb)))) {
-- 
2.34.1

From nobody Mon May 20 19:51:38 2024
From: Richard Henderson <richard.henderson@linaro.org>
To: qemu-devel@nongnu.org
Cc: peter.maydell@linaro.org,
	Alex Bennée
Subject: [PULL v2 04/14] accel/tcg: Use interval tree for TARGET_PAGE_DATA_SIZE
Date: Tue, 20 Dec 2022 21:03:03 -0800
Message-Id: <20221221050313.2950701-5-richard.henderson@linaro.org>
In-Reply-To: <20221221050313.2950701-1-richard.henderson@linaro.org>
References: <20221221050313.2950701-1-richard.henderson@linaro.org>

Continue weaning user-only away from PageDesc.

Use an interval tree to record target data.
Chunk the data, to minimize allocation overhead.
Reviewed-by: Alex Bennée
Signed-off-by: Richard Henderson
---
 accel/tcg/internal.h  |  1 -
 accel/tcg/user-exec.c | 99 ++++++++++++++++++++++++++++++++-----------
 2 files changed, 74 insertions(+), 26 deletions(-)

diff --git a/accel/tcg/internal.h b/accel/tcg/internal.h
index bf1bf62e2a..0f91ee939c 100644
--- a/accel/tcg/internal.h
+++ b/accel/tcg/internal.h
@@ -26,7 +26,6 @@
 typedef struct PageDesc {
 #ifdef CONFIG_USER_ONLY
     unsigned long flags;
-    void *target_data;
 #else
     QemuSpin lock;
     /* list of TBs intersecting this ram page */
diff --git a/accel/tcg/user-exec.c b/accel/tcg/user-exec.c
index fb7d6ee9e9..42a04bdb21 100644
--- a/accel/tcg/user-exec.c
+++ b/accel/tcg/user-exec.c
@@ -210,47 +210,96 @@ tb_page_addr_t get_page_addr_code_hostp(CPUArchState *env, target_ulong addr,
     return addr;
 }
 
+#ifdef TARGET_PAGE_DATA_SIZE
+/*
+ * Allocate chunks of target data together. For the only current user,
+ * if we allocate one hunk per page, we have overhead of 40/128 or 40%.
+ * Therefore, allocate memory for 64 pages at a time for overhead < 1%.
+ */
+#define TPD_PAGES 64
+#define TBD_MASK (TARGET_PAGE_MASK * TPD_PAGES)
+
+typedef struct TargetPageDataNode {
+    IntervalTreeNode itree;
+    char data[TPD_PAGES][TARGET_PAGE_DATA_SIZE] __attribute__((aligned));
+} TargetPageDataNode;
+
+static IntervalTreeRoot targetdata_root;
+
 void page_reset_target_data(target_ulong start, target_ulong end)
 {
-#ifdef TARGET_PAGE_DATA_SIZE
-    target_ulong addr, len;
+    IntervalTreeNode *n, *next;
+    target_ulong last;
 
-    /*
-     * This function should never be called with addresses outside the
-     * guest address space. If this assert fires, it probably indicates
-     * a missing call to h2g_valid.
-     */
-    assert(end - 1 <= GUEST_ADDR_MAX);
-    assert(start < end);
     assert_memory_lock();
 
     start = start & TARGET_PAGE_MASK;
-    end = TARGET_PAGE_ALIGN(end);
+    last = TARGET_PAGE_ALIGN(end) - 1;
 
-    for (addr = start, len = end - start;
-         len != 0;
-         len -= TARGET_PAGE_SIZE, addr += TARGET_PAGE_SIZE) {
-        PageDesc *p = page_find_alloc(addr >> TARGET_PAGE_BITS, 1);
+    for (n = interval_tree_iter_first(&targetdata_root, start, last),
+         next = n ? interval_tree_iter_next(n, start, last) : NULL;
+         n != NULL;
+         n = next,
+         next = next ? interval_tree_iter_next(n, start, last) : NULL) {
+        target_ulong n_start, n_last, p_ofs, p_len;
+        TargetPageDataNode *t;
 
-        g_free(p->target_data);
-        p->target_data = NULL;
+        if (n->start >= start && n->last <= last) {
+            interval_tree_remove(n, &targetdata_root);
+            g_free(n);
+            continue;
+        }
+
+        if (n->start < start) {
+            n_start = start;
+            p_ofs = (start - n->start) >> TARGET_PAGE_BITS;
+        } else {
+            n_start = n->start;
+            p_ofs = 0;
+        }
+        n_last = MIN(last, n->last);
+        p_len = (n_last + 1 - n_start) >> TARGET_PAGE_BITS;
+
+        t = container_of(n, TargetPageDataNode, itree);
+        memset(t->data[p_ofs], 0, p_len * TARGET_PAGE_DATA_SIZE);
     }
-#endif
 }
 
-#ifdef TARGET_PAGE_DATA_SIZE
 void *page_get_target_data(target_ulong address)
 {
-    PageDesc *p = page_find(address >> TARGET_PAGE_BITS);
-    void *ret = p->target_data;
+    IntervalTreeNode *n;
+    TargetPageDataNode *t;
+    target_ulong page, region;
 
-    if (!ret) {
-        ret = g_malloc0(TARGET_PAGE_DATA_SIZE);
-        p->target_data = ret;
+    page = address & TARGET_PAGE_MASK;
+    region = address & TBD_MASK;
+
+    n = interval_tree_iter_first(&targetdata_root, page, page);
+    if (!n) {
+        /*
+         * See util/interval-tree.c re lockless lookups: no false positives
+         * but there are false negatives. If we find nothing, retry with
+         * the mmap lock acquired. We also need the lock for the
+         * allocation + insert.
+         */
+        mmap_lock();
+        n = interval_tree_iter_first(&targetdata_root, page, page);
+        if (!n) {
+            t = g_new0(TargetPageDataNode, 1);
+            n = &t->itree;
+            n->start = region;
+            n->last = region | ~TBD_MASK;
+            interval_tree_insert(n, &targetdata_root);
+        }
+        mmap_unlock();
     }
-    return ret;
+
+    t = container_of(n, TargetPageDataNode, itree);
+    return t->data[(page - region) >> TARGET_PAGE_BITS];
 }
-#endif
+#else
+void page_reset_target_data(target_ulong start, target_ulong end) { }
+#endif /* TARGET_PAGE_DATA_SIZE */
 
 /* The softmmu versions of these helpers are in cputlb.c. */
 
-- 
2.34.1

From nobody Mon May 20 19:51:38 2024
From: Richard Henderson <richard.henderson@linaro.org>
To: qemu-devel@nongnu.org
Cc: peter.maydell@linaro.org, Warner Losh, Alex Bennée, Philippe Mathieu-Daudé
Subject: [PULL v2 05/14] accel/tcg: Drop PAGE_RESERVED for CONFIG_BSD
Date: Tue, 20 Dec 2022 21:03:04 -0800
Message-Id: <20221221050313.2950701-6-richard.henderson@linaro.org>
In-Reply-To: <20221221050313.2950701-1-richard.henderson@linaro.org>
References: <20221221050313.2950701-1-richard.henderson@linaro.org>
Make bsd-user match linux-user in not marking host pages as reserved.
This isn't especially effective anyway, as it doesn't take into account
any heap memory that qemu may allocate after startup.

Reviewed-by: Warner Losh
Tested-by: Warner Losh
Reviewed-by: Alex Bennée
Tested-by: Alex Bennée
Reviewed-by: Philippe Mathieu-Daudé
Signed-off-by: Richard Henderson
---
 accel/tcg/translate-all.c | 65 ---------------------------------------
 1 file changed, 65 deletions(-)

diff --git a/accel/tcg/translate-all.c b/accel/tcg/translate-all.c
index b964ea44d7..48e9d70b4e 100644
--- a/accel/tcg/translate-all.c
+++ b/accel/tcg/translate-all.c
@@ -354,71 +354,6 @@ void page_init(void)
 {
     page_size_init();
     page_table_config_init();
-
-#if defined(CONFIG_BSD) && defined(CONFIG_USER_ONLY)
-    {
-#ifdef HAVE_KINFO_GETVMMAP
-        struct kinfo_vmentry *freep;
-        int i, cnt;
-
-        freep = kinfo_getvmmap(getpid(), &cnt);
-        if (freep) {
-            mmap_lock();
-            for (i = 0; i < cnt; i++) {
-                unsigned long startaddr, endaddr;
-
-                startaddr = freep[i].kve_start;
-                endaddr = freep[i].kve_end;
-                if (h2g_valid(startaddr)) {
-                    startaddr = h2g(startaddr) & TARGET_PAGE_MASK;
-
-                    if (h2g_valid(endaddr)) {
-                        endaddr = h2g(endaddr);
-                        page_set_flags(startaddr, endaddr, PAGE_RESERVED);
-                    } else {
-#if TARGET_ABI_BITS <= L1_MAP_ADDR_SPACE_BITS
-                        endaddr = ~0ul;
-                        page_set_flags(startaddr, endaddr, PAGE_RESERVED);
-#endif
-                    }
-                }
-            }
-            free(freep);
-            mmap_unlock();
-        }
-#else
-        FILE *f;
-
-        last_brk = (unsigned long)sbrk(0);
-
-        f = fopen("/compat/linux/proc/self/maps", "r");
-        if (f) {
-            mmap_lock();
-
-            do {
-                unsigned long startaddr, endaddr;
-                int n;
-
-                n = fscanf(f, "%lx-%lx %*[^\n]\n", &startaddr, &endaddr);
-
-                if (n == 2 && h2g_valid(startaddr)) {
-                    startaddr = h2g(startaddr) & TARGET_PAGE_MASK;
-
-                    if (h2g_valid(endaddr)) {
-                        endaddr = h2g(endaddr);
-                    } else {
-                        endaddr = ~0ul;
-                    }
-                    page_set_flags(startaddr, endaddr, PAGE_RESERVED);
-                }
-            } while (!feof(f));
-
-            fclose(f);
-            mmap_unlock();
-        }
-#endif
-    }
-#endif
 }
 
 PageDesc *page_find_alloc(tb_page_addr_t index, bool alloc)
-- 
2.34.1

From nobody Mon May 20 19:51:38 2024
From: Richard Henderson <richard.henderson@linaro.org>
To: qemu-devel@nongnu.org
Cc: peter.maydell@linaro.org, Philippe Mathieu-Daudé, Alex Bennée
Subject: [PULL v2 06/14] accel/tcg: Move page_{get,set}_flags to user-exec.c
Date: Tue, 20 Dec 2022 21:03:05 -0800
Message-Id: <20221221050313.2950701-7-richard.henderson@linaro.org>
In-Reply-To: <20221221050313.2950701-1-richard.henderson@linaro.org>
References: <20221221050313.2950701-1-richard.henderson@linaro.org>
This page tracking implementation is specific to user-only, since the
system softmmu version is in cputlb.c. Move it out of translate-all.c
to user-exec.c.

Reviewed-by: Philippe Mathieu-Daudé
Reviewed-by: Alex Bennée
Signed-off-by: Richard Henderson
---
 accel/tcg/internal.h      |  17 ++
 accel/tcg/translate-all.c | 350 --------------------------------------
 accel/tcg/user-exec.c     | 346 +++++++++++++++++++++++++++++++++++++
 3 files changed, 363 insertions(+), 350 deletions(-)

diff --git a/accel/tcg/internal.h b/accel/tcg/internal.h
index 0f91ee939c..ddd1fa6bdc 100644
--- a/accel/tcg/internal.h
+++ b/accel/tcg/internal.h
@@ -33,6 +33,23 @@ typedef struct PageDesc {
 #endif
 } PageDesc;
 
+/*
+ * In system mode we want L1_MAP to be based on ram offsets,
+ * while in user mode we want it to be based on virtual addresses.
+ *
+ * TODO: For user mode, see the caveat re host vs guest virtual
+ * address spaces near GUEST_ADDR_MAX.
+ */
+#if !defined(CONFIG_USER_ONLY)
+#if HOST_LONG_BITS < TARGET_PHYS_ADDR_SPACE_BITS
+# define L1_MAP_ADDR_SPACE_BITS HOST_LONG_BITS
+#else
+# define L1_MAP_ADDR_SPACE_BITS TARGET_PHYS_ADDR_SPACE_BITS
+#endif
+#else
+# define L1_MAP_ADDR_SPACE_BITS MIN(HOST_LONG_BITS, TARGET_ABI_BITS)
+#endif
+
 /* Size of the L2 (and L3, etc) page tables.
  */
 #define V_L2_BITS 10
 #define V_L2_SIZE (1 << V_L2_BITS)
diff --git a/accel/tcg/translate-all.c b/accel/tcg/translate-all.c
index 48e9d70b4e..cc3ec36d7a 100644
--- a/accel/tcg/translate-all.c
+++ b/accel/tcg/translate-all.c
@@ -109,23 +109,6 @@ struct page_collection {
     struct page_entry *max;
 };
 
-/*
- * In system mode we want L1_MAP to be based on ram offsets,
- * while in user mode we want it to be based on virtual addresses.
- *
- * TODO: For user mode, see the caveat re host vs guest virtual
- * address spaces near GUEST_ADDR_MAX.
- */
-#if !defined(CONFIG_USER_ONLY)
-#if HOST_LONG_BITS < TARGET_PHYS_ADDR_SPACE_BITS
-# define L1_MAP_ADDR_SPACE_BITS HOST_LONG_BITS
-#else
-# define L1_MAP_ADDR_SPACE_BITS TARGET_PHYS_ADDR_SPACE_BITS
-#endif
-#else
-# define L1_MAP_ADDR_SPACE_BITS MIN(HOST_LONG_BITS, TARGET_ABI_BITS)
-#endif
-
 /* Make sure all possible CPU event bits fit in tb->trace_vcpu_dstate */
 QEMU_BUILD_BUG_ON(CPU_TRACE_DSTATE_MAX_EVENTS >
                   sizeof_field(TranslationBlock, trace_vcpu_dstate)
@@ -1170,339 +1153,6 @@ void cpu_interrupt(CPUState *cpu, int mask)
     qatomic_set(&cpu_neg(cpu)->icount_decr.u16.high, -1);
 }
 
-/*
- * Walks guest process memory "regions" one by one
- * and calls callback function 'fn' for each region.
- */
-struct walk_memory_regions_data {
-    walk_memory_regions_fn fn;
-    void *priv;
-    target_ulong start;
-    int prot;
-};
-
-static int walk_memory_regions_end(struct walk_memory_regions_data *data,
-                                   target_ulong end, int new_prot)
-{
-    if (data->start != -1u) {
-        int rc = data->fn(data->priv, data->start, end, data->prot);
-        if (rc != 0) {
-            return rc;
-        }
-    }
-
-    data->start = (new_prot ?
end : -1u); - data->prot =3D new_prot; - - return 0; -} - -static int walk_memory_regions_1(struct walk_memory_regions_data *data, - target_ulong base, int level, void **lp) -{ - target_ulong pa; - int i, rc; - - if (*lp =3D=3D NULL) { - return walk_memory_regions_end(data, base, 0); - } - - if (level =3D=3D 0) { - PageDesc *pd =3D *lp; - - for (i =3D 0; i < V_L2_SIZE; ++i) { - int prot =3D pd[i].flags; - - pa =3D base | (i << TARGET_PAGE_BITS); - if (prot !=3D data->prot) { - rc =3D walk_memory_regions_end(data, pa, prot); - if (rc !=3D 0) { - return rc; - } - } - } - } else { - void **pp =3D *lp; - - for (i =3D 0; i < V_L2_SIZE; ++i) { - pa =3D base | ((target_ulong)i << - (TARGET_PAGE_BITS + V_L2_BITS * level)); - rc =3D walk_memory_regions_1(data, pa, level - 1, pp + i); - if (rc !=3D 0) { - return rc; - } - } - } - - return 0; -} - -int walk_memory_regions(void *priv, walk_memory_regions_fn fn) -{ - struct walk_memory_regions_data data; - uintptr_t i, l1_sz =3D v_l1_size; - - data.fn =3D fn; - data.priv =3D priv; - data.start =3D -1u; - data.prot =3D 0; - - for (i =3D 0; i < l1_sz; i++) { - target_ulong base =3D i << (v_l1_shift + TARGET_PAGE_BITS); - int rc =3D walk_memory_regions_1(&data, base, v_l2_levels, l1_map = + i); - if (rc !=3D 0) { - return rc; - } - } - - return walk_memory_regions_end(&data, 0, 0); -} - -static int dump_region(void *priv, target_ulong start, - target_ulong end, unsigned long prot) -{ - FILE *f =3D (FILE *)priv; - - (void) fprintf(f, TARGET_FMT_lx"-"TARGET_FMT_lx - " "TARGET_FMT_lx" %c%c%c\n", - start, end, end - start, - ((prot & PAGE_READ) ? 'r' : '-'), - ((prot & PAGE_WRITE) ? 'w' : '-'), - ((prot & PAGE_EXEC) ? 
'x' : '-')); - - return 0; -} - -/* dump memory mappings */ -void page_dump(FILE *f) -{ - const int length =3D sizeof(target_ulong) * 2; - (void) fprintf(f, "%-*s %-*s %-*s %s\n", - length, "start", length, "end", length, "size", "prot"); - walk_memory_regions(f, dump_region); -} - -int page_get_flags(target_ulong address) -{ - PageDesc *p; - - p =3D page_find(address >> TARGET_PAGE_BITS); - if (!p) { - return 0; - } - return p->flags; -} - -/* - * Allow the target to decide if PAGE_TARGET_[12] may be reset. - * By default, they are not kept. - */ -#ifndef PAGE_TARGET_STICKY -#define PAGE_TARGET_STICKY 0 -#endif -#define PAGE_STICKY (PAGE_ANON | PAGE_PASSTHROUGH | PAGE_TARGET_STICKY) - -/* Modify the flags of a page and invalidate the code if necessary. - The flag PAGE_WRITE_ORG is positioned automatically depending - on PAGE_WRITE. The mmap_lock should already be held. */ -void page_set_flags(target_ulong start, target_ulong end, int flags) -{ - target_ulong addr, len; - bool reset, inval_tb =3D false; - - /* This function should never be called with addresses outside the - guest address space. If this assert fires, it probably indicates - a missing call to h2g_valid. */ - assert(end - 1 <=3D GUEST_ADDR_MAX); - assert(start < end); - /* Only set PAGE_ANON with new mappings. */ - assert(!(flags & PAGE_ANON) || (flags & PAGE_RESET)); - assert_memory_lock(); - - start =3D start & TARGET_PAGE_MASK; - end =3D TARGET_PAGE_ALIGN(end); - - if (flags & PAGE_WRITE) { - flags |=3D PAGE_WRITE_ORG; - } - reset =3D !(flags & PAGE_VALID) || (flags & PAGE_RESET); - if (reset) { - page_reset_target_data(start, end); - } - flags &=3D ~PAGE_RESET; - - for (addr =3D start, len =3D end - start; - len !=3D 0; - len -=3D TARGET_PAGE_SIZE, addr +=3D TARGET_PAGE_SIZE) { - PageDesc *p =3D page_find_alloc(addr >> TARGET_PAGE_BITS, true); - - /* - * If the page was executable, but is reset, or is no longer - * executable, or has become writable, then invalidate any code. 
- */ - if ((p->flags & PAGE_EXEC) - && (reset || - !(flags & PAGE_EXEC) || - (flags & ~p->flags & PAGE_WRITE))) { - inval_tb =3D true; - } - /* Using mprotect on a page does not change sticky bits. */ - p->flags =3D (reset ? 0 : p->flags & PAGE_STICKY) | flags; - } - - if (inval_tb) { - tb_invalidate_phys_range(start, end); - } -} - -int page_check_range(target_ulong start, target_ulong len, int flags) -{ - PageDesc *p; - target_ulong end; - target_ulong addr; - - /* This function should never be called with addresses outside the - guest address space. If this assert fires, it probably indicates - a missing call to h2g_valid. */ - if (TARGET_ABI_BITS > L1_MAP_ADDR_SPACE_BITS) { - assert(start < ((target_ulong)1 << L1_MAP_ADDR_SPACE_BITS)); - } - - if (len =3D=3D 0) { - return 0; - } - if (start + len - 1 < start) { - /* We've wrapped around. */ - return -1; - } - - /* must do before we loose bits in the next step */ - end =3D TARGET_PAGE_ALIGN(start + len); - start =3D start & TARGET_PAGE_MASK; - - for (addr =3D start, len =3D end - start; - len !=3D 0; - len -=3D TARGET_PAGE_SIZE, addr +=3D TARGET_PAGE_SIZE) { - p =3D page_find(addr >> TARGET_PAGE_BITS); - if (!p) { - return -1; - } - if (!(p->flags & PAGE_VALID)) { - return -1; - } - - if ((flags & PAGE_READ) && !(p->flags & PAGE_READ)) { - return -1; - } - if (flags & PAGE_WRITE) { - if (!(p->flags & PAGE_WRITE_ORG)) { - return -1; - } - /* unprotect the page if it was put read-only because it - contains translated code */ - if (!(p->flags & PAGE_WRITE)) { - if (!page_unprotect(addr, 0)) { - return -1; - } - } - } - } - return 0; -} - -void page_protect(tb_page_addr_t page_addr) -{ - target_ulong addr; - PageDesc *p; - int prot; - - p =3D page_find(page_addr >> TARGET_PAGE_BITS); - if (p && (p->flags & PAGE_WRITE)) { - /* - * Force the host page as non writable (writes will have a page fa= ult + - * mprotect overhead). 
- */ - page_addr &=3D qemu_host_page_mask; - prot =3D 0; - for (addr =3D page_addr; addr < page_addr + qemu_host_page_size; - addr +=3D TARGET_PAGE_SIZE) { - - p =3D page_find(addr >> TARGET_PAGE_BITS); - if (!p) { - continue; - } - prot |=3D p->flags; - p->flags &=3D ~PAGE_WRITE; - } - mprotect(g2h_untagged(page_addr), qemu_host_page_size, - (prot & PAGE_BITS) & ~PAGE_WRITE); - } -} - -/* called from signal handler: invalidate the code and unprotect the - * page. Return 0 if the fault was not handled, 1 if it was handled, - * and 2 if it was handled but the caller must cause the TB to be - * immediately exited. (We can only return 2 if the 'pc' argument is - * non-zero.) - */ -int page_unprotect(target_ulong address, uintptr_t pc) -{ - unsigned int prot; - bool current_tb_invalidated; - PageDesc *p; - target_ulong host_start, host_end, addr; - - /* Technically this isn't safe inside a signal handler. However we - know this only ever happens in a synchronous SEGV handler, so in - practice it seems to be ok. */ - mmap_lock(); - - p =3D page_find(address >> TARGET_PAGE_BITS); - if (!p) { - mmap_unlock(); - return 0; - } - - /* if the page was really writable, then we change its - protection back to writable */ - if (p->flags & PAGE_WRITE_ORG) { - current_tb_invalidated =3D false; - if (p->flags & PAGE_WRITE) { - /* If the page is actually marked WRITE then assume this is be= cause - * this thread raced with another one which got here first and - * set the page to PAGE_WRITE and did the TB invalidate for us. 
- */ -#ifdef TARGET_HAS_PRECISE_SMC - TranslationBlock *current_tb =3D tcg_tb_lookup(pc); - if (current_tb) { - current_tb_invalidated =3D tb_cflags(current_tb) & CF_INVA= LID; - } -#endif - } else { - host_start =3D address & qemu_host_page_mask; - host_end =3D host_start + qemu_host_page_size; - - prot =3D 0; - for (addr =3D host_start; addr < host_end; addr +=3D TARGET_PA= GE_SIZE) { - p =3D page_find(addr >> TARGET_PAGE_BITS); - p->flags |=3D PAGE_WRITE; - prot |=3D p->flags; - - /* and since the content will be modified, we must invalid= ate - the corresponding translated code. */ - current_tb_invalidated |=3D - tb_invalidate_phys_page_unwind(addr, pc); - } - mprotect((void *)g2h_untagged(host_start), qemu_host_page_size, - prot & PAGE_BITS); - } - mmap_unlock(); - /* If current TB was invalidated return to main loop */ - return current_tb_invalidated ? 2 : 1; - } - mmap_unlock(); - return 0; -} #endif /* CONFIG_USER_ONLY */ =20 /* diff --git a/accel/tcg/user-exec.c b/accel/tcg/user-exec.c index 42a04bdb21..22ef780900 100644 --- a/accel/tcg/user-exec.c +++ b/accel/tcg/user-exec.c @@ -135,6 +135,352 @@ bool handle_sigsegv_accerr_write(CPUState *cpu, sigse= t_t *old_set, } } =20 +/* + * Walks guest process memory "regions" one by one + * and calls callback function 'fn' for each region. + */ +struct walk_memory_regions_data { + walk_memory_regions_fn fn; + void *priv; + target_ulong start; + int prot; +}; + +static int walk_memory_regions_end(struct walk_memory_regions_data *data, + target_ulong end, int new_prot) +{ + if (data->start !=3D -1u) { + int rc =3D data->fn(data->priv, data->start, end, data->prot); + if (rc !=3D 0) { + return rc; + } + } + + data->start =3D (new_prot ? 
end : -1u); + data->prot =3D new_prot; + + return 0; +} + +static int walk_memory_regions_1(struct walk_memory_regions_data *data, + target_ulong base, int level, void **lp) +{ + target_ulong pa; + int i, rc; + + if (*lp =3D=3D NULL) { + return walk_memory_regions_end(data, base, 0); + } + + if (level =3D=3D 0) { + PageDesc *pd =3D *lp; + + for (i =3D 0; i < V_L2_SIZE; ++i) { + int prot =3D pd[i].flags; + + pa =3D base | (i << TARGET_PAGE_BITS); + if (prot !=3D data->prot) { + rc =3D walk_memory_regions_end(data, pa, prot); + if (rc !=3D 0) { + return rc; + } + } + } + } else { + void **pp =3D *lp; + + for (i =3D 0; i < V_L2_SIZE; ++i) { + pa =3D base | ((target_ulong)i << + (TARGET_PAGE_BITS + V_L2_BITS * level)); + rc =3D walk_memory_regions_1(data, pa, level - 1, pp + i); + if (rc !=3D 0) { + return rc; + } + } + } + + return 0; +} + +int walk_memory_regions(void *priv, walk_memory_regions_fn fn) +{ + struct walk_memory_regions_data data; + uintptr_t i, l1_sz =3D v_l1_size; + + data.fn =3D fn; + data.priv =3D priv; + data.start =3D -1u; + data.prot =3D 0; + + for (i =3D 0; i < l1_sz; i++) { + target_ulong base =3D i << (v_l1_shift + TARGET_PAGE_BITS); + int rc =3D walk_memory_regions_1(&data, base, v_l2_levels, l1_map = + i); + if (rc !=3D 0) { + return rc; + } + } + + return walk_memory_regions_end(&data, 0, 0); +} + +static int dump_region(void *priv, target_ulong start, + target_ulong end, unsigned long prot) +{ + FILE *f =3D (FILE *)priv; + + (void) fprintf(f, TARGET_FMT_lx"-"TARGET_FMT_lx + " "TARGET_FMT_lx" %c%c%c\n", + start, end, end - start, + ((prot & PAGE_READ) ? 'r' : '-'), + ((prot & PAGE_WRITE) ? 'w' : '-'), + ((prot & PAGE_EXEC) ? 
'x' : '-')); + + return 0; +} + +/* dump memory mappings */ +void page_dump(FILE *f) +{ + const int length =3D sizeof(target_ulong) * 2; + (void) fprintf(f, "%-*s %-*s %-*s %s\n", + length, "start", length, "end", length, "size", "prot"); + walk_memory_regions(f, dump_region); +} + +int page_get_flags(target_ulong address) +{ + PageDesc *p; + + p =3D page_find(address >> TARGET_PAGE_BITS); + if (!p) { + return 0; + } + return p->flags; +} + +/* + * Allow the target to decide if PAGE_TARGET_[12] may be reset. + * By default, they are not kept. + */ +#ifndef PAGE_TARGET_STICKY +#define PAGE_TARGET_STICKY 0 +#endif +#define PAGE_STICKY (PAGE_ANON | PAGE_PASSTHROUGH | PAGE_TARGET_STICKY) + +/* + * Modify the flags of a page and invalidate the code if necessary. + * The flag PAGE_WRITE_ORG is positioned automatically depending + * on PAGE_WRITE. The mmap_lock should already be held. + */ +void page_set_flags(target_ulong start, target_ulong end, int flags) +{ + target_ulong addr, len; + bool reset, inval_tb =3D false; + + /* This function should never be called with addresses outside the + guest address space. If this assert fires, it probably indicates + a missing call to h2g_valid. */ + assert(end - 1 <=3D GUEST_ADDR_MAX); + assert(start < end); + /* Only set PAGE_ANON with new mappings. */ + assert(!(flags & PAGE_ANON) || (flags & PAGE_RESET)); + assert_memory_lock(); + + start =3D start & TARGET_PAGE_MASK; + end =3D TARGET_PAGE_ALIGN(end); + + if (flags & PAGE_WRITE) { + flags |=3D PAGE_WRITE_ORG; + } + reset =3D !(flags & PAGE_VALID) || (flags & PAGE_RESET); + if (reset) { + page_reset_target_data(start, end); + } + flags &=3D ~PAGE_RESET; + + for (addr =3D start, len =3D end - start; + len !=3D 0; + len -=3D TARGET_PAGE_SIZE, addr +=3D TARGET_PAGE_SIZE) { + PageDesc *p =3D page_find_alloc(addr >> TARGET_PAGE_BITS, true); + + /* + * If the page was executable, but is reset, or is no longer + * executable, or has become writable, then invalidate any code. 
+ */ + if ((p->flags & PAGE_EXEC) + && (reset || + !(flags & PAGE_EXEC) || + (flags & ~p->flags & PAGE_WRITE))) { + inval_tb =3D true; + } + /* Using mprotect on a page does not change sticky bits. */ + p->flags =3D (reset ? 0 : p->flags & PAGE_STICKY) | flags; + } + + if (inval_tb) { + tb_invalidate_phys_range(start, end); + } +} + +int page_check_range(target_ulong start, target_ulong len, int flags) +{ + PageDesc *p; + target_ulong end; + target_ulong addr; + + /* + * This function should never be called with addresses outside the + * guest address space. If this assert fires, it probably indicates + * a missing call to h2g_valid. + */ + if (TARGET_ABI_BITS > L1_MAP_ADDR_SPACE_BITS) { + assert(start < ((target_ulong)1 << L1_MAP_ADDR_SPACE_BITS)); + } + + if (len =3D=3D 0) { + return 0; + } + if (start + len - 1 < start) { + /* We've wrapped around. */ + return -1; + } + + /* must do before we loose bits in the next step */ + end =3D TARGET_PAGE_ALIGN(start + len); + start =3D start & TARGET_PAGE_MASK; + + for (addr =3D start, len =3D end - start; + len !=3D 0; + len -=3D TARGET_PAGE_SIZE, addr +=3D TARGET_PAGE_SIZE) { + p =3D page_find(addr >> TARGET_PAGE_BITS); + if (!p) { + return -1; + } + if (!(p->flags & PAGE_VALID)) { + return -1; + } + + if ((flags & PAGE_READ) && !(p->flags & PAGE_READ)) { + return -1; + } + if (flags & PAGE_WRITE) { + if (!(p->flags & PAGE_WRITE_ORG)) { + return -1; + } + /* unprotect the page if it was put read-only because it + contains translated code */ + if (!(p->flags & PAGE_WRITE)) { + if (!page_unprotect(addr, 0)) { + return -1; + } + } + } + } + return 0; +} + +void page_protect(tb_page_addr_t page_addr) +{ + target_ulong addr; + PageDesc *p; + int prot; + + p =3D page_find(page_addr >> TARGET_PAGE_BITS); + if (p && (p->flags & PAGE_WRITE)) { + /* + * Force the host page as non writable (writes will have a page fa= ult + + * mprotect overhead). 
+ */ + page_addr &=3D qemu_host_page_mask; + prot =3D 0; + for (addr =3D page_addr; addr < page_addr + qemu_host_page_size; + addr +=3D TARGET_PAGE_SIZE) { + + p =3D page_find(addr >> TARGET_PAGE_BITS); + if (!p) { + continue; + } + prot |=3D p->flags; + p->flags &=3D ~PAGE_WRITE; + } + mprotect(g2h_untagged(page_addr), qemu_host_page_size, + (prot & PAGE_BITS) & ~PAGE_WRITE); + } +} + +/* + * Called from signal handler: invalidate the code and unprotect the + * page. Return 0 if the fault was not handled, 1 if it was handled, + * and 2 if it was handled but the caller must cause the TB to be + * immediately exited. (We can only return 2 if the 'pc' argument is + * non-zero.) + */ +int page_unprotect(target_ulong address, uintptr_t pc) +{ + unsigned int prot; + bool current_tb_invalidated; + PageDesc *p; + target_ulong host_start, host_end, addr; + + /* + * Technically this isn't safe inside a signal handler. However we + * know this only ever happens in a synchronous SEGV handler, so in + * practice it seems to be ok. + */ + mmap_lock(); + + p =3D page_find(address >> TARGET_PAGE_BITS); + if (!p) { + mmap_unlock(); + return 0; + } + + /* + * If the page was really writable, then we change its + * protection back to writable. + */ + if (p->flags & PAGE_WRITE_ORG) { + current_tb_invalidated =3D false; + if (p->flags & PAGE_WRITE) { + /* + * If the page is actually marked WRITE then assume this is be= cause + * this thread raced with another one which got here first and + * set the page to PAGE_WRITE and did the TB invalidate for us. 
+ */ +#ifdef TARGET_HAS_PRECISE_SMC + TranslationBlock *current_tb =3D tcg_tb_lookup(pc); + if (current_tb) { + current_tb_invalidated =3D tb_cflags(current_tb) & CF_INVA= LID; + } +#endif + } else { + host_start =3D address & qemu_host_page_mask; + host_end =3D host_start + qemu_host_page_size; + + prot =3D 0; + for (addr =3D host_start; addr < host_end; addr +=3D TARGET_PA= GE_SIZE) { + p =3D page_find(addr >> TARGET_PAGE_BITS); + p->flags |=3D PAGE_WRITE; + prot |=3D p->flags; + + /* + * Since the content will be modified, we must invalidate + * the corresponding translated code. + */ + current_tb_invalidated |=3D + tb_invalidate_phys_page_unwind(addr, pc); + } + mprotect((void *)g2h_untagged(host_start), qemu_host_page_size, + prot & PAGE_BITS); + } + mmap_unlock(); + /* If current TB was invalidated return to main loop */ + return current_tb_invalidated ? 2 : 1; + } + mmap_unlock(); + return 0; +} + static int probe_access_internal(CPUArchState *env, target_ulong addr, int fault_size, MMUAccessType access_type, bool nonfault, uintptr_t ra) --=20 2.34.1
From: Richard Henderson
To: qemu-devel@nongnu.org
Cc: peter.maydell@linaro.org, Alex Bennée
Subject: [PULL v2 07/14] accel/tcg: Use interval tree for user-only page tracking
Date: Tue, 20 Dec 2022 21:03:06 -0800
Message-Id: <20221221050313.2950701-8-richard.henderson@linaro.org>
In-Reply-To: <20221221050313.2950701-1-richard.henderson@linaro.org>
References: <20221221050313.2950701-1-richard.henderson@linaro.org>
MIME-Version: 1.0
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: quoted-printable

Finish weaning user-only away from PageDesc. Using an interval tree to track page permissions means that we can represent very large regions efficiently.
Resolves: https://gitlab.com/qemu-project/qemu/-/issues/290 Resolves: https://gitlab.com/qemu-project/qemu/-/issues/967 Resolves: https://gitlab.com/qemu-project/qemu/-/issues/1214 Reviewed-by: Alex Benn=C3=A9e Signed-off-by: Richard Henderson --- accel/tcg/internal.h | 4 +- accel/tcg/tb-maint.c | 20 +- accel/tcg/user-exec.c | 615 ++++++++++++++++++++++----------- tests/tcg/multiarch/test-vma.c | 22 ++ 4 files changed, 451 insertions(+), 210 deletions(-) create mode 100644 tests/tcg/multiarch/test-vma.c diff --git a/accel/tcg/internal.h b/accel/tcg/internal.h index ddd1fa6bdc..be19bdf088 100644 --- a/accel/tcg/internal.h +++ b/accel/tcg/internal.h @@ -24,9 +24,7 @@ #endif =20 typedef struct PageDesc { -#ifdef CONFIG_USER_ONLY - unsigned long flags; -#else +#ifndef CONFIG_USER_ONLY QemuSpin lock; /* list of TBs intersecting this ram page */ uintptr_t first_tb; diff --git a/accel/tcg/tb-maint.c b/accel/tcg/tb-maint.c index 8da2c64d87..20e86c813d 100644 --- a/accel/tcg/tb-maint.c +++ b/accel/tcg/tb-maint.c @@ -68,15 +68,23 @@ static void tb_remove_all(void) /* Call with mmap_lock held. 
*/ static void tb_record(TranslationBlock *tb, PageDesc *p1, PageDesc *p2) { - /* translator_loop() must have made all TB pages non-writable */ - assert(!(p1->flags & PAGE_WRITE)); - if (p2) { - assert(!(p2->flags & PAGE_WRITE)); - } + target_ulong addr; + int flags; =20 assert_memory_lock(); - tb->itree.last =3D tb->itree.start + tb->size - 1; + + /* translator_loop() must have made all TB pages non-writable */ + addr =3D tb_page_addr0(tb); + flags =3D page_get_flags(addr); + assert(!(flags & PAGE_WRITE)); + + addr =3D tb_page_addr1(tb); + if (addr !=3D -1) { + flags =3D page_get_flags(addr); + assert(!(flags & PAGE_WRITE)); + } + interval_tree_insert(&tb->itree, &tb_root); } =20 diff --git a/accel/tcg/user-exec.c b/accel/tcg/user-exec.c index 22ef780900..a3cecda405 100644 --- a/accel/tcg/user-exec.c +++ b/accel/tcg/user-exec.c @@ -135,106 +135,61 @@ bool handle_sigsegv_accerr_write(CPUState *cpu, sigs= et_t *old_set, } } =20 -/* - * Walks guest process memory "regions" one by one - * and calls callback function 'fn' for each region. - */ -struct walk_memory_regions_data { - walk_memory_regions_fn fn; - void *priv; - target_ulong start; - int prot; -}; +typedef struct PageFlagsNode { + IntervalTreeNode itree; + int flags; +} PageFlagsNode; =20 -static int walk_memory_regions_end(struct walk_memory_regions_data *data, - target_ulong end, int new_prot) +static IntervalTreeRoot pageflags_root; + +static PageFlagsNode *pageflags_find(target_ulong start, target_long last) { - if (data->start !=3D -1u) { - int rc =3D data->fn(data->priv, data->start, end, data->prot); - if (rc !=3D 0) { - return rc; - } - } + IntervalTreeNode *n; =20 - data->start =3D (new_prot ? end : -1u); - data->prot =3D new_prot; - - return 0; + n =3D interval_tree_iter_first(&pageflags_root, start, last); + return n ? 
container_of(n, PageFlagsNode, itree) : NULL; } =20 -static int walk_memory_regions_1(struct walk_memory_regions_data *data, - target_ulong base, int level, void **lp) +static PageFlagsNode *pageflags_next(PageFlagsNode *p, target_ulong start, + target_long last) { - target_ulong pa; - int i, rc; + IntervalTreeNode *n; =20 - if (*lp =3D=3D NULL) { - return walk_memory_regions_end(data, base, 0); - } - - if (level =3D=3D 0) { - PageDesc *pd =3D *lp; - - for (i =3D 0; i < V_L2_SIZE; ++i) { - int prot =3D pd[i].flags; - - pa =3D base | (i << TARGET_PAGE_BITS); - if (prot !=3D data->prot) { - rc =3D walk_memory_regions_end(data, pa, prot); - if (rc !=3D 0) { - return rc; - } - } - } - } else { - void **pp =3D *lp; - - for (i =3D 0; i < V_L2_SIZE; ++i) { - pa =3D base | ((target_ulong)i << - (TARGET_PAGE_BITS + V_L2_BITS * level)); - rc =3D walk_memory_regions_1(data, pa, level - 1, pp + i); - if (rc !=3D 0) { - return rc; - } - } - } - - return 0; + n =3D interval_tree_iter_next(&p->itree, start, last); + return n ? 
container_of(n, PageFlagsNode, itree) : NULL; } =20 int walk_memory_regions(void *priv, walk_memory_regions_fn fn) { - struct walk_memory_regions_data data; - uintptr_t i, l1_sz =3D v_l1_size; + IntervalTreeNode *n; + int rc =3D 0; =20 - data.fn =3D fn; - data.priv =3D priv; - data.start =3D -1u; - data.prot =3D 0; + mmap_lock(); + for (n =3D interval_tree_iter_first(&pageflags_root, 0, -1); + n !=3D NULL; + n =3D interval_tree_iter_next(n, 0, -1)) { + PageFlagsNode *p =3D container_of(n, PageFlagsNode, itree); =20 - for (i =3D 0; i < l1_sz; i++) { - target_ulong base =3D i << (v_l1_shift + TARGET_PAGE_BITS); - int rc =3D walk_memory_regions_1(&data, base, v_l2_levels, l1_map = + i); + rc =3D fn(priv, n->start, n->last + 1, p->flags); if (rc !=3D 0) { - return rc; + break; } } + mmap_unlock(); =20 - return walk_memory_regions_end(&data, 0, 0); + return rc; } =20 static int dump_region(void *priv, target_ulong start, - target_ulong end, unsigned long prot) + target_ulong end, unsigned long prot) { FILE *f =3D (FILE *)priv; =20 - (void) fprintf(f, TARGET_FMT_lx"-"TARGET_FMT_lx - " "TARGET_FMT_lx" %c%c%c\n", - start, end, end - start, - ((prot & PAGE_READ) ? 'r' : '-'), - ((prot & PAGE_WRITE) ? 'w' : '-'), - ((prot & PAGE_EXEC) ? 'x' : '-')); - + fprintf(f, TARGET_FMT_lx"-"TARGET_FMT_lx" "TARGET_FMT_lx" %c%c%c\n", + start, end, end - start, + ((prot & PAGE_READ) ? 'r' : '-'), + ((prot & PAGE_WRITE) ? 'w' : '-'), + ((prot & PAGE_EXEC) ? 
'x' : '-')); return 0; } =20 @@ -242,20 +197,131 @@ static int dump_region(void *priv, target_ulong star= t, void page_dump(FILE *f) { const int length =3D sizeof(target_ulong) * 2; - (void) fprintf(f, "%-*s %-*s %-*s %s\n", + + fprintf(f, "%-*s %-*s %-*s %s\n", length, "start", length, "end", length, "size", "prot"); walk_memory_regions(f, dump_region); } =20 int page_get_flags(target_ulong address) { - PageDesc *p; + PageFlagsNode *p =3D pageflags_find(address, address); =20 - p =3D page_find(address >> TARGET_PAGE_BITS); - if (!p) { + /* + * See util/interval-tree.c re lockless lookups: no false positives but + * there are false negatives. If we find nothing, retry with the mmap + * lock acquired. + */ + if (p) { + return p->flags; + } + if (have_mmap_lock()) { return 0; } - return p->flags; + + mmap_lock(); + p =3D pageflags_find(address, address); + mmap_unlock(); + return p ? p->flags : 0; +} + +/* A subroutine of page_set_flags: insert a new node for [start,last]. */ +static void pageflags_create(target_ulong start, target_ulong last, int fl= ags) +{ + PageFlagsNode *p =3D g_new(PageFlagsNode, 1); + + p->itree.start =3D start; + p->itree.last =3D last; + p->flags =3D flags; + interval_tree_insert(&p->itree, &pageflags_root); +} + +/* A subroutine of page_set_flags: remove everything in [start,last]. */ +static bool pageflags_unset(target_ulong start, target_ulong last) +{ + bool inval_tb =3D false; + + while (true) { + PageFlagsNode *p =3D pageflags_find(start, last); + target_ulong p_last; + + if (!p) { + break; + } + + if (p->flags & PAGE_EXEC) { + inval_tb =3D true; + } + + interval_tree_remove(&p->itree, &pageflags_root); + p_last =3D p->itree.last; + + if (p->itree.start < start) { + /* Truncate the node from the end, or split out the middle. 
*/ + p->itree.last =3D start - 1; + interval_tree_insert(&p->itree, &pageflags_root); + if (last < p_last) { + pageflags_create(last + 1, p_last, p->flags); + break; + } + } else if (p_last <=3D last) { + /* Range completely covers node -- remove it. */ + g_free(p); + } else { + /* Truncate the node from the start. */ + p->itree.start =3D last + 1; + interval_tree_insert(&p->itree, &pageflags_root); + break; + } + } + + return inval_tb; +} + +/* + * A subroutine of page_set_flags: nothing overlaps [start,last], + * but check adjacent mappings and maybe merge into a single range. + */ +static void pageflags_create_merge(target_ulong start, target_ulong last, + int flags) +{ + PageFlagsNode *next =3D NULL, *prev =3D NULL; + + if (start > 0) { + prev =3D pageflags_find(start - 1, start - 1); + if (prev) { + if (prev->flags =3D=3D flags) { + interval_tree_remove(&prev->itree, &pageflags_root); + } else { + prev =3D NULL; + } + } + } + if (last + 1 !=3D 0) { + next =3D pageflags_find(last + 1, last + 1); + if (next) { + if (next->flags =3D=3D flags) { + interval_tree_remove(&next->itree, &pageflags_root); + } else { + next =3D NULL; + } + } + } + + if (prev) { + if (next) { + prev->itree.last =3D next->itree.last; + g_free(next); + } else { + prev->itree.last =3D last; + } + interval_tree_insert(&prev->itree, &pageflags_root); + } else if (next) { + next->itree.start =3D start; + interval_tree_insert(&next->itree, &pageflags_root); + } else { + pageflags_create(start, last, flags); + } } =20 /* @@ -267,6 +333,146 @@ int page_get_flags(target_ulong address) #endif #define PAGE_STICKY (PAGE_ANON | PAGE_PASSTHROUGH | PAGE_TARGET_STICKY) =20 +/* A subroutine of page_set_flags: add flags to [start,last]. 
*/ +static bool pageflags_set_clear(target_ulong start, target_ulong last, + int set_flags, int clear_flags) +{ + PageFlagsNode *p; + target_ulong p_start, p_last; + int p_flags, merge_flags; + bool inval_tb =3D false; + + restart: + p =3D pageflags_find(start, last); + if (!p) { + if (set_flags) { + pageflags_create_merge(start, last, set_flags); + } + goto done; + } + + p_start =3D p->itree.start; + p_last =3D p->itree.last; + p_flags =3D p->flags; + /* Using mprotect on a page does not change sticky bits. */ + merge_flags =3D (p_flags & ~clear_flags) | set_flags; + + /* + * Need to flush if an overlapping executable region + * removes exec, or adds write. + */ + if ((p_flags & PAGE_EXEC) + && (!(merge_flags & PAGE_EXEC) + || (merge_flags & ~p_flags & PAGE_WRITE))) { + inval_tb =3D true; + } + + /* + * If there is an exact range match, update and return without + * attempting to merge with adjacent regions. + */ + if (start =3D=3D p_start && last =3D=3D p_last) { + if (merge_flags) { + p->flags =3D merge_flags; + } else { + interval_tree_remove(&p->itree, &pageflags_root); + g_free(p); + } + goto done; + } + + /* + * If sticky bits affect the original mapping, then we must be more + * careful about the existing intervals and the separate flags. 
+ */
+ if (set_flags != merge_flags) {
+ if (p_start < start) {
+ interval_tree_remove(&p->itree, &pageflags_root);
+ p->itree.last = start - 1;
+ interval_tree_insert(&p->itree, &pageflags_root);
+
+ if (last < p_last) {
+ if (merge_flags) {
+ pageflags_create(start, last, merge_flags);
+ }
+ pageflags_create(last + 1, p_last, p_flags);
+ } else {
+ if (merge_flags) {
+ pageflags_create(start, p_last, merge_flags);
+ }
+ if (p_last < last) {
+ start = p_last + 1;
+ goto restart;
+ }
+ }
+ } else {
+ if (start < p_start && set_flags) {
+ pageflags_create(start, p_start - 1, set_flags);
+ }
+ if (last < p_last) {
+ interval_tree_remove(&p->itree, &pageflags_root);
+ p->itree.start = last + 1;
+ interval_tree_insert(&p->itree, &pageflags_root);
+ if (merge_flags) {
+ pageflags_create(start, last, merge_flags);
+ }
+ } else {
+ if (merge_flags) {
+ p->flags = merge_flags;
+ } else {
+ interval_tree_remove(&p->itree, &pageflags_root);
+ g_free(p);
+ }
+ if (p_last < last) {
+ start = p_last + 1;
+ goto restart;
+ }
+ }
+ }
+ goto done;
+ }
+
+ /* If flags are not changing for this range, incorporate it. */
+ if (set_flags == p_flags) {
+ if (start < p_start) {
+ interval_tree_remove(&p->itree, &pageflags_root);
+ p->itree.start = start;
+ interval_tree_insert(&p->itree, &pageflags_root);
+ }
+ if (p_last < last) {
+ start = p_last + 1;
+ goto restart;
+ }
+ goto done;
+ }
+
+ /* Maybe split out head and/or tail ranges with the original flags.
 */
+ interval_tree_remove(&p->itree, &pageflags_root);
+ if (p_start < start) {
+ p->itree.last = start - 1;
+ interval_tree_insert(&p->itree, &pageflags_root);
+
+ if (p_last < last) {
+ goto restart;
+ }
+ if (last < p_last) {
+ pageflags_create(last + 1, p_last, p_flags);
+ }
+ } else if (last < p_last) {
+ p->itree.start = last + 1;
+ interval_tree_insert(&p->itree, &pageflags_root);
+ } else {
+ g_free(p);
+ goto restart;
+ }
+ if (set_flags) {
+ pageflags_create(start, last, set_flags);
+ }
+
+ done:
+ return inval_tb;
+}
+
 /*
  * Modify the flags of a page and invalidate the code if necessary.
  * The flag PAGE_WRITE_ORG is positioned automatically depending
@@ -274,49 +480,41 @@ int page_get_flags(target_ulong address)
  */
 void page_set_flags(target_ulong start, target_ulong end, int flags)
 {
- target_ulong addr, len;
- bool reset, inval_tb = false;
+ target_ulong last;
+ bool reset = false;
+ bool inval_tb = false;

 /* This function should never be called with addresses outside the
 guest address space.  If this assert fires, it probably indicates
 a missing call to h2g_valid. */
- assert(end - 1 <= GUEST_ADDR_MAX);
 assert(start < end);
+ assert(end - 1 <= GUEST_ADDR_MAX);
 /* Only set PAGE_ANON with new mappings. */
 assert(!(flags & PAGE_ANON) || (flags & PAGE_RESET));
 assert_memory_lock();

 start = start & TARGET_PAGE_MASK;
 end = TARGET_PAGE_ALIGN(end);
+ last = end - 1;

- if (flags & PAGE_WRITE) {
- flags |= PAGE_WRITE_ORG;
- }
- reset = !(flags & PAGE_VALID) || (flags & PAGE_RESET);
- if (reset) {
- page_reset_target_data(start, end);
- }
- flags &= ~PAGE_RESET;
-
- for (addr = start, len = end - start;
-      len != 0;
-      len -= TARGET_PAGE_SIZE, addr += TARGET_PAGE_SIZE) {
- PageDesc *p = page_find_alloc(addr >> TARGET_PAGE_BITS, true);
-
- /*
- * If the page was executable, but is reset, or is no longer
- * executable, or has become writable, then invalidate any code.
- */
- if ((p->flags & PAGE_EXEC)
-     && (reset ||
-         !(flags & PAGE_EXEC) ||
-         (flags & ~p->flags & PAGE_WRITE))) {
- inval_tb = true;
+ if (!(flags & PAGE_VALID)) {
+ flags = 0;
+ } else {
+ reset = flags & PAGE_RESET;
+ flags &= ~PAGE_RESET;
+ if (flags & PAGE_WRITE) {
+ flags |= PAGE_WRITE_ORG;
 }
- /* Using mprotect on a page does not change sticky bits. */
- p->flags = (reset ? 0 : p->flags & PAGE_STICKY) | flags;
 }

+ if (!flags || reset) {
+ page_reset_target_data(start, end);
+ inval_tb |= pageflags_unset(start, last);
+ }
+ if (flags) {
+ inval_tb |= pageflags_set_clear(start, last, flags,
+                                 ~(reset ? 0 : PAGE_STICKY));
+ }
 if (inval_tb) {
 tb_invalidate_phys_range(start, end);
 }
@@ -324,87 +522,89 @@ void page_set_flags(target_ulong start, target_ulong end, int flags)

 int page_check_range(target_ulong start, target_ulong len, int flags)
 {
- PageDesc *p;
- target_ulong end;
- target_ulong addr;
-
- /*
- * This function should never be called with addresses outside the
- * guest address space.  If this assert fires, it probably indicates
- * a missing call to h2g_valid.
- */
- if (TARGET_ABI_BITS > L1_MAP_ADDR_SPACE_BITS) {
- assert(start < ((target_ulong)1 << L1_MAP_ADDR_SPACE_BITS));
- }
+ target_ulong last;

 if (len == 0) {
- return 0;
- }
- if (start + len - 1 < start) {
- /* We've wrapped around.
 */
- return -1;
+ return 0;  /* trivial length */
 }

- /* must do before we loose bits in the next step */
- end = TARGET_PAGE_ALIGN(start + len);
- start = start & TARGET_PAGE_MASK;
+ last = start + len - 1;
+ if (last < start) {
+ return -1;  /* wrap around */
+ }
+
+ while (true) {
+ PageFlagsNode *p = pageflags_find(start, last);
+ int missing;

- for (addr = start, len = end - start;
-      len != 0;
-      len -= TARGET_PAGE_SIZE, addr += TARGET_PAGE_SIZE) {
- p = page_find(addr >> TARGET_PAGE_BITS);
 if (!p) {
- return -1;
+ return -1;  /* entire region invalid */
 }
- if (!(p->flags & PAGE_VALID)) {
- return -1;
+ if (start < p->itree.start) {
+ return -1;  /* initial bytes invalid */
 }

- if ((flags & PAGE_READ) && !(p->flags & PAGE_READ)) {
- return -1;
+ missing = flags & ~p->flags;
+ if (missing & PAGE_READ) {
+ return -1;  /* page not readable */
 }
- if (flags & PAGE_WRITE) {
+ if (missing & PAGE_WRITE) {
 if (!(p->flags & PAGE_WRITE_ORG)) {
+ return -1;  /* page not writable */
+ }
+ /* Asking about writable, but has been protected: undo. */
+ if (!page_unprotect(start, 0)) {
 return -1;
 }
- /* unprotect the page if it was put read-only because it
-    contains translated code */
- if (!(p->flags & PAGE_WRITE)) {
- if (!page_unprotect(addr, 0)) {
- return -1;
- }
+ /* TODO: page_unprotect should take a range, not a single page. */
+ if (last - start < TARGET_PAGE_SIZE) {
+ return 0;  /* ok */
 }
+ start += TARGET_PAGE_SIZE;
+ continue;
 }
+
+ if (last <= p->itree.last) {
+ return 0;  /* ok */
+ }
+ start = p->itree.last + 1;
 }
- return 0;
 }

-void page_protect(tb_page_addr_t page_addr)
+void page_protect(tb_page_addr_t address)
 {
- target_ulong addr;
- PageDesc *p;
+ PageFlagsNode *p;
+ target_ulong start, last;
 int prot;

- p = page_find(page_addr >> TARGET_PAGE_BITS);
- if (p && (p->flags & PAGE_WRITE)) {
- /*
- * Force the host page as non writable (writes will have a page fault +
- * mprotect overhead).
- */
- page_addr &= qemu_host_page_mask;
- prot = 0;
- for (addr = page_addr; addr < page_addr + qemu_host_page_size;
-      addr += TARGET_PAGE_SIZE) {
+ assert_memory_lock();

- p = page_find(addr >> TARGET_PAGE_BITS);
- if (!p) {
- continue;
- }
+ if (qemu_host_page_size <= TARGET_PAGE_SIZE) {
+ start = address & TARGET_PAGE_MASK;
+ last = start + TARGET_PAGE_SIZE - 1;
+ } else {
+ start = address & qemu_host_page_mask;
+ last = start + qemu_host_page_size - 1;
+ }
+
+ p = pageflags_find(start, last);
+ if (!p) {
+ return;
+ }
+ prot = p->flags;
+
+ if (unlikely(p->itree.last < last)) {
+ /* More than one protection region covers the one host page. */
+ assert(TARGET_PAGE_SIZE < qemu_host_page_size);
+ while ((p = pageflags_next(p, start, last)) != NULL) {
 prot |= p->flags;
- p->flags &= ~PAGE_WRITE;
 }
- mprotect(g2h_untagged(page_addr), qemu_host_page_size,
-          (prot & PAGE_BITS) & ~PAGE_WRITE);
+ }
+
+ if (prot & PAGE_WRITE) {
+ pageflags_set_clear(start, last, 0, PAGE_WRITE);
+ mprotect(g2h_untagged(start), qemu_host_page_size,
+          prot & (PAGE_READ | PAGE_EXEC) ? PROT_READ : PROT_NONE);
 }
 }

@@ -417,10 +617,8 @@ void page_protect(tb_page_addr_t page_addr)
  */
 int page_unprotect(target_ulong address, uintptr_t pc)
 {
- unsigned int prot;
+ PageFlagsNode *p;
 bool current_tb_invalidated;
- PageDesc *p;
- target_ulong host_start, host_end, addr;

 /*
  * Technically this isn't safe inside a signal handler.  However we
@@ -429,40 +627,54 @@ int page_unprotect(target_ulong address, uintptr_t pc)
  */
 mmap_lock();

- p = page_find(address >> TARGET_PAGE_BITS);
- if (!p) {
+ p = pageflags_find(address, address);
+
+ /* If this address was not really writable, nothing to do. */
+ if (!p || !(p->flags & PAGE_WRITE_ORG)) {
 mmap_unlock();
 return 0;
 }

- /*
- * If the page was really writable, then we change its
- * protection back to writable.
- */
- if (p->flags & PAGE_WRITE_ORG) {
- current_tb_invalidated = false;
- if (p->flags & PAGE_WRITE) {
- /*
- * If the page is actually marked WRITE then assume this is because
- * this thread raced with another one which got here first and
- * set the page to PAGE_WRITE and did the TB invalidate for us.
- */
+ current_tb_invalidated = false;
+ if (p->flags & PAGE_WRITE) {
+ /*
+ * If the page is actually marked WRITE then assume this is because
+ * this thread raced with another one which got here first and
+ * set the page to PAGE_WRITE and did the TB invalidate for us.
+ */
 #ifdef TARGET_HAS_PRECISE_SMC
- TranslationBlock *current_tb = tcg_tb_lookup(pc);
- if (current_tb) {
- current_tb_invalidated = tb_cflags(current_tb) & CF_INVALID;
- }
+ TranslationBlock *current_tb = tcg_tb_lookup(pc);
+ if (current_tb) {
+ current_tb_invalidated = tb_cflags(current_tb) & CF_INVALID;
+ }
 #endif
+ } else {
+ target_ulong start, len, i;
+ int prot;
+
+ if (qemu_host_page_size <= TARGET_PAGE_SIZE) {
+ start = address & TARGET_PAGE_MASK;
+ len = TARGET_PAGE_SIZE;
+ prot = p->flags | PAGE_WRITE;
+ pageflags_set_clear(start, start + len - 1, PAGE_WRITE, 0);
+ current_tb_invalidated = tb_invalidate_phys_page_unwind(start, pc);
 } else {
- host_start = address & qemu_host_page_mask;
- host_end = host_start + qemu_host_page_size;
-
+ start = address & qemu_host_page_mask;
+ len = qemu_host_page_size;
 prot = 0;
- for (addr = host_start; addr < host_end; addr += TARGET_PAGE_SIZE) {
- p = page_find(addr >> TARGET_PAGE_BITS);
- p->flags |= PAGE_WRITE;
- prot |= p->flags;

+ for (i = 0; i < len; i += TARGET_PAGE_SIZE) {
+ target_ulong addr = start + i;
+
+ p = pageflags_find(addr, addr);
+ if (p) {
+ prot |= p->flags;
+ if (p->flags & PAGE_WRITE_ORG) {
+ prot |= PAGE_WRITE;
+ pageflags_set_clear(addr, addr + TARGET_PAGE_SIZE - 1,
+                     PAGE_WRITE, 0);
+ }
+ }
 /*
  * Since the content will be modified, we must invalidate
  * the
 * corresponding translated code.
@@ -470,15 +682,16 @@ int page_unprotect(target_ulong address, uintptr_t pc)
 current_tb_invalidated |= tb_invalidate_phys_page_unwind(addr, pc);
 }
- mprotect((void *)g2h_untagged(host_start), qemu_host_page_size,
-          prot & PAGE_BITS);
 }
- mmap_unlock();
- /* If current TB was invalidated return to main loop */
- return current_tb_invalidated ? 2 : 1;
+ if (prot & PAGE_EXEC) {
+ prot = (prot & ~PAGE_EXEC) | PAGE_READ;
+ }
+ mprotect((void *)g2h_untagged(start), len, prot & PAGE_BITS);
 }
 mmap_unlock();
- return 0;
+
+ /* If current TB was invalidated return to main loop */
+ return current_tb_invalidated ? 2 : 1;
 }

 static int probe_access_internal(CPUArchState *env, target_ulong addr,
diff --git a/tests/tcg/multiarch/test-vma.c b/tests/tcg/multiarch/test-vma.c
new file mode 100644
index 0000000000..2893d60334
--- /dev/null
+++ b/tests/tcg/multiarch/test-vma.c
@@ -0,0 +1,22 @@
+/*
+ * Test very large vma allocations.
+ * The qemu out-of-memory condition was within the mmap syscall itself.
+ * If the syscall actually returns with MAP_FAILED, the test succeeded.
+ */
+#include <sys/mman.h>
+
+int main()
+{
+ int n = sizeof(size_t) == 4 ?
+ 32 : 45;
+
+ for (int i = 28; i < n; i++) {
+ size_t l = (size_t)1 << i;
+ void *p = mmap(0, l, PROT_NONE,
+                MAP_PRIVATE | MAP_ANONYMOUS | MAP_NORESERVE, -1, 0);
+ if (p == MAP_FAILED) {
+ break;
+ }
+ munmap(p, l);
+ }
+ return 0;
+}
-- 
2.34.1

From nobody Mon May 20 19:51:38 2024
From: Richard Henderson <richard.henderson@linaro.org>
To: qemu-devel@nongnu.org
Cc: peter.maydell@linaro.org
Subject: [PULL v2 08/14] accel/tcg: Move PageDesc tree into tb-maint.c for system
Date: Tue, 20 Dec 2022 21:03:07 -0800
Message-Id: <20221221050313.2950701-9-richard.henderson@linaro.org>
In-Reply-To: <20221221050313.2950701-1-richard.henderson@linaro.org>
References: <20221221050313.2950701-1-richard.henderson@linaro.org>
Now that PageDesc is not used for user-only, and for system
it is only used for tb maintenance, move the implementation
into tb-maint.c appropriately ifdefed.

We have not yet eliminated all references to PageDesc for
user-only, so retain a typedef to the structure without definition.

Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
---
 accel/tcg/internal.h      |  49 +++------------
 accel/tcg/tb-maint.c      | 120 ++++++++++++++++++++++++++++++++++++--
 accel/tcg/translate-all.c |  95 ------------------------------
 3 files changed, 124 insertions(+), 140 deletions(-)

diff --git a/accel/tcg/internal.h b/accel/tcg/internal.h
index be19bdf088..14b89c4ee8 100644
--- a/accel/tcg/internal.h
+++ b/accel/tcg/internal.h
@@ -23,51 +23,13 @@
 #define assert_memory_lock() tcg_debug_assert(have_mmap_lock())
 #endif

-typedef struct PageDesc {
+typedef struct PageDesc PageDesc;
 #ifndef CONFIG_USER_ONLY
+struct PageDesc {
 QemuSpin lock;
 /* list of TBs intersecting this ram page */
 uintptr_t first_tb;
-#endif
-} PageDesc;
-
-/*
- * In system mode we want L1_MAP to be based on ram offsets,
- * while in user mode we want it to be based on virtual addresses.
- *
- * TODO: For user mode, see the caveat re host vs guest virtual
- * address spaces near GUEST_ADDR_MAX.
- */
-#if !defined(CONFIG_USER_ONLY)
-#if HOST_LONG_BITS < TARGET_PHYS_ADDR_SPACE_BITS
-# define L1_MAP_ADDR_SPACE_BITS HOST_LONG_BITS
-#else
-# define L1_MAP_ADDR_SPACE_BITS TARGET_PHYS_ADDR_SPACE_BITS
-#endif
-#else
-# define L1_MAP_ADDR_SPACE_BITS MIN(HOST_LONG_BITS, TARGET_ABI_BITS)
-#endif
-
-/* Size of the L2 (and L3, etc) page tables. */
-#define V_L2_BITS 10
-#define V_L2_SIZE (1 << V_L2_BITS)
-
-/*
- * L1 Mapping properties
- */
-extern int v_l1_size;
-extern int v_l1_shift;
-extern int v_l2_levels;
-
-/*
- * The bottom level has pointers to PageDesc, and is indexed by
- * anything from 4 to (V_L2_BITS + 3) bits, depending on target page size.
- */
-#define V_L1_MIN_BITS 4
-#define V_L1_MAX_BITS (V_L2_BITS + 3)
-#define V_L1_MAX_SIZE (1 << V_L1_MAX_BITS)
-
-extern void *l1_map[V_L1_MAX_SIZE];
+};

 PageDesc *page_find_alloc(tb_page_addr_t index, bool alloc);

@@ -76,6 +38,11 @@ static inline PageDesc *page_find(tb_page_addr_t index)
 return page_find_alloc(index, false);
 }

+void page_table_config_init(void);
+#else
+static inline void page_table_config_init(void) { }
+#endif
+
 /* list iterators for lists of tagged pointers in TranslationBlock */
 #define TB_FOR_EACH_TAGGED(head, tb, n, field) \
 for (n = (head) & 1, tb = (TranslationBlock *)((head) & ~1); \
diff --git a/accel/tcg/tb-maint.c b/accel/tcg/tb-maint.c
index 20e86c813d..d32e5f80c8 100644
--- a/accel/tcg/tb-maint.c
+++ b/accel/tcg/tb-maint.c
@@ -127,6 +127,111 @@ static PageForEachNext foreach_tb_next(PageForEachNext tb,
 }

 #else
+/*
+ * In system mode we want L1_MAP to be based on ram offsets.
+ */
+#if HOST_LONG_BITS < TARGET_PHYS_ADDR_SPACE_BITS
+# define L1_MAP_ADDR_SPACE_BITS HOST_LONG_BITS
+#else
+# define L1_MAP_ADDR_SPACE_BITS TARGET_PHYS_ADDR_SPACE_BITS
+#endif
+
+/* Size of the L2 (and L3, etc) page tables. */
+#define V_L2_BITS 10
+#define V_L2_SIZE (1 << V_L2_BITS)
+
+/*
+ * L1 Mapping properties
+ */
+static int v_l1_size;
+static int v_l1_shift;
+static int v_l2_levels;
+
+/*
+ * The bottom level has pointers to PageDesc, and is indexed by
+ * anything from 4 to (V_L2_BITS + 3) bits, depending on target page size.
+ */
+#define V_L1_MIN_BITS 4
+#define V_L1_MAX_BITS (V_L2_BITS + 3)
+#define V_L1_MAX_SIZE (1 << V_L1_MAX_BITS)
+
+static void *l1_map[V_L1_MAX_SIZE];
+
+void page_table_config_init(void)
+{
+ uint32_t v_l1_bits;
+
+ assert(TARGET_PAGE_BITS);
+ /* The bits remaining after N lower levels of page tables.
 */
+ v_l1_bits = (L1_MAP_ADDR_SPACE_BITS - TARGET_PAGE_BITS) % V_L2_BITS;
+ if (v_l1_bits < V_L1_MIN_BITS) {
+ v_l1_bits += V_L2_BITS;
+ }
+
+ v_l1_size = 1 << v_l1_bits;
+ v_l1_shift = L1_MAP_ADDR_SPACE_BITS - TARGET_PAGE_BITS - v_l1_bits;
+ v_l2_levels = v_l1_shift / V_L2_BITS - 1;
+
+ assert(v_l1_bits <= V_L1_MAX_BITS);
+ assert(v_l1_shift % V_L2_BITS == 0);
+ assert(v_l2_levels >= 0);
+}
+
+PageDesc *page_find_alloc(tb_page_addr_t index, bool alloc)
+{
+ PageDesc *pd;
+ void **lp;
+ int i;
+
+ /* Level 1.  Always allocated. */
+ lp = l1_map + ((index >> v_l1_shift) & (v_l1_size - 1));
+
+ /* Level 2..N-1. */
+ for (i = v_l2_levels; i > 0; i--) {
+ void **p = qatomic_rcu_read(lp);
+
+ if (p == NULL) {
+ void *existing;
+
+ if (!alloc) {
+ return NULL;
+ }
+ p = g_new0(void *, V_L2_SIZE);
+ existing = qatomic_cmpxchg(lp, NULL, p);
+ if (unlikely(existing)) {
+ g_free(p);
+ p = existing;
+ }
+ }
+
+ lp = p + ((index >> (i * V_L2_BITS)) & (V_L2_SIZE - 1));
+ }
+
+ pd = qatomic_rcu_read(lp);
+ if (pd == NULL) {
+ void *existing;
+
+ if (!alloc) {
+ return NULL;
+ }
+
+ pd = g_new0(PageDesc, V_L2_SIZE);
+ for (int i = 0; i < V_L2_SIZE; i++) {
+ qemu_spin_init(&pd[i].lock);
+ }
+
+ existing = qatomic_cmpxchg(lp, NULL, pd);
+ if (unlikely(existing)) {
+ for (int i = 0; i < V_L2_SIZE; i++) {
+ qemu_spin_destroy(&pd[i].lock);
+ }
+ g_free(pd);
+ pd = existing;
+ }
+ }
+
+ return pd + (index & (V_L2_SIZE - 1));
+}

 /* Set to NULL all the 'first_tb' fields in all PageDescs.
 */
 static void tb_remove_all_1(int level, void **lp)
@@ -420,6 +525,17 @@ static void tb_phys_invalidate__locked(TranslationBlock *tb)
 qemu_thread_jit_execute();
 }

+#ifdef CONFIG_USER_ONLY
+static inline void page_lock_pair(PageDesc **ret_p1, tb_page_addr_t phys1,
+                                  PageDesc **ret_p2, tb_page_addr_t phys2,
+                                  bool alloc)
+{
+ *ret_p1 = NULL;
+ *ret_p2 = NULL;
+}
+static inline void page_lock_tb(const TranslationBlock *tb) { }
+static inline void page_unlock_tb(const TranslationBlock *tb) { }
+#else
 static void page_lock_pair(PageDesc **ret_p1, tb_page_addr_t phys1,
                            PageDesc **ret_p2, tb_page_addr_t phys2, bool alloc)
 {
@@ -460,10 +576,6 @@ static void page_lock_pair(PageDesc **ret_p1, tb_page_addr_t phys1,
 }
 }

-#ifdef CONFIG_USER_ONLY
-static inline void page_lock_tb(const TranslationBlock *tb) { }
-static inline void page_unlock_tb(const TranslationBlock *tb) { }
-#else
 /* lock the page(s) of a TB in the correct acquisition order */
 static void page_lock_tb(const TranslationBlock *tb)
 {
diff --git a/accel/tcg/translate-all.c b/accel/tcg/translate-all.c
index cc3ec36d7a..40f7b91c4b 100644
--- a/accel/tcg/translate-all.c
+++ b/accel/tcg/translate-all.c
@@ -114,37 +114,8 @@ QEMU_BUILD_BUG_ON(CPU_TRACE_DSTATE_MAX_EVENTS >
 sizeof_field(TranslationBlock, trace_vcpu_dstate) * BITS_PER_BYTE);

-/*
- * L1 Mapping properties
- */
-int v_l1_size;
-int v_l1_shift;
-int v_l2_levels;
-
-void *l1_map[V_L1_MAX_SIZE];
-
 TBContext tb_ctx;

-static void page_table_config_init(void)
-{
- uint32_t v_l1_bits;
-
- assert(TARGET_PAGE_BITS);
- /* The bits remaining after N lower levels of page tables.
- */
- v_l1_bits = (L1_MAP_ADDR_SPACE_BITS - TARGET_PAGE_BITS) % V_L2_BITS;
- if (v_l1_bits < V_L1_MIN_BITS) {
- v_l1_bits += V_L2_BITS;
- }
-
- v_l1_size = 1 << v_l1_bits;
- v_l1_shift = L1_MAP_ADDR_SPACE_BITS - TARGET_PAGE_BITS - v_l1_bits;
- v_l2_levels = v_l1_shift / V_L2_BITS - 1;
-
- assert(v_l1_bits <= V_L1_MAX_BITS);
- assert(v_l1_shift % V_L2_BITS == 0);
- assert(v_l2_levels >= 0);
-}
-
 /* Encode VAL as a signed leb128 sequence at P.
    Return P incremented past the encoded value. */
 static uint8_t *encode_sleb128(uint8_t *p, target_long val)
@@ -339,72 +310,6 @@ void page_init(void)
 page_table_config_init();
 }

-PageDesc *page_find_alloc(tb_page_addr_t index, bool alloc)
-{
- PageDesc *pd;
- void **lp;
- int i;
-
- /* Level 1.  Always allocated. */
- lp = l1_map + ((index >> v_l1_shift) & (v_l1_size - 1));
-
- /* Level 2..N-1. */
- for (i = v_l2_levels; i > 0; i--) {
- void **p = qatomic_rcu_read(lp);
-
- if (p == NULL) {
- void *existing;
-
- if (!alloc) {
- return NULL;
- }
- p = g_new0(void *, V_L2_SIZE);
- existing = qatomic_cmpxchg(lp, NULL, p);
- if (unlikely(existing)) {
- g_free(p);
- p = existing;
- }
- }
-
- lp = p + ((index >> (i * V_L2_BITS)) & (V_L2_SIZE - 1));
- }
-
- pd = qatomic_rcu_read(lp);
- if (pd == NULL) {
- void *existing;
-
- if (!alloc) {
- return NULL;
- }
- pd = g_new0(PageDesc, V_L2_SIZE);
-#ifndef CONFIG_USER_ONLY
- {
- int i;
-
- for (i = 0; i < V_L2_SIZE; i++) {
- qemu_spin_init(&pd[i].lock);
- }
- }
-#endif
- existing = qatomic_cmpxchg(lp, NULL, pd);
- if (unlikely(existing)) {
-#ifndef CONFIG_USER_ONLY
- {
- int i;
-
- for (i = 0; i < V_L2_SIZE; i++) {
- qemu_spin_destroy(&pd[i].lock);
- }
- }
-#endif
- g_free(pd);
- pd = existing;
- }
- }
-
- return pd + (index & (V_L2_SIZE - 1));
-}
-
 /* In user-mode page locks aren't used; mmap_lock is enough */
 #ifdef CONFIG_USER_ONLY
 struct page_collection *
-- 
2.34.1

From nobody Mon May 20 19:51:38 2024
From: Richard Henderson <richard.henderson@linaro.org>
To: qemu-devel@nongnu.org
Cc: peter.maydell@linaro.org, Alex Bennée
Subject: [PULL v2 09/14] accel/tcg: Move remainder of page locking to tb-maint.c
Date: Tue, 20 Dec 2022 21:03:08 -0800
Message-Id: <20221221050313.2950701-10-richard.henderson@linaro.org>
In-Reply-To: <20221221050313.2950701-1-richard.henderson@linaro.org>
References: <20221221050313.2950701-1-richard.henderson@linaro.org>

The only things that still touch PageDesc in translate-all.c
are some locking routines related to tb-maint.c which have not
yet been moved.  Do so now.
Move some code up in tb-maint.c as well, to untangle the maze of ifdefs, and allow a sensible final ordering. Move some declarations from exec/translate-all.h to internal.h, as they are only used within accel/tcg/. Reviewed-by: Alex Benn=C3=A9e Signed-off-by: Richard Henderson --- accel/tcg/internal.h | 68 ++--- include/exec/translate-all.h | 6 - accel/tcg/tb-maint.c | 473 +++++++++++++++++++++++++++++------ accel/tcg/translate-all.c | 301 ---------------------- 4 files changed, 411 insertions(+), 437 deletions(-) diff --git a/accel/tcg/internal.h b/accel/tcg/internal.h index 14b89c4ee8..e1429a53ac 100644 --- a/accel/tcg/internal.h +++ b/accel/tcg/internal.h @@ -23,62 +23,28 @@ #define assert_memory_lock() tcg_debug_assert(have_mmap_lock()) #endif =20 -typedef struct PageDesc PageDesc; -#ifndef CONFIG_USER_ONLY -struct PageDesc { - QemuSpin lock; - /* list of TBs intersecting this ram page */ - uintptr_t first_tb; -}; - -PageDesc *page_find_alloc(tb_page_addr_t index, bool alloc); - -static inline PageDesc *page_find(tb_page_addr_t index) -{ - return page_find_alloc(index, false); -} - -void page_table_config_init(void); -#else -static inline void page_table_config_init(void) { } -#endif - -/* list iterators for lists of tagged pointers in TranslationBlock */ -#define TB_FOR_EACH_TAGGED(head, tb, n, field) \ - for (n =3D (head) & 1, tb =3D (TranslationBlock *)((head) & ~1); = \ - tb; tb =3D (TranslationBlock *)tb->field[n], n =3D (uintptr_t)tb = & 1, \ - tb =3D (TranslationBlock *)((uintptr_t)tb & ~1)) - -#define TB_FOR_EACH_JMP(head_tb, tb, n) \ - TB_FOR_EACH_TAGGED((head_tb)->jmp_list_head, tb, n, jmp_list_next) - -/* In user-mode page locks aren't used; mmap_lock is enough */ -#ifdef CONFIG_USER_ONLY -#define assert_page_locked(pd) tcg_debug_assert(have_mmap_lock()) -static inline void page_lock(PageDesc *pd) { } -static inline void page_unlock(PageDesc *pd) { } -#else -#ifdef CONFIG_DEBUG_TCG -void do_assert_page_locked(const PageDesc *pd, const char *file, int 
line); -#define assert_page_locked(pd) do_assert_page_locked(pd, __FILE__, __LINE_= _) -#else -#define assert_page_locked(pd) -#endif -void page_lock(PageDesc *pd); -void page_unlock(PageDesc *pd); - -/* TODO: For now, still shared with translate-all.c for system mode. */ -typedef int PageForEachNext; -#define PAGE_FOR_EACH_TB(start, end, pagedesc, tb, n) \ - TB_FOR_EACH_TAGGED((pagedesc)->first_tb, tb, n, page_next) - -#endif -#if !defined(CONFIG_USER_ONLY) && defined(CONFIG_DEBUG_TCG) +#if defined(CONFIG_SOFTMMU) && defined(CONFIG_DEBUG_TCG) void assert_no_pages_locked(void); #else static inline void assert_no_pages_locked(void) { } #endif =20 +#ifdef CONFIG_USER_ONLY +static inline void page_table_config_init(void) { } +#else +void page_table_config_init(void); +#endif + +#ifdef CONFIG_SOFTMMU +struct page_collection; +void tb_invalidate_phys_page_fast(struct page_collection *pages, + tb_page_addr_t start, int len, + uintptr_t retaddr); +struct page_collection *page_collection_lock(tb_page_addr_t start, + tb_page_addr_t end); +void page_collection_unlock(struct page_collection *set); +#endif /* CONFIG_SOFTMMU */ + TranslationBlock *tb_gen_code(CPUState *cpu, target_ulong pc, target_ulong cs_base, uint32_t flags, int cflags); diff --git a/include/exec/translate-all.h b/include/exec/translate-all.h index 3e9cb91565..88602ae8d8 100644 --- a/include/exec/translate-all.h +++ b/include/exec/translate-all.h @@ -23,12 +23,6 @@ =20 =20 /* translate-all.c */ -struct page_collection *page_collection_lock(tb_page_addr_t start, - tb_page_addr_t end); -void page_collection_unlock(struct page_collection *set); -void tb_invalidate_phys_page_fast(struct page_collection *pages, - tb_page_addr_t start, int len, - uintptr_t retaddr); void tb_invalidate_phys_page(tb_page_addr_t addr); void tb_check_watchpoint(CPUState *cpu, uintptr_t retaddr); =20 diff --git a/accel/tcg/tb-maint.c b/accel/tcg/tb-maint.c index d32e5f80c8..1676d359f2 100644 --- a/accel/tcg/tb-maint.c +++ 
b/accel/tcg/tb-maint.c @@ -30,6 +30,15 @@ #include "internal.h" =20 =20 +/* List iterators for lists of tagged pointers in TranslationBlock. */ +#define TB_FOR_EACH_TAGGED(head, tb, n, field) \ + for (n =3D (head) & 1, tb =3D (TranslationBlock *)((head) & ~1); = \ + tb; tb =3D (TranslationBlock *)tb->field[n], n =3D (uintptr_t)tb = & 1, \ + tb =3D (TranslationBlock *)((uintptr_t)tb & ~1)) + +#define TB_FOR_EACH_JMP(head_tb, tb, n) \ + TB_FOR_EACH_TAGGED((head_tb)->jmp_list_head, tb, n, jmp_list_next) + static bool tb_cmp(const void *ap, const void *bp) { const TranslationBlock *a =3D ap; @@ -51,7 +60,27 @@ void tb_htable_init(void) qht_init(&tb_ctx.htable, tb_cmp, CODE_GEN_HTABLE_SIZE, mode); } =20 +typedef struct PageDesc PageDesc; + #ifdef CONFIG_USER_ONLY + +/* + * In user-mode page locks aren't used; mmap_lock is enough. + */ +#define assert_page_locked(pd) tcg_debug_assert(have_mmap_lock()) + +static inline void page_lock_pair(PageDesc **ret_p1, tb_page_addr_t phys1, + PageDesc **ret_p2, tb_page_addr_t phys2, + bool alloc) +{ + *ret_p1 =3D NULL; + *ret_p2 =3D NULL; +} + +static inline void page_unlock(PageDesc *pd) { } +static inline void page_lock_tb(const TranslationBlock *tb) { } +static inline void page_unlock_tb(const TranslationBlock *tb) { } + /* * For user-only, since we are protecting all of memory with a single lock, * and because the two pages of a TranslationBlock are always contiguous, @@ -157,6 +186,12 @@ static int v_l2_levels; =20 static void *l1_map[V_L1_MAX_SIZE]; =20 +struct PageDesc { + QemuSpin lock; + /* list of TBs intersecting this ram page */ + uintptr_t first_tb; +}; + void page_table_config_init(void) { uint32_t v_l1_bits; @@ -177,7 +212,7 @@ void page_table_config_init(void) assert(v_l2_levels >=3D 0); } =20 -PageDesc *page_find_alloc(tb_page_addr_t index, bool alloc) +static PageDesc *page_find_alloc(tb_page_addr_t index, bool alloc) { PageDesc *pd; void **lp; @@ -233,6 +268,303 @@ PageDesc *page_find_alloc(tb_page_addr_t index, 
bool = alloc) return pd + (index & (V_L2_SIZE - 1)); } =20 +static inline PageDesc *page_find(tb_page_addr_t index) +{ + return page_find_alloc(index, false); +} + +/** + * struct page_entry - page descriptor entry + * @pd: pointer to the &struct PageDesc of the page this entry represe= nts + * @index: page index of the page + * @locked: whether the page is locked + * + * This struct helps us keep track of the locked state of a page, without + * bloating &struct PageDesc. + * + * A page lock protects accesses to all fields of &struct PageDesc. + * + * See also: &struct page_collection. + */ +struct page_entry { + PageDesc *pd; + tb_page_addr_t index; + bool locked; +}; + +/** + * struct page_collection - tracks a set of pages (i.e. &struct page_entry= 's) + * @tree: Binary search tree (BST) of the pages, with key =3D=3D page in= dex + * @max: Pointer to the page in @tree with the highest page index + * + * To avoid deadlock we lock pages in ascending order of page index. + * When operating on a set of pages, we need to keep track of them so that + * we can lock them in order and also unlock them later. For this we colle= ct + * pages (i.e. &struct page_entry's) in a binary search @tree. Given that = the + * @tree implementation we use does not provide an O(1) operation to obtai= n the + * highest-ranked element, we use @max to keep track of the inserted page + * with the highest index. This is valuable because if a page is not in + * the tree and its index is higher than @max's, then we can lock it + * without breaking the locking order rule. + * + * Note on naming: 'struct page_set' would be shorter, but we already have= a few + * page_set_*() helpers, so page_collection is used instead to avoid confu= sion. + * + * See also: page_collection_lock(). 
+ */ +struct page_collection { + GTree *tree; + struct page_entry *max; +}; + +typedef int PageForEachNext; +#define PAGE_FOR_EACH_TB(start, end, pagedesc, tb, n) \ + TB_FOR_EACH_TAGGED((pagedesc)->first_tb, tb, n, page_next) + +#ifdef CONFIG_DEBUG_TCG + +static __thread GHashTable *ht_pages_locked_debug; + +static void ht_pages_locked_debug_init(void) +{ + if (ht_pages_locked_debug) { + return; + } + ht_pages_locked_debug =3D g_hash_table_new(NULL, NULL); +} + +static bool page_is_locked(const PageDesc *pd) +{ + PageDesc *found; + + ht_pages_locked_debug_init(); + found =3D g_hash_table_lookup(ht_pages_locked_debug, pd); + return !!found; +} + +static void page_lock__debug(PageDesc *pd) +{ + ht_pages_locked_debug_init(); + g_assert(!page_is_locked(pd)); + g_hash_table_insert(ht_pages_locked_debug, pd, pd); +} + +static void page_unlock__debug(const PageDesc *pd) +{ + bool removed; + + ht_pages_locked_debug_init(); + g_assert(page_is_locked(pd)); + removed =3D g_hash_table_remove(ht_pages_locked_debug, pd); + g_assert(removed); +} + +static void do_assert_page_locked(const PageDesc *pd, + const char *file, int line) +{ + if (unlikely(!page_is_locked(pd))) { + error_report("assert_page_lock: PageDesc %p not locked @ %s:%d", + pd, file, line); + abort(); + } +} +#define assert_page_locked(pd) do_assert_page_locked(pd, __FILE__, __LINE_= _) + +void assert_no_pages_locked(void) +{ + ht_pages_locked_debug_init(); + g_assert(g_hash_table_size(ht_pages_locked_debug) =3D=3D 0); +} + +#else /* !CONFIG_DEBUG_TCG */ + +static inline void page_lock__debug(const PageDesc *pd) { } +static inline void page_unlock__debug(const PageDesc *pd) { } +static inline void assert_page_locked(const PageDesc *pd) { } + +#endif /* CONFIG_DEBUG_TCG */ + +static void page_lock(PageDesc *pd) +{ + page_lock__debug(pd); + qemu_spin_lock(&pd->lock); +} + +static void page_unlock(PageDesc *pd) +{ + qemu_spin_unlock(&pd->lock); + page_unlock__debug(pd); +} + +static inline struct page_entry * 
+page_entry_new(PageDesc *pd, tb_page_addr_t index) +{ + struct page_entry *pe =3D g_malloc(sizeof(*pe)); + + pe->index =3D index; + pe->pd =3D pd; + pe->locked =3D false; + return pe; +} + +static void page_entry_destroy(gpointer p) +{ + struct page_entry *pe =3D p; + + g_assert(pe->locked); + page_unlock(pe->pd); + g_free(pe); +} + +/* returns false on success */ +static bool page_entry_trylock(struct page_entry *pe) +{ + bool busy; + + busy =3D qemu_spin_trylock(&pe->pd->lock); + if (!busy) { + g_assert(!pe->locked); + pe->locked =3D true; + page_lock__debug(pe->pd); + } + return busy; +} + +static void do_page_entry_lock(struct page_entry *pe) +{ + page_lock(pe->pd); + g_assert(!pe->locked); + pe->locked =3D true; +} + +static gboolean page_entry_lock(gpointer key, gpointer value, gpointer dat= a) +{ + struct page_entry *pe =3D value; + + do_page_entry_lock(pe); + return FALSE; +} + +static gboolean page_entry_unlock(gpointer key, gpointer value, gpointer d= ata) +{ + struct page_entry *pe =3D value; + + if (pe->locked) { + pe->locked =3D false; + page_unlock(pe->pd); + } + return FALSE; +} + +/* + * Trylock a page, and if successful, add the page to a collection. + * Returns true ("busy") if the page could not be locked; false otherwise. + */ +static bool page_trylock_add(struct page_collection *set, tb_page_addr_t a= ddr) +{ + tb_page_addr_t index =3D addr >> TARGET_PAGE_BITS; + struct page_entry *pe; + PageDesc *pd; + + pe =3D g_tree_lookup(set->tree, &index); + if (pe) { + return false; + } + + pd =3D page_find(index); + if (pd =3D=3D NULL) { + return false; + } + + pe =3D page_entry_new(pd, index); + g_tree_insert(set->tree, &pe->index, pe); + + /* + * If this is either (1) the first insertion or (2) a page whose index + * is higher than any other so far, just lock the page and move on. 
+ */ + if (set->max =3D=3D NULL || pe->index > set->max->index) { + set->max =3D pe; + do_page_entry_lock(pe); + return false; + } + /* + * Try to acquire out-of-order lock; if busy, return busy so that we a= cquire + * locks in order. + */ + return page_entry_trylock(pe); +} + +static gint tb_page_addr_cmp(gconstpointer ap, gconstpointer bp, gpointer = udata) +{ + tb_page_addr_t a =3D *(const tb_page_addr_t *)ap; + tb_page_addr_t b =3D *(const tb_page_addr_t *)bp; + + if (a =3D=3D b) { + return 0; + } else if (a < b) { + return -1; + } + return 1; +} + +/* + * Lock a range of pages ([@start,@end[) as well as the pages of all + * intersecting TBs. + * Locking order: acquire locks in ascending order of page index. + */ +struct page_collection * +page_collection_lock(tb_page_addr_t start, tb_page_addr_t end) +{ + struct page_collection *set =3D g_malloc(sizeof(*set)); + tb_page_addr_t index; + PageDesc *pd; + + start >>=3D TARGET_PAGE_BITS; + end >>=3D TARGET_PAGE_BITS; + g_assert(start <=3D end); + + set->tree =3D g_tree_new_full(tb_page_addr_cmp, NULL, NULL, + page_entry_destroy); + set->max =3D NULL; + assert_no_pages_locked(); + + retry: + g_tree_foreach(set->tree, page_entry_lock, NULL); + + for (index =3D start; index <=3D end; index++) { + TranslationBlock *tb; + PageForEachNext n; + + pd =3D page_find(index); + if (pd =3D=3D NULL) { + continue; + } + if (page_trylock_add(set, index << TARGET_PAGE_BITS)) { + g_tree_foreach(set->tree, page_entry_unlock, NULL); + goto retry; + } + assert_page_locked(pd); + PAGE_FOR_EACH_TB(unused, unused, pd, tb, n) { + if (page_trylock_add(set, tb_page_addr0(tb)) || + (tb_page_addr1(tb) !=3D -1 && + page_trylock_add(set, tb_page_addr1(tb)))) { + /* drop all locks, and reacquire in order */ + g_tree_foreach(set->tree, page_entry_unlock, NULL); + goto retry; + } + } + } + return set; +} + +void page_collection_unlock(struct page_collection *set) +{ + /* entries are unlocked and freed via page_entry_destroy */ + 
g_tree_destroy(set->tree); + g_free(set); +} + /* Set to NULL all the 'first_tb' fields in all PageDescs. */ static void tb_remove_all_1(int level, void **lp) { @@ -329,6 +661,66 @@ static void tb_remove(TranslationBlock *tb) tb_page_remove(pd, tb); } } + +static void page_lock_pair(PageDesc **ret_p1, tb_page_addr_t phys1, + PageDesc **ret_p2, tb_page_addr_t phys2, bool a= lloc) +{ + PageDesc *p1, *p2; + tb_page_addr_t page1; + tb_page_addr_t page2; + + assert_memory_lock(); + g_assert(phys1 !=3D -1); + + page1 =3D phys1 >> TARGET_PAGE_BITS; + page2 =3D phys2 >> TARGET_PAGE_BITS; + + p1 =3D page_find_alloc(page1, alloc); + if (ret_p1) { + *ret_p1 =3D p1; + } + if (likely(phys2 =3D=3D -1)) { + page_lock(p1); + return; + } else if (page1 =3D=3D page2) { + page_lock(p1); + if (ret_p2) { + *ret_p2 =3D p1; + } + return; + } + p2 =3D page_find_alloc(page2, alloc); + if (ret_p2) { + *ret_p2 =3D p2; + } + if (page1 < page2) { + page_lock(p1); + page_lock(p2); + } else { + page_lock(p2); + page_lock(p1); + } +} + +/* lock the page(s) of a TB in the correct acquisition order */ +static void page_lock_tb(const TranslationBlock *tb) +{ + page_lock_pair(NULL, tb_page_addr0(tb), NULL, tb_page_addr1(tb), false= ); +} + +static void page_unlock_tb(const TranslationBlock *tb) +{ + PageDesc *p1 =3D page_find(tb_page_addr0(tb) >> TARGET_PAGE_BITS); + + page_unlock(p1); + if (unlikely(tb_page_addr1(tb) !=3D -1)) { + PageDesc *p2 =3D page_find(tb_page_addr1(tb) >> TARGET_PAGE_BITS); + + if (p2 !=3D p1) { + page_unlock(p2); + } + } +} #endif /* CONFIG_USER_ONLY */ =20 /* flush all the translation blocks */ @@ -525,78 +917,6 @@ static void tb_phys_invalidate__locked(TranslationBloc= k *tb) qemu_thread_jit_execute(); } =20 -#ifdef CONFIG_USER_ONLY -static inline void page_lock_pair(PageDesc **ret_p1, tb_page_addr_t phys1, - PageDesc **ret_p2, tb_page_addr_t phys2, - bool alloc) -{ - *ret_p1 =3D NULL; - *ret_p2 =3D NULL; -} -static inline void page_lock_tb(const TranslationBlock *tb) { } 
-static inline void page_unlock_tb(const TranslationBlock *tb) { } -#else -static void page_lock_pair(PageDesc **ret_p1, tb_page_addr_t phys1, - PageDesc **ret_p2, tb_page_addr_t phys2, bool a= lloc) -{ - PageDesc *p1, *p2; - tb_page_addr_t page1; - tb_page_addr_t page2; - - assert_memory_lock(); - g_assert(phys1 !=3D -1); - - page1 =3D phys1 >> TARGET_PAGE_BITS; - page2 =3D phys2 >> TARGET_PAGE_BITS; - - p1 =3D page_find_alloc(page1, alloc); - if (ret_p1) { - *ret_p1 =3D p1; - } - if (likely(phys2 =3D=3D -1)) { - page_lock(p1); - return; - } else if (page1 =3D=3D page2) { - page_lock(p1); - if (ret_p2) { - *ret_p2 =3D p1; - } - return; - } - p2 =3D page_find_alloc(page2, alloc); - if (ret_p2) { - *ret_p2 =3D p2; - } - if (page1 < page2) { - page_lock(p1); - page_lock(p2); - } else { - page_lock(p2); - page_lock(p1); - } -} - -/* lock the page(s) of a TB in the correct acquisition order */ -static void page_lock_tb(const TranslationBlock *tb) -{ - page_lock_pair(NULL, tb_page_addr0(tb), NULL, tb_page_addr1(tb), false= ); -} - -static void page_unlock_tb(const TranslationBlock *tb) -{ - PageDesc *p1 =3D page_find(tb_page_addr0(tb) >> TARGET_PAGE_BITS); - - page_unlock(p1); - if (unlikely(tb_page_addr1(tb) !=3D -1)) { - PageDesc *p2 =3D page_find(tb_page_addr1(tb) >> TARGET_PAGE_BITS); - - if (p2 !=3D p1) { - page_unlock(p2); - } - } -} -#endif - /* * Invalidate one TB. * Called with mmap_lock held in user-mode. @@ -746,8 +1066,7 @@ bool tb_invalidate_phys_page_unwind(tb_page_addr_t add= r, uintptr_t pc) #else /* * @p must be non-NULL. - * user-mode: call with mmap_lock held. - * !user-mode: call with all @pages locked. + * Call with all @pages locked. */ static void tb_invalidate_phys_page_range__locked(struct page_collection *pages, @@ -817,8 +1136,6 @@ tb_invalidate_phys_page_range__locked(struct page_coll= ection *pages, /* * Invalidate all TBs which intersect with the target physical * address page @addr. 
- * - * Called with mmap_lock held for user-mode emulation */ void tb_invalidate_phys_page(tb_page_addr_t addr) { @@ -844,8 +1161,6 @@ void tb_invalidate_phys_page(tb_page_addr_t addr) * 'is_cpu_write_access' should be true if called from a real cpu write * access: the virtual CPU will exit the current TB if code is modified in= side * this TB. - * - * Called with mmap_lock held for user-mode emulation. */ void tb_invalidate_phys_range(tb_page_addr_t start, tb_page_addr_t end) { diff --git a/accel/tcg/translate-all.c b/accel/tcg/translate-all.c index 40f7b91c4b..51ac1f6c84 100644 --- a/accel/tcg/translate-all.c +++ b/accel/tcg/translate-all.c @@ -63,52 +63,6 @@ #include "tb-context.h" #include "internal.h" =20 -/* make various TB consistency checks */ - -/** - * struct page_entry - page descriptor entry - * @pd: pointer to the &struct PageDesc of the page this entry represe= nts - * @index: page index of the page - * @locked: whether the page is locked - * - * This struct helps us keep track of the locked state of a page, without - * bloating &struct PageDesc. - * - * A page lock protects accesses to all fields of &struct PageDesc. - * - * See also: &struct page_collection. - */ -struct page_entry { - PageDesc *pd; - tb_page_addr_t index; - bool locked; -}; - -/** - * struct page_collection - tracks a set of pages (i.e. &struct page_entry= 's) - * @tree: Binary search tree (BST) of the pages, with key =3D=3D page in= dex - * @max: Pointer to the page in @tree with the highest page index - * - * To avoid deadlock we lock pages in ascending order of page index. - * When operating on a set of pages, we need to keep track of them so that - * we can lock them in order and also unlock them later. For this we colle= ct - * pages (i.e. &struct page_entry's) in a binary search @tree. 
Given that = the - * @tree implementation we use does not provide an O(1) operation to obtai= n the - * highest-ranked element, we use @max to keep track of the inserted page - * with the highest index. This is valuable because if a page is not in - * the tree and its index is higher than @max's, then we can lock it - * without breaking the locking order rule. - * - * Note on naming: 'struct page_set' would be shorter, but we already have= a few - * page_set_*() helpers, so page_collection is used instead to avoid confu= sion. - * - * See also: page_collection_lock(). - */ -struct page_collection { - GTree *tree; - struct page_entry *max; -}; - /* Make sure all possible CPU event bits fit in tb->trace_vcpu_dstate */ QEMU_BUILD_BUG_ON(CPU_TRACE_DSTATE_MAX_EVENTS > sizeof_field(TranslationBlock, trace_vcpu_dstate) @@ -310,261 +264,6 @@ void page_init(void) page_table_config_init(); } =20 -/* In user-mode page locks aren't used; mmap_lock is enough */ -#ifdef CONFIG_USER_ONLY -struct page_collection * -page_collection_lock(tb_page_addr_t start, tb_page_addr_t end) -{ - return NULL; -} - -void page_collection_unlock(struct page_collection *set) -{ } -#else /* !CONFIG_USER_ONLY */ - -#ifdef CONFIG_DEBUG_TCG - -static __thread GHashTable *ht_pages_locked_debug; - -static void ht_pages_locked_debug_init(void) -{ - if (ht_pages_locked_debug) { - return; - } - ht_pages_locked_debug =3D g_hash_table_new(NULL, NULL); -} - -static bool page_is_locked(const PageDesc *pd) -{ - PageDesc *found; - - ht_pages_locked_debug_init(); - found =3D g_hash_table_lookup(ht_pages_locked_debug, pd); - return !!found; -} - -static void page_lock__debug(PageDesc *pd) -{ - ht_pages_locked_debug_init(); - g_assert(!page_is_locked(pd)); - g_hash_table_insert(ht_pages_locked_debug, pd, pd); -} - -static void page_unlock__debug(const PageDesc *pd) -{ - bool removed; - - ht_pages_locked_debug_init(); - g_assert(page_is_locked(pd)); - removed =3D g_hash_table_remove(ht_pages_locked_debug, pd); - 
g_assert(removed); -} - -void do_assert_page_locked(const PageDesc *pd, const char *file, int line) -{ - if (unlikely(!page_is_locked(pd))) { - error_report("assert_page_lock: PageDesc %p not locked @ %s:%d", - pd, file, line); - abort(); - } -} - -void assert_no_pages_locked(void) -{ - ht_pages_locked_debug_init(); - g_assert(g_hash_table_size(ht_pages_locked_debug) =3D=3D 0); -} - -#else /* !CONFIG_DEBUG_TCG */ - -static inline void page_lock__debug(const PageDesc *pd) { } -static inline void page_unlock__debug(const PageDesc *pd) { } - -#endif /* CONFIG_DEBUG_TCG */ - -void page_lock(PageDesc *pd) -{ - page_lock__debug(pd); - qemu_spin_lock(&pd->lock); -} - -void page_unlock(PageDesc *pd) -{ - qemu_spin_unlock(&pd->lock); - page_unlock__debug(pd); -} - -static inline struct page_entry * -page_entry_new(PageDesc *pd, tb_page_addr_t index) -{ - struct page_entry *pe =3D g_malloc(sizeof(*pe)); - - pe->index =3D index; - pe->pd =3D pd; - pe->locked =3D false; - return pe; -} - -static void page_entry_destroy(gpointer p) -{ - struct page_entry *pe =3D p; - - g_assert(pe->locked); - page_unlock(pe->pd); - g_free(pe); -} - -/* returns false on success */ -static bool page_entry_trylock(struct page_entry *pe) -{ - bool busy; - - busy =3D qemu_spin_trylock(&pe->pd->lock); - if (!busy) { - g_assert(!pe->locked); - pe->locked =3D true; - page_lock__debug(pe->pd); - } - return busy; -} - -static void do_page_entry_lock(struct page_entry *pe) -{ - page_lock(pe->pd); - g_assert(!pe->locked); - pe->locked =3D true; -} - -static gboolean page_entry_lock(gpointer key, gpointer value, gpointer dat= a) -{ - struct page_entry *pe =3D value; - - do_page_entry_lock(pe); - return FALSE; -} - -static gboolean page_entry_unlock(gpointer key, gpointer value, gpointer d= ata) -{ - struct page_entry *pe =3D value; - - if (pe->locked) { - pe->locked =3D false; - page_unlock(pe->pd); - } - return FALSE; -} - -/* - * Trylock a page, and if successful, add the page to a collection. 
- * Returns true ("busy") if the page could not be locked; false otherwise. - */ -static bool page_trylock_add(struct page_collection *set, tb_page_addr_t a= ddr) -{ - tb_page_addr_t index =3D addr >> TARGET_PAGE_BITS; - struct page_entry *pe; - PageDesc *pd; - - pe =3D g_tree_lookup(set->tree, &index); - if (pe) { - return false; - } - - pd =3D page_find(index); - if (pd =3D=3D NULL) { - return false; - } - - pe =3D page_entry_new(pd, index); - g_tree_insert(set->tree, &pe->index, pe); - - /* - * If this is either (1) the first insertion or (2) a page whose index - * is higher than any other so far, just lock the page and move on. - */ - if (set->max =3D=3D NULL || pe->index > set->max->index) { - set->max =3D pe; - do_page_entry_lock(pe); - return false; - } - /* - * Try to acquire out-of-order lock; if busy, return busy so that we a= cquire - * locks in order. - */ - return page_entry_trylock(pe); -} - -static gint tb_page_addr_cmp(gconstpointer ap, gconstpointer bp, gpointer = udata) -{ - tb_page_addr_t a =3D *(const tb_page_addr_t *)ap; - tb_page_addr_t b =3D *(const tb_page_addr_t *)bp; - - if (a =3D=3D b) { - return 0; - } else if (a < b) { - return -1; - } - return 1; -} - -/* - * Lock a range of pages ([@start,@end[) as well as the pages of all - * intersecting TBs. - * Locking order: acquire locks in ascending order of page index. 
- */ -struct page_collection * -page_collection_lock(tb_page_addr_t start, tb_page_addr_t end) -{ - struct page_collection *set =3D g_malloc(sizeof(*set)); - tb_page_addr_t index; - PageDesc *pd; - - start >>=3D TARGET_PAGE_BITS; - end >>=3D TARGET_PAGE_BITS; - g_assert(start <=3D end); - - set->tree =3D g_tree_new_full(tb_page_addr_cmp, NULL, NULL, - page_entry_destroy); - set->max =3D NULL; - assert_no_pages_locked(); - - retry: - g_tree_foreach(set->tree, page_entry_lock, NULL); - - for (index =3D start; index <=3D end; index++) { - TranslationBlock *tb; - PageForEachNext n; - - pd =3D page_find(index); - if (pd =3D=3D NULL) { - continue; - } - if (page_trylock_add(set, index << TARGET_PAGE_BITS)) { - g_tree_foreach(set->tree, page_entry_unlock, NULL); - goto retry; - } - assert_page_locked(pd); - PAGE_FOR_EACH_TB(unused, unused, pd, tb, n) { - if (page_trylock_add(set, tb_page_addr0(tb)) || - (tb_page_addr1(tb) !=3D -1 && - page_trylock_add(set, tb_page_addr1(tb)))) { - /* drop all locks, and reacquire in order */ - g_tree_foreach(set->tree, page_entry_unlock, NULL); - goto retry; - } - } - } - return set; -} - -void page_collection_unlock(struct page_collection *set) -{ - /* entries are unlocked and freed via page_entry_destroy */ - g_tree_destroy(set->tree); - g_free(set); -} - -#endif /* !CONFIG_USER_ONLY */ - /* * Isolate the portion of code gen which can setjmp/longjmp. * Return the size of the generated code, or negative on error. 
-- 
2.34.1

From nobody Mon May 20 19:51:38 2024
From: Richard Henderson
To: qemu-devel@nongnu.org
Cc: peter.maydell@linaro.org, Philippe Mathieu-Daudé, Alex Bennée
Subject: [PULL v2 10/14] accel/tcg: Restrict cpu_io_recompile() to system emulation
Date: Tue, 20 Dec 2022 21:03:09 -0800
Message-Id: <20221221050313.2950701-11-richard.henderson@linaro.org>
In-Reply-To: <20221221050313.2950701-1-richard.henderson@linaro.org>
References: <20221221050313.2950701-1-richard.henderson@linaro.org>

From: Philippe Mathieu-Daudé

Missed in commit 6526919224 ("accel/tcg: Restrict cpu_io_recompile() from
other accelerators").

Signed-off-by: Philippe Mathieu-Daudé
Reviewed-by: Alex Bennée
Message-Id: <20221209093649.43738-2-philmd@linaro.org>
Signed-off-by: Richard Henderson
---
 accel/tcg/internal.h | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/accel/tcg/internal.h b/accel/tcg/internal.h
index e1429a53ac..35419f3fe1 100644
--- a/accel/tcg/internal.h
+++ b/accel/tcg/internal.h
@@ -43,12 +43,12 @@ void tb_invalidate_phys_page_fast(struct page_collection *pages,
 struct page_collection *page_collection_lock(tb_page_addr_t start,
                                              tb_page_addr_t end);
 void page_collection_unlock(struct page_collection *set);
+G_NORETURN void cpu_io_recompile(CPUState *cpu, uintptr_t retaddr);
 #endif /* CONFIG_SOFTMMU */
 
 TranslationBlock *tb_gen_code(CPUState *cpu, target_ulong pc,
                               target_ulong cs_base, uint32_t flags,
                               int cflags);
-G_NORETURN void cpu_io_recompile(CPUState *cpu, uintptr_t retaddr);
 void page_init(void);
 void tb_htable_init(void);
 void tb_reset_jump(TranslationBlock *tb, int n);
-- 
2.34.1

From nobody Mon May 20 19:51:38 2024
From: Richard Henderson
To: qemu-devel@nongnu.org
Cc: peter.maydell@linaro.org,
 Philippe Mathieu-Daudé, Alex Bennée
Subject: [PULL v2 11/14] accel/tcg: Remove trace events from trace-root.h
Date: Tue, 20 Dec 2022 21:03:10 -0800
Message-Id: <20221221050313.2950701-12-richard.henderson@linaro.org>
In-Reply-To: <20221221050313.2950701-1-richard.henderson@linaro.org>

From: Philippe Mathieu-Daudé

Commit d9bb58e510 ("tcg: move tcg related files into accel/tcg/
subdirectory") introduced accel/tcg/trace-events, so we don't need
to use the root trace-events anymore.
Signed-off-by: Philippe Mathieu-Daudé
Reviewed-by: Alex Bennée
Message-Id: <20221209093649.43738-3-philmd@linaro.org>
Signed-off-by: Richard Henderson
---
 accel/tcg/cputlb.c     | 2 +-
 accel/tcg/trace-events | 4 ++++
 trace-events           | 4 ----
 3 files changed, 5 insertions(+), 5 deletions(-)

diff --git a/accel/tcg/cputlb.c b/accel/tcg/cputlb.c
index 6f1c00682b..ac459478f4 100644
--- a/accel/tcg/cputlb.c
+++ b/accel/tcg/cputlb.c
@@ -33,7 +33,7 @@
 #include "qemu/atomic.h"
 #include "qemu/atomic128.h"
 #include "exec/translate-all.h"
-#include "trace/trace-root.h"
+#include "trace.h"
 #include "tb-hash.h"
 #include "internal.h"
 #ifdef CONFIG_PLUGIN
diff --git a/accel/tcg/trace-events b/accel/tcg/trace-events
index 59eab96f26..4e9b450520 100644
--- a/accel/tcg/trace-events
+++ b/accel/tcg/trace-events
@@ -6,5 +6,9 @@ exec_tb(void *tb, uintptr_t pc) "tb:%p pc=0x%"PRIxPTR
 exec_tb_nocache(void *tb, uintptr_t pc) "tb:%p pc=0x%"PRIxPTR
 exec_tb_exit(void *last_tb, unsigned int flags) "tb:%p flags=0x%x"
 
+# cputlb.c
+memory_notdirty_write_access(uint64_t vaddr, uint64_t ram_addr, unsigned size) "0x%" PRIx64 " ram_addr 0x%" PRIx64 " size %u"
+memory_notdirty_set_dirty(uint64_t vaddr) "0x%" PRIx64
+
 # translate-all.c
 translate_block(void *tb, uintptr_t pc, const void *tb_code) "tb:%p, pc:0x%"PRIxPTR", tb_code:%p"
diff --git a/trace-events b/trace-events
index 035f3d570d..b6b84b175e 100644
--- a/trace-events
+++ b/trace-events
@@ -42,10 +42,6 @@ find_ram_offset(uint64_t size, uint64_t offset) "size: 0x%" PRIx64 " @ 0x%" PRIx
 find_ram_offset_loop(uint64_t size, uint64_t candidate, uint64_t offset, uint64_t next, uint64_t mingap) "trying size: 0x%" PRIx64 " @ 0x%" PRIx64 ", offset: 0x%" PRIx64" next: 0x%" PRIx64 " mingap: 0x%" PRIx64
 ram_block_discard_range(const char *rbname, void *hva, size_t length, bool need_madvise, bool need_fallocate, int ret) "%s@%p + 0x%zx: madvise: %d fallocate: %d ret: %d"
 
-# accel/tcg/cputlb.c
-memory_notdirty_write_access(uint64_t vaddr, uint64_t ram_addr, unsigned size) "0x%" PRIx64 " ram_addr 0x%" PRIx64 " size %u"
-memory_notdirty_set_dirty(uint64_t vaddr) "0x%" PRIx64
-
 # job.c
 job_state_transition(void *job, int ret, const char *legal, const char *s0, const char *s1) "job %p (ret: %d) attempting %s transition (%s-->%s)"
 job_apply_verb(void *job, const char *state, const char *verb, const char *legal) "job %p in state %s; applying verb %s (%s)"
-- 
2.34.1

From nobody Mon May 20 19:51:38 2024
From: Richard Henderson
To: qemu-devel@nongnu.org
Cc: peter.maydell@linaro.org,
 Philippe Mathieu-Daudé, Alex Bennée
Subject: [PULL v2 12/14] accel/tcg: Rename tb_invalidate_phys_page_fast{, __locked}()
Date: Tue, 20 Dec 2022 21:03:11 -0800
Message-Id: <20221221050313.2950701-13-richard.henderson@linaro.org>
In-Reply-To: <20221221050313.2950701-1-richard.henderson@linaro.org>

From: Philippe Mathieu-Daudé

Emphasize this function is called with pages locked.

Signed-off-by: Philippe Mathieu-Daudé
Reviewed-by: Alex Bennée
Message-Id: <20221209093649.43738-4-philmd@linaro.org>
[rth: Use "__locked" suffix, to match other instances.]
Signed-off-by: Richard Henderson
---
 accel/tcg/internal.h | 6 +++---
 accel/tcg/cputlb.c   | 2 +-
 accel/tcg/tb-maint.c | 6 +++---
 3 files changed, 7 insertions(+), 7 deletions(-)

diff --git a/accel/tcg/internal.h b/accel/tcg/internal.h
index 35419f3fe1..d10ab69ed0 100644
--- a/accel/tcg/internal.h
+++ b/accel/tcg/internal.h
@@ -37,9 +37,9 @@ void page_table_config_init(void);
 
 #ifdef CONFIG_SOFTMMU
 struct page_collection;
-void tb_invalidate_phys_page_fast(struct page_collection *pages,
-                                  tb_page_addr_t start, int len,
-                                  uintptr_t retaddr);
+void tb_invalidate_phys_page_fast__locked(struct page_collection *pages,
+                                          tb_page_addr_t start, int len,
+                                          uintptr_t retaddr);
 struct page_collection *page_collection_lock(tb_page_addr_t start,
                                              tb_page_addr_t end);
 void page_collection_unlock(struct page_collection *set);
diff --git a/accel/tcg/cputlb.c b/accel/tcg/cputlb.c
index ac459478f4..f7963d3af8 100644
--- a/accel/tcg/cputlb.c
+++ b/accel/tcg/cputlb.c
@@ -1510,7 +1510,7 @@ static void notdirty_write(CPUState *cpu, vaddr mem_vaddr, unsigned size,
     if (!cpu_physical_memory_get_dirty_flag(ram_addr, DIRTY_MEMORY_CODE)) {
         struct page_collection *pages
             = page_collection_lock(ram_addr, ram_addr + size);
-        tb_invalidate_phys_page_fast(pages, ram_addr, size, retaddr);
+        tb_invalidate_phys_page_fast__locked(pages, ram_addr, size, retaddr);
         page_collection_unlock(pages);
     }
 
diff --git a/accel/tcg/tb-maint.c b/accel/tcg/tb-maint.c
index 1676d359f2..8edfd910c4 100644
--- a/accel/tcg/tb-maint.c
+++ b/accel/tcg/tb-maint.c
@@ -1190,9 +1190,9 @@ void tb_invalidate_phys_range(tb_page_addr_t start, tb_page_addr_t end)
  *
  * Call with all @pages in the range [@start, @start + len[ locked.
  */
-void tb_invalidate_phys_page_fast(struct page_collection *pages,
-                                  tb_page_addr_t start, int len,
-                                  uintptr_t retaddr)
+void tb_invalidate_phys_page_fast__locked(struct page_collection *pages,
+                                          tb_page_addr_t start, int len,
+                                          uintptr_t retaddr)
 {
     PageDesc *p;
 
-- 
2.34.1

From nobody Mon May 20 19:51:38 2024
From: Richard Henderson
To: qemu-devel@nongnu.org
Cc: peter.maydell@linaro.org, Philippe Mathieu-Daudé, Alex Bennée
Subject: [PULL v2 13/14] accel/tcg: Factor tb_invalidate_phys_range_fast() out
Date: Tue, 20 Dec 2022 21:03:12 -0800
Message-Id: <20221221050313.2950701-14-richard.henderson@linaro.org>
In-Reply-To: <20221221050313.2950701-1-richard.henderson@linaro.org>
From: Philippe Mathieu-Daudé

Signed-off-by: Philippe Mathieu-Daudé
Reviewed-by: Alex Bennée
Message-Id: <20221209093649.43738-5-philmd@linaro.org>
Signed-off-by: Richard Henderson
---
 accel/tcg/internal.h |  3 +++
 accel/tcg/cputlb.c   |  5 +----
 accel/tcg/tb-maint.c | 21 +++++++++++++++++----
 3 files changed, 21 insertions(+), 8 deletions(-)

diff --git a/accel/tcg/internal.h b/accel/tcg/internal.h
index d10ab69ed0..8f8c44d06b 100644
--- a/accel/tcg/internal.h
+++ b/accel/tcg/internal.h
@@ -42,6 +42,9 @@ void tb_invalidate_phys_page_fast__locked(struct page_collection *pages,
                                           uintptr_t retaddr);
 struct page_collection *page_collection_lock(tb_page_addr_t start,
                                              tb_page_addr_t end);
+void tb_invalidate_phys_range_fast(ram_addr_t ram_addr,
+                                   unsigned size,
+                                   uintptr_t retaddr);
 void page_collection_unlock(struct page_collection *set);
 G_NORETURN void cpu_io_recompile(CPUState *cpu, uintptr_t retaddr);
 #endif /* CONFIG_SOFTMMU */
diff --git a/accel/tcg/cputlb.c b/accel/tcg/cputlb.c
index f7963d3af8..03674d598f 100644
--- a/accel/tcg/cputlb.c
+++ b/accel/tcg/cputlb.c
@@ -1508,10 +1508,7 @@ static void notdirty_write(CPUState *cpu, vaddr mem_vaddr, unsigned size,
     trace_memory_notdirty_write_access(mem_vaddr, ram_addr, size);
 
     if (!cpu_physical_memory_get_dirty_flag(ram_addr, DIRTY_MEMORY_CODE)) {
-        struct page_collection *pages
-            = page_collection_lock(ram_addr, ram_addr + size);
-        tb_invalidate_phys_page_fast__locked(pages, ram_addr, size, retaddr);
-        page_collection_unlock(pages);
+        tb_invalidate_phys_range_fast(ram_addr, size, retaddr);
     }
 
     /*
diff --git a/accel/tcg/tb-maint.c b/accel/tcg/tb-maint.c
index 8edfd910c4..d557013f00 100644
--- a/accel/tcg/tb-maint.c
+++ b/accel/tcg/tb-maint.c
@@ -1184,10 +1184,6 @@ void tb_invalidate_phys_range(tb_page_addr_t
start, tb_page_addr_t end)
 }
 
 /*
- * len must be <= 8 and start must be a multiple of len.
- * Called via softmmu_template.h when code areas are written to with
- * iothread mutex not held.
- *
  * Call with all @pages in the range [@start, @start + len[ locked.
  */
 void tb_invalidate_phys_page_fast__locked(struct page_collection *pages,
@@ -1205,4 +1201,21 @@ void tb_invalidate_phys_page_fast__locked(struct page_collection *pages,
     tb_invalidate_phys_page_range__locked(pages, p, start,
                                           start + len, retaddr);
 }
+
+/*
+ * len must be <= 8 and start must be a multiple of len.
+ * Called via softmmu_template.h when code areas are written to with
+ * iothread mutex not held.
+ */
+void tb_invalidate_phys_range_fast(ram_addr_t ram_addr,
+                                   unsigned size,
+                                   uintptr_t retaddr)
+{
+    struct page_collection *pages;
+
+    pages = page_collection_lock(ram_addr, ram_addr + size);
+    tb_invalidate_phys_page_fast__locked(pages, ram_addr, size, retaddr);
+    page_collection_unlock(pages);
+}
+
 #endif /* CONFIG_USER_ONLY */
-- 
2.34.1

From nobody Mon May 20 19:51:38 2024
From: Richard Henderson
To: qemu-devel@nongnu.org
Cc: peter.maydell@linaro.org,
 Philippe Mathieu-Daudé, Alex Bennée
Subject: [PULL v2 14/14] accel/tcg: Restrict page_collection structure to system TB maintainance
Date: Tue, 20 Dec 2022 21:03:13 -0800
Message-Id: <20221221050313.2950701-15-richard.henderson@linaro.org>
In-Reply-To: <20221221050313.2950701-1-richard.henderson@linaro.org>

From: Philippe Mathieu-Daudé

Only the system emulation part of TB maintenance uses the
page_collection structure. Restrict its declaration (and the
functions requiring it) to tb-maint.c.

Convert the 'len' argument of tb_invalidate_phys_page_fast__locked()
from signed to unsigned.
Signed-off-by: Philippe Mathieu-Daudé
Reviewed-by: Alex Bennée
Message-Id: <20221209093649.43738-6-philmd@linaro.org>
Signed-off-by: Richard Henderson
---
 accel/tcg/internal.h |  7 -------
 accel/tcg/tb-maint.c | 15 +++++++--------
 2 files changed, 7 insertions(+), 15 deletions(-)

diff --git a/accel/tcg/internal.h b/accel/tcg/internal.h
index 8f8c44d06b..6edff16fb0 100644
--- a/accel/tcg/internal.h
+++ b/accel/tcg/internal.h
@@ -36,16 +36,9 @@ void page_table_config_init(void);
 #endif
 
 #ifdef CONFIG_SOFTMMU
-struct page_collection;
-void tb_invalidate_phys_page_fast__locked(struct page_collection *pages,
-                                          tb_page_addr_t start, int len,
-                                          uintptr_t retaddr);
-struct page_collection *page_collection_lock(tb_page_addr_t start,
-                                             tb_page_addr_t end);
 void tb_invalidate_phys_range_fast(ram_addr_t ram_addr,
                                    unsigned size,
                                    uintptr_t retaddr);
-void page_collection_unlock(struct page_collection *set);
 G_NORETURN void cpu_io_recompile(CPUState *cpu, uintptr_t retaddr);
 #endif /* CONFIG_SOFTMMU */
 
diff --git a/accel/tcg/tb-maint.c b/accel/tcg/tb-maint.c
index d557013f00..1b8e860647 100644
--- a/accel/tcg/tb-maint.c
+++ b/accel/tcg/tb-maint.c
@@ -513,8 +513,8 @@ static gint tb_page_addr_cmp(gconstpointer ap, gconstpointer bp, gpointer udata)
  * intersecting TBs.
  * Locking order: acquire locks in ascending order of page index.
  */
-struct page_collection *
-page_collection_lock(tb_page_addr_t start, tb_page_addr_t end)
+static struct page_collection *page_collection_lock(tb_page_addr_t start,
+                                                    tb_page_addr_t end)
 {
     struct page_collection *set = g_malloc(sizeof(*set));
     tb_page_addr_t index;
@@ -558,7 +558,7 @@ page_collection_lock(tb_page_addr_t start, tb_page_addr_t end)
     return set;
 }
 
-void page_collection_unlock(struct page_collection *set)
+static void page_collection_unlock(struct page_collection *set)
 {
     /* entries are unlocked and freed via page_entry_destroy */
     g_tree_destroy(set->tree);
@@ -1186,9 +1186,9 @@ void tb_invalidate_phys_range(tb_page_addr_t start, tb_page_addr_t end)
 /*
  * Call with all @pages in the range [@start, @start + len[ locked.
  */
-void tb_invalidate_phys_page_fast__locked(struct page_collection *pages,
-                                          tb_page_addr_t start, int len,
-                                          uintptr_t retaddr)
+static void tb_invalidate_phys_page_fast__locked(struct page_collection *pages,
+                                                 tb_page_addr_t start,
+                                                 unsigned len, uintptr_t ra)
 {
     PageDesc *p;
 
@@ -1198,8 +1198,7 @@ void tb_invalidate_phys_page_fast__locked(struct page_collection *pages,
     }
 
     assert_page_locked(p);
-    tb_invalidate_phys_page_range__locked(pages, p, start, start + len,
-                                          retaddr);
+    tb_invalidate_phys_page_range__locked(pages, p, start, start + len, ra);
 }
 
 /*
-- 
2.34.1