From: Richard Henderson <richard.henderson@linaro.org>
To: qemu-devel@nongnu.org
Cc: alex.bennee@linaro.org, laurent@vivier.eu
Subject: [PATCH v2 1/7] util: Add interval-tree.c
Date: Thu, 27 Oct 2022 22:12:52 +1100
Message-Id: <20221027111258.348196-2-richard.henderson@linaro.org>
In-Reply-To: <20221027111258.348196-1-richard.henderson@linaro.org>
References: <20221027111258.348196-1-richard.henderson@linaro.org>

Copy and simplify the Linux kernel's interval_tree_generic.h,
instantiating for uint64_t.
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
---
 include/qemu/interval-tree.h    |  99 ++++
 tests/unit/test-interval-tree.c | 209 ++++++++
 util/interval-tree.c            | 882 ++++++++++++++++++++++++++++++++
 tests/unit/meson.build          |   1 +
 util/meson.build                |   1 +
 5 files changed, 1192 insertions(+)
 create mode 100644 include/qemu/interval-tree.h
 create mode 100644 tests/unit/test-interval-tree.c
 create mode 100644 util/interval-tree.c

diff --git a/include/qemu/interval-tree.h b/include/qemu/interval-tree.h
new file mode 100644
index 0000000000..25006debe8
--- /dev/null
+++ b/include/qemu/interval-tree.h
@@ -0,0 +1,99 @@
+/* SPDX-License-Identifier: GPL-2.0-or-later */
+/*
+ * Interval trees.
+ *
+ * Derived from include/linux/interval_tree.h and its dependencies.
+ */
+
+#ifndef QEMU_INTERVAL_TREE_H
+#define QEMU_INTERVAL_TREE_H
+
+/*
+ * For now, don't expose Linux Red-Black Trees separately, but retain the
+ * separate type definitions to keep the implementation sane, and allow
+ * the possibility of disentangling them later.
+ */
+typedef struct RBNode
+{
+    /* Encodes parent with color in the lsb. */
+    uintptr_t rb_parent_color;
+    struct RBNode *rb_right;
+    struct RBNode *rb_left;
+} RBNode;
+
+typedef struct RBRoot
+{
+    RBNode *rb_node;
+} RBRoot;
+
+typedef struct RBRootLeftCached {
+    RBRoot rb_root;
+    RBNode *rb_leftmost;
+} RBRootLeftCached;
+
+typedef struct IntervalTreeNode
+{
+    RBNode rb;
+
+    uint64_t start;    /* Start of interval */
+    uint64_t last;     /* Last location _in_ interval */
+    uint64_t subtree_last;
+} IntervalTreeNode;
+
+typedef RBRootLeftCached IntervalTreeRoot;
+
+/**
+ * interval_tree_is_empty
+ * @root: root of the tree.
+ *
+ * Returns true if the tree contains no nodes.
+ */
+static inline bool interval_tree_is_empty(const IntervalTreeRoot *root)
+{
+    return root->rb_root.rb_node == NULL;
+}
+
+/**
+ * interval_tree_insert
+ * @node: node to insert,
+ * @root: root of the tree.
+ *
+ * Insert @node into @root, and rebalance.
+ */
+void interval_tree_insert(IntervalTreeNode *node, IntervalTreeRoot *root);
+
+/**
+ * interval_tree_remove
+ * @node: node to remove,
+ * @root: root of the tree.
+ *
+ * Remove @node from @root, and rebalance.
+ */
+void interval_tree_remove(IntervalTreeNode *node, IntervalTreeRoot *root);
+
+/**
+ * interval_tree_iter_first:
+ * @root: root of the tree,
+ * @start, @last: the inclusive interval [start, last].
+ *
+ * Locate the "first" of a set of nodes within the tree at @root
+ * that overlap the interval, where "first" is sorted by start.
+ * Returns NULL if no overlap found.
+ */
+IntervalTreeNode *interval_tree_iter_first(IntervalTreeRoot *root,
+                                           uint64_t start, uint64_t last);
+
+/**
+ * interval_tree_iter_next:
+ * @node: previous search result
+ * @start, @last: the inclusive interval [start, last].
+ *
+ * Locate the "next" of a set of nodes within the tree that overlap the
+ * interval; @next is the result of a previous call to
+ * interval_tree_iter_{first,next}.  Returns NULL if @next was the last
+ * node in the set.
+ */
+IntervalTreeNode *interval_tree_iter_next(IntervalTreeNode *node,
+                                          uint64_t start, uint64_t last);
+
+#endif /* QEMU_INTERVAL_TREE_H */
diff --git a/tests/unit/test-interval-tree.c b/tests/unit/test-interval-tree.c
new file mode 100644
index 0000000000..119817a019
--- /dev/null
+++ b/tests/unit/test-interval-tree.c
@@ -0,0 +1,209 @@
+/*
+ * Test interval trees
+ *
+ * This work is licensed under the terms of the GNU LGPL, version 2 or later.
+ * See the COPYING.LIB file in the top-level directory.
+ *
+ */
+
+#include "qemu/osdep.h"
+#include "qemu/interval-tree.h"
+
+static IntervalTreeNode nodes[20];
+static IntervalTreeRoot root;
+
+static void rand_interval(IntervalTreeNode *n, uint64_t start, uint64_t last)
+{
+    gint32 s_ofs, l_ofs, l_max;
+
+    if (last - start > INT32_MAX) {
+        l_max = INT32_MAX;
+    } else {
+        l_max = last - start;
+    }
+    s_ofs = g_test_rand_int_range(0, l_max);
+    l_ofs = g_test_rand_int_range(s_ofs, l_max);
+
+    n->start = start + s_ofs;
+    n->last = start + l_ofs;
+}
+
+static void test_empty(void)
+{
+    g_assert(root.rb_root.rb_node == NULL);
+    g_assert(root.rb_leftmost == NULL);
+    g_assert(interval_tree_iter_first(&root, 0, UINT64_MAX) == NULL);
+}
+
+static void test_find_one_point(void)
+{
+    /* Create a tree of a single node, which is the point [1,1]. */
+    nodes[0].start = 1;
+    nodes[0].last = 1;
+
+    interval_tree_insert(&nodes[0], &root);
+
+    g_assert(interval_tree_iter_first(&root, 0, 9) == &nodes[0]);
+    g_assert(interval_tree_iter_next(&nodes[0], 0, 9) == NULL);
+    g_assert(interval_tree_iter_first(&root, 0, 0) == NULL);
+    g_assert(interval_tree_iter_next(&nodes[0], 0, 0) == NULL);
+    g_assert(interval_tree_iter_first(&root, 0, 1) == &nodes[0]);
+    g_assert(interval_tree_iter_first(&root, 1, 1) == &nodes[0]);
+    g_assert(interval_tree_iter_first(&root, 1, 2) == &nodes[0]);
+    g_assert(interval_tree_iter_first(&root, 2, 2) == NULL);
+
+    interval_tree_remove(&nodes[0], &root);
+    g_assert(root.rb_root.rb_node == NULL);
+    g_assert(root.rb_leftmost == NULL);
+}
+
+static void test_find_two_point(void)
+{
+    IntervalTreeNode *find0, *find1;
+
+    /* Create a tree of two nodes, which are both the point [1,1]. */
+    nodes[0].start = 1;
+    nodes[0].last = 1;
+    nodes[1] = nodes[0];
+
+    interval_tree_insert(&nodes[0], &root);
+    interval_tree_insert(&nodes[1], &root);
+
+    find0 = interval_tree_iter_first(&root, 0, 9);
+    g_assert(find0 == &nodes[0] || find0 == &nodes[1]);
+
+    find1 = interval_tree_iter_next(find0, 0, 9);
+    g_assert(find1 == &nodes[0] || find1 == &nodes[1]);
+    g_assert(find0 != find1);
+
+    interval_tree_remove(&nodes[1], &root);
+
+    g_assert(interval_tree_iter_first(&root, 0, 9) == &nodes[0]);
+    g_assert(interval_tree_iter_next(&nodes[0], 0, 9) == NULL);
+
+    interval_tree_remove(&nodes[0], &root);
+}
+
+static void test_find_one_range(void)
+{
+    /* Create a tree of a single node, which is the range [1,8]. */
+    nodes[0].start = 1;
+    nodes[0].last = 8;
+
+    interval_tree_insert(&nodes[0], &root);
+
+    g_assert(interval_tree_iter_first(&root, 0, 9) == &nodes[0]);
+    g_assert(interval_tree_iter_next(&nodes[0], 0, 9) == NULL);
+    g_assert(interval_tree_iter_first(&root, 0, 0) == NULL);
+    g_assert(interval_tree_iter_first(&root, 0, 1) == &nodes[0]);
+    g_assert(interval_tree_iter_first(&root, 1, 1) == &nodes[0]);
+    g_assert(interval_tree_iter_first(&root, 4, 6) == &nodes[0]);
+    g_assert(interval_tree_iter_first(&root, 8, 8) == &nodes[0]);
+    g_assert(interval_tree_iter_first(&root, 9, 9) == NULL);
+
+    interval_tree_remove(&nodes[0], &root);
+}
+
+static void test_find_one_range_many(void)
+{
+    int i;
+
+    /*
+     * Create a tree of many nodes in [0,99] and [200,299],
+     * but only one node with exactly [110,190].
+     */
+    nodes[0].start = 110;
+    nodes[0].last = 190;
+
+    for (i = 1; i < ARRAY_SIZE(nodes) / 2; ++i) {
+        rand_interval(&nodes[i], 0, 99);
+    }
+    for (; i < ARRAY_SIZE(nodes); ++i) {
+        rand_interval(&nodes[i], 200, 299);
+    }
+
+    for (i = 0; i < ARRAY_SIZE(nodes); ++i) {
+        interval_tree_insert(&nodes[i], &root);
+    }
+
+    /* Test that we find exactly the one node. */
+    g_assert(interval_tree_iter_first(&root, 100, 199) == &nodes[0]);
+    g_assert(interval_tree_iter_next(&nodes[0], 100, 199) == NULL);
+    g_assert(interval_tree_iter_first(&root, 100, 109) == NULL);
+    g_assert(interval_tree_iter_first(&root, 100, 110) == &nodes[0]);
+    g_assert(interval_tree_iter_first(&root, 111, 120) == &nodes[0]);
+    g_assert(interval_tree_iter_first(&root, 111, 199) == &nodes[0]);
+    g_assert(interval_tree_iter_first(&root, 190, 199) == &nodes[0]);
+    g_assert(interval_tree_iter_first(&root, 192, 199) == NULL);
+
+    /*
+     * Test that if there are multiple matches, we return the one
+     * with the minimal start.
+     */
+    g_assert(interval_tree_iter_first(&root, 100, 300) == &nodes[0]);
+
+    /* Test that we don't find it after it is removed. */
+    interval_tree_remove(&nodes[0], &root);
+    g_assert(interval_tree_iter_first(&root, 100, 199) == NULL);
+
+    for (i = 1; i < ARRAY_SIZE(nodes); ++i) {
+        interval_tree_remove(&nodes[i], &root);
+    }
+}
+
+static void test_find_many_range(void)
+{
+    IntervalTreeNode *find;
+    int i, n;
+
+    n = g_test_rand_int_range(ARRAY_SIZE(nodes) / 3, ARRAY_SIZE(nodes) / 2);
+
+    /*
+     * Create a fair few nodes in [2000,2999], with the others
+     * distributed around.
+     */
+    for (i = 0; i < n; ++i) {
+        rand_interval(&nodes[i], 2000, 2999);
+    }
+    for (; i < ARRAY_SIZE(nodes) * 2 / 3; ++i) {
+        rand_interval(&nodes[i], 1000, 1899);
+    }
+    for (; i < ARRAY_SIZE(nodes); ++i) {
+        rand_interval(&nodes[i], 3100, 3999);
+    }
+
+    for (i = 0; i < ARRAY_SIZE(nodes); ++i) {
+        interval_tree_insert(&nodes[i], &root);
+    }
+
+    /* Test that we find all of the nodes. */
+    find = interval_tree_iter_first(&root, 2000, 2999);
+    for (i = 0; find != NULL; i++) {
+        find = interval_tree_iter_next(find, 2000, 2999);
+    }
+    g_assert_cmpint(i, ==, n);
+
+    g_assert(interval_tree_iter_first(&root, 0, 999) == NULL);
+    g_assert(interval_tree_iter_first(&root, 1900, 1999) == NULL);
+    g_assert(interval_tree_iter_first(&root, 3000, 3099) == NULL);
+    g_assert(interval_tree_iter_first(&root, 4000, UINT64_MAX) == NULL);
+
+    for (i = 0; i < ARRAY_SIZE(nodes); ++i) {
+        interval_tree_remove(&nodes[i], &root);
+    }
+}
+
+int main(int argc, char **argv)
+{
+    g_test_init(&argc, &argv, NULL);
+
+    g_test_add_func("/interval-tree/empty", test_empty);
+    g_test_add_func("/interval-tree/find-one-point", test_find_one_point);
+    g_test_add_func("/interval-tree/find-two-point", test_find_two_point);
+    g_test_add_func("/interval-tree/find-one-range", test_find_one_range);
+    g_test_add_func("/interval-tree/find-one-range-many",
+                    test_find_one_range_many);
+    g_test_add_func("/interval-tree/find-many-range", test_find_many_range);
+
+    return g_test_run();
+}
diff --git a/util/interval-tree.c b/util/interval-tree.c
new file mode 100644
index 0000000000..4c0baf108f
--- /dev/null
+++ b/util/interval-tree.c
@@ -0,0 +1,882 @@
+/* SPDX-License-Identifier: GPL-2.0-or-later */
+
+#include "qemu/osdep.h"
+#include "qemu/interval-tree.h"
+#include "qemu/atomic.h"
+
+/*
+ * Red Black Trees.
+ *
+ * For now, don't expose Linux Red-Black Trees separately, but retain the
+ * separate type definitions to keep the implementation sane, and allow
+ * the possibility of separating them later.
+ *
+ * Derived from include/linux/rbtree_augmented.h and its dependencies.
+ */
+
+/*
+ * red-black trees properties:  https://en.wikipedia.org/wiki/Rbtree
+ *
+ *  1) A node is either red or black
+ *  2) The root is black
+ *  3) All leaves (NULL) are black
+ *  4) Both children of every red node are black
+ *  5) Every simple path from root to leaves contains the same number
+ *     of black nodes.
+ *
+ *  4 and 5 give the O(log n) guarantee, since 4 implies you cannot have two
+ *  consecutive red nodes in a path and every red node is therefore followed by
+ *  a black. So if B is the number of black nodes on every simple path (as per
+ *  5), then the longest possible path due to 4 is 2B.
+ *
+ *  We shall indicate color with case, where black nodes are uppercase and red
+ *  nodes will be lowercase. Unknown color nodes shall be drawn as red within
+ *  parentheses and have some accompanying text comment.
+ *
+ * Notes on lockless lookups:
+ *
+ * All stores to the tree structure (rb_left and rb_right) must be done using
+ * WRITE_ONCE [qatomic_set for QEMU]. And we must not inadvertently cause
+ * (temporary) loops in the tree structure as seen in program order.
+ *
+ * These two requirements will allow lockless iteration of the tree -- not
+ * correct iteration mind you, tree rotations are not atomic so a lookup might
+ * miss entire subtrees.
+ *
+ * But they do guarantee that any such traversal will only see valid elements
+ * and that it will indeed complete -- does not get stuck in a loop.
+ *
+ * It also guarantees that if the lookup returns an element it is the 'correct'
+ * one. But not returning an element does _NOT_ mean it's not present.
+ *
+ * NOTE:
+ *
+ * Stores to __rb_parent_color are not important for simple lookups so those
+ * are left undone as of now. Nor did I check for loops involving parent
+ * pointers.
+ */
+
+typedef enum RBColor
+{
+    RB_RED,
+    RB_BLACK,
+} RBColor;
+
+typedef struct RBAugmentCallbacks {
+    void (*propagate)(RBNode *node, RBNode *stop);
+    void (*copy)(RBNode *old, RBNode *new);
+    void (*rotate)(RBNode *old, RBNode *new);
+} RBAugmentCallbacks;
+
+static inline RBNode *rb_parent(const RBNode *n)
+{
+    return (RBNode *)(n->rb_parent_color & ~1);
+}
+
+static inline RBNode *rb_red_parent(const RBNode *n)
+{
+    return (RBNode *)n->rb_parent_color;
+}
+
+static inline RBColor pc_color(uintptr_t pc)
+{
+    return (RBColor)(pc & 1);
+}
+
+static inline bool pc_is_red(uintptr_t pc)
+{
+    return pc_color(pc) == RB_RED;
+}
+
+static inline bool pc_is_black(uintptr_t pc)
+{
+    return !pc_is_red(pc);
+}
+
+static inline RBColor rb_color(const RBNode *n)
+{
+    return pc_color(n->rb_parent_color);
+}
+
+static inline bool rb_is_red(const RBNode *n)
+{
+    return pc_is_red(n->rb_parent_color);
+}
+
+static inline bool rb_is_black(const RBNode *n)
+{
+    return pc_is_black(n->rb_parent_color);
+}
+
+static inline void rb_set_black(RBNode *n)
+{
+    n->rb_parent_color |= RB_BLACK;
+}
+
+static inline void rb_set_parent_color(RBNode *n, RBNode *p, RBColor color)
+{
+    n->rb_parent_color = (uintptr_t)p | color;
+}
+
+static inline void rb_set_parent(RBNode *n, RBNode *p)
+{
+    rb_set_parent_color(n, p, rb_color(n));
+}
+
+static inline void rb_link_node(RBNode *node, RBNode *parent, RBNode **rb_link)
+{
+    node->rb_parent_color = (uintptr_t)parent;
+    node->rb_left = node->rb_right = NULL;
+
+    qatomic_set(rb_link, node);
+}
+
+static RBNode *rb_next(RBNode *node)
+{
+    RBNode *parent;
+
+    /* OMIT: if empty node, return null. */
+
+    /*
+     * If we have a right-hand child, go down and then left as far as we can.
+     */
+    if (node->rb_right) {
+        node = node->rb_right;
+        while (node->rb_left) {
+            node = node->rb_left;
+        }
+        return node;
+    }
+
+    /*
+     * No right-hand children.
Everything down and left is smaller than us, + * so any 'next' node must be in the general direction of our parent. + * Go up the tree; any time the ancestor is a right-hand child of its + * parent, keep going up. First time it's a left-hand child of its + * parent, said parent is our 'next' node. + */ + while ((parent =3D rb_parent(node)) && node =3D=3D parent->rb_right) { + node =3D parent; + } + + return parent; +} + +static inline void rb_change_child(RBNode *old, RBNode *new, + RBNode *parent, RBRoot *root) +{ + if (!parent) { + qatomic_set(&root->rb_node, new); + } else if (parent->rb_left =3D=3D old) { + qatomic_set(&parent->rb_left, new); + } else { + qatomic_set(&parent->rb_right, new); + } +} + +static inline void rb_rotate_set_parents(RBNode *old, RBNode *new, + RBRoot *root, RBColor color) +{ + RBNode *parent =3D rb_parent(old); + + new->rb_parent_color =3D old->rb_parent_color; + rb_set_parent_color(old, new, color); + rb_change_child(old, new, parent, root); +} + +static void rb_insert_augmented(RBNode *node, RBRoot *root, + const RBAugmentCallbacks *augment) +{ + RBNode *parent =3D rb_red_parent(node), *gparent, *tmp; + + while (true) { + /* + * Loop invariant: node is red. + */ + if (unlikely(!parent)) { + /* + * The inserted node is root. Either this is the first node, or + * we recursed at Case 1 below and are no longer violating 4). + */ + rb_set_parent_color(node, NULL, RB_BLACK); + break; + } + + /* + * If there is a black parent, we are done. Otherwise, take some + * corrective action as, per 4), we don't want a red root or two + * consecutive red nodes. + */ + if (rb_is_black(parent)) { + break; + } + + gparent =3D rb_red_parent(parent); + + tmp =3D gparent->rb_right; + if (parent !=3D tmp) { /* parent =3D=3D gparent->rb_left */ + if (tmp && rb_is_red(tmp)) { + /* + * Case 1 - node's uncle is red (color flips). 
+ * + * G g + * / \ / \ + * p u --> P U + * / / + * n n + * + * However, since g's parent might be red, and 4) does not + * allow this, we need to recurse at g. + */ + rb_set_parent_color(tmp, gparent, RB_BLACK); + rb_set_parent_color(parent, gparent, RB_BLACK); + node =3D gparent; + parent =3D rb_parent(node); + rb_set_parent_color(node, parent, RB_RED); + continue; + } + + tmp =3D parent->rb_right; + if (node =3D=3D tmp) { + /* + * Case 2 - node's uncle is black and node is + * the parent's right child (left rotate at parent). + * + * G G + * / \ / \ + * p U --> n U + * \ / + * n p + * + * This still leaves us in violation of 4), the + * continuation into Case 3 will fix that. + */ + tmp =3D node->rb_left; + qatomic_set(&parent->rb_right, tmp); + qatomic_set(&node->rb_left, parent); + if (tmp) { + rb_set_parent_color(tmp, parent, RB_BLACK); + } + rb_set_parent_color(parent, node, RB_RED); + augment->rotate(parent, node); + parent =3D node; + tmp =3D node->rb_right; + } + + /* + * Case 3 - node's uncle is black and node is + * the parent's left child (right rotate at gparent). 
+ * + * G P + * / \ / \ + * p U --> n g + * / \ + * n U + */ + qatomic_set(&gparent->rb_left, tmp); /* =3D=3D parent->rb_righ= t */ + qatomic_set(&parent->rb_right, gparent); + if (tmp) { + rb_set_parent_color(tmp, gparent, RB_BLACK); + } + rb_rotate_set_parents(gparent, parent, root, RB_RED); + augment->rotate(gparent, parent); + break; + } else { + tmp =3D gparent->rb_left; + if (tmp && rb_is_red(tmp)) { + /* Case 1 - color flips */ + rb_set_parent_color(tmp, gparent, RB_BLACK); + rb_set_parent_color(parent, gparent, RB_BLACK); + node =3D gparent; + parent =3D rb_parent(node); + rb_set_parent_color(node, parent, RB_RED); + continue; + } + + tmp =3D parent->rb_left; + if (node =3D=3D tmp) { + /* Case 2 - right rotate at parent */ + tmp =3D node->rb_right; + qatomic_set(&parent->rb_left, tmp); + qatomic_set(&node->rb_right, parent); + if (tmp) { + rb_set_parent_color(tmp, parent, RB_BLACK); + } + rb_set_parent_color(parent, node, RB_RED); + augment->rotate(parent, node); + parent =3D node; + tmp =3D node->rb_left; + } + + /* Case 3 - left rotate at gparent */ + qatomic_set(&gparent->rb_right, tmp); /* =3D=3D parent->rb_lef= t */ + qatomic_set(&parent->rb_left, gparent); + if (tmp) { + rb_set_parent_color(tmp, gparent, RB_BLACK); + } + rb_rotate_set_parents(gparent, parent, root, RB_RED); + augment->rotate(gparent, parent); + break; + } + } +} + +static void rb_insert_augmented_cached(RBNode *node, + RBRootLeftCached *root, bool newlef= t, + const RBAugmentCallbacks *augment) +{ + if (newleft) { + root->rb_leftmost =3D node; + } + rb_insert_augmented(node, &root->rb_root, augment); +} + +static void rb_erase_color(RBNode *parent, RBRoot *root, + const RBAugmentCallbacks *augment) +{ + RBNode *node =3D NULL, *sibling, *tmp1, *tmp2; + + while (true) { + /* + * Loop invariants: + * - node is black (or NULL on first iteration) + * - node is not the root (parent is not NULL) + * - All leaf paths going through parent and node have a + * black node count that is 1 lower 
than other leaf paths. + */ + sibling =3D parent->rb_right; + if (node !=3D sibling) { /* node =3D=3D parent->rb_left */ + if (rb_is_red(sibling)) { + /* + * Case 1 - left rotate at parent + * + * P S + * / \ / \=20 + * N s --> p Sr + * / \ / \=20 + * Sl Sr N Sl + */ + tmp1 =3D sibling->rb_left; + qatomic_set(&parent->rb_right, tmp1); + qatomic_set(&sibling->rb_left, parent); + rb_set_parent_color(tmp1, parent, RB_BLACK); + rb_rotate_set_parents(parent, sibling, root, RB_RED); + augment->rotate(parent, sibling); + sibling =3D tmp1; + } + tmp1 =3D sibling->rb_right; + if (!tmp1 || rb_is_black(tmp1)) { + tmp2 =3D sibling->rb_left; + if (!tmp2 || rb_is_black(tmp2)) { + /* + * Case 2 - sibling color flip + * (p could be either color here) + * + * (p) (p) + * / \ / \=20 + * N S --> N s + * / \ / \=20 + * Sl Sr Sl Sr + * + * This leaves us violating 5) which + * can be fixed by flipping p to black + * if it was red, or by recursing at p. + * p is red when coming from Case 1. + */ + rb_set_parent_color(sibling, parent, RB_RED); + if (rb_is_red(parent)) { + rb_set_black(parent); + } else { + node =3D parent; + parent =3D rb_parent(node); + if (parent) { + continue; + } + } + break; + } + /* + * Case 3 - right rotate at sibling + * (p could be either color here) + * + * (p) (p) + * / \ / \ + * N S --> N sl + * / \ \ + * sl Sr S + * \ + * Sr + * + * Note: p might be red, and then bot + * p and sl are red after rotation (which + * breaks property 4). 
This is fixed in + * Case 4 (in rb_rotate_set_parents() + * which set sl the color of p + * and set p RB_BLACK) + * + * (p) (sl) + * / \ / \ + * N sl --> P S + * \ / \ + * S N Sr + * \ + * Sr + */ + tmp1 =3D tmp2->rb_right; + qatomic_set(&sibling->rb_left, tmp1); + qatomic_set(&tmp2->rb_right, sibling); + qatomic_set(&parent->rb_right, tmp2); + if (tmp1) { + rb_set_parent_color(tmp1, sibling, RB_BLACK); + } + augment->rotate(sibling, tmp2); + tmp1 =3D sibling; + sibling =3D tmp2; + } + /* + * Case 4 - left rotate at parent + color flips + * (p and sl could be either color here. + * After rotation, p becomes black, s acquires + * p's color, and sl keeps its color) + * + * (p) (s) + * / \ / \ + * N S --> P Sr + * / \ / \ + * (sl) sr N (sl) + */ + tmp2 =3D sibling->rb_left; + qatomic_set(&parent->rb_right, tmp2); + qatomic_set(&sibling->rb_left, parent); + rb_set_parent_color(tmp1, sibling, RB_BLACK); + if (tmp2) { + rb_set_parent(tmp2, parent); + } + rb_rotate_set_parents(parent, sibling, root, RB_BLACK); + augment->rotate(parent, sibling); + break; + } else { + sibling =3D parent->rb_left; + if (rb_is_red(sibling)) { + /* Case 1 - right rotate at parent */ + tmp1 =3D sibling->rb_right; + qatomic_set(&parent->rb_left, tmp1); + qatomic_set(&sibling->rb_right, parent); + rb_set_parent_color(tmp1, parent, RB_BLACK); + rb_rotate_set_parents(parent, sibling, root, RB_RED); + augment->rotate(parent, sibling); + sibling =3D tmp1; + } + tmp1 =3D sibling->rb_left; + if (!tmp1 || rb_is_black(tmp1)) { + tmp2 =3D sibling->rb_right; + if (!tmp2 || rb_is_black(tmp2)) { + /* Case 2 - sibling color flip */ + rb_set_parent_color(sibling, parent, RB_RED); + if (rb_is_red(parent)) { + rb_set_black(parent); + } else { + node =3D parent; + parent =3D rb_parent(node); + if (parent) { + continue; + } + } + break; + } + /* Case 3 - left rotate at sibling */ + tmp1 =3D tmp2->rb_left; + qatomic_set(&sibling->rb_right, tmp1); + qatomic_set(&tmp2->rb_left, sibling); + 
qatomic_set(&parent->rb_left, tmp2); + if (tmp1) { + rb_set_parent_color(tmp1, sibling, RB_BLACK); + } + augment->rotate(sibling, tmp2); + tmp1 =3D sibling; + sibling =3D tmp2; + } + /* Case 4 - right rotate at parent + color flips */ + tmp2 =3D sibling->rb_right; + qatomic_set(&parent->rb_left, tmp2); + qatomic_set(&sibling->rb_right, parent); + rb_set_parent_color(tmp1, sibling, RB_BLACK); + if (tmp2) { + rb_set_parent(tmp2, parent); + } + rb_rotate_set_parents(parent, sibling, root, RB_BLACK); + augment->rotate(parent, sibling); + break; + } + } +} + +static void rb_erase_augmented(RBNode *node, RBRoot *root, + const RBAugmentCallbacks *augment) +{ + RBNode *child =3D node->rb_right; + RBNode *tmp =3D node->rb_left; + RBNode *parent, *rebalance; + uintptr_t pc; + + if (!tmp) { + /* + * Case 1: node to erase has no more than 1 child (easy!) + * + * Note that if there is one child it must be red due to 5) + * and node must be black due to 4). We adjust colors locally + * so as to bypass rb_erase_color() later on. + */ + pc =3D node->rb_parent_color; + parent =3D rb_parent(node); + rb_change_child(node, child, parent, root); + if (child) { + child->rb_parent_color =3D pc; + rebalance =3D NULL; + } else { + rebalance =3D pc_is_black(pc) ? 
parent : NULL; + } + tmp =3D parent; + } else if (!child) { + /* Still case 1, but this time the child is node->rb_left */ + pc =3D node->rb_parent_color; + parent =3D rb_parent(node); + tmp->rb_parent_color =3D pc; + rb_change_child(node, tmp, parent, root); + rebalance =3D NULL; + tmp =3D parent; + } else { + RBNode *successor =3D child, *child2; + tmp =3D child->rb_left; + if (!tmp) { + /* + * Case 2: node's successor is its right child + * + * (n) (s) + * / \ / \ + * (x) (s) -> (x) (c) + * \ + * (c) + */ + parent =3D successor; + child2 =3D successor->rb_right; + + augment->copy(node, successor); + } else { + /* + * Case 3: node's successor is leftmost under + * node's right child subtree + * + * (n) (s) + * / \ / \ + * (x) (y) -> (x) (y) + * / / + * (p) (p) + * / / + * (s) (c) + * \ + * (c) + */ + do { + parent =3D successor; + successor =3D tmp; + tmp =3D tmp->rb_left; + } while (tmp); + child2 =3D successor->rb_right; + qatomic_set(&parent->rb_left, child2); + qatomic_set(&successor->rb_right, child); + rb_set_parent(child, successor); + + augment->copy(node, successor); + augment->propagate(parent, successor); + } + + tmp =3D node->rb_left; + qatomic_set(&successor->rb_left, tmp); + rb_set_parent(tmp, successor); + + pc =3D node->rb_parent_color; + tmp =3D rb_parent(node); + rb_change_child(node, successor, tmp, root); + + if (child2) { + rb_set_parent_color(child2, parent, RB_BLACK); + rebalance =3D NULL; + } else { + rebalance =3D rb_is_black(successor) ? parent : NULL; + } + successor->rb_parent_color =3D pc; + tmp =3D successor; + } + + augment->propagate(tmp, NULL); + + if (rebalance) { + rb_erase_color(rebalance, root, augment); + } +} + +static void rb_erase_augmented_cached(RBNode *node, RBRootLeftCached *root, + const RBAugmentCallbacks *augment) +{ + if (root->rb_leftmost =3D=3D node) { + root->rb_leftmost =3D rb_next(node); + } + rb_erase_augmented(node, &root->rb_root, augment); +} + + +/* + * Interval trees. 
+ * + * Derived from lib/interval_tree.c and its dependencies, + * especially include/linux/interval_tree_generic.h. + */ + +#define rb_to_itree(N) container_of(N, IntervalTreeNode, rb) + +static bool interval_tree_compute_max(IntervalTreeNode *node, bool exit) +{ + IntervalTreeNode *child; + uint64_t max =3D node->last; + + if (node->rb.rb_left) { + child =3D rb_to_itree(node->rb.rb_left); + if (child->subtree_last > max) { + max =3D child->subtree_last; + } + } + if (node->rb.rb_right) { + child =3D rb_to_itree(node->rb.rb_right); + if (child->subtree_last > max) { + max =3D child->subtree_last; + } + } + if (exit && node->subtree_last =3D=3D max) { + return true; + } + node->subtree_last =3D max; + return false; +} + +static void interval_tree_propagate(RBNode *rb, RBNode *stop) +{ + while (rb !=3D stop) { + IntervalTreeNode *node =3D rb_to_itree(rb); + if (interval_tree_compute_max(node, true)) { + break; + } + rb =3D rb_parent(&node->rb); + } +} + +static void interval_tree_copy(RBNode *rb_old, RBNode *rb_new) +{ + IntervalTreeNode *old =3D rb_to_itree(rb_old); + IntervalTreeNode *new =3D rb_to_itree(rb_new); + + new->subtree_last =3D old->subtree_last; +} + +static void interval_tree_rotate(RBNode *rb_old, RBNode *rb_new) +{ + IntervalTreeNode *old =3D rb_to_itree(rb_old); + IntervalTreeNode *new =3D rb_to_itree(rb_new); + + new->subtree_last =3D old->subtree_last; + interval_tree_compute_max(old, false); +} + +static const RBAugmentCallbacks interval_tree_augment =3D { + .propagate =3D interval_tree_propagate, + .copy =3D interval_tree_copy, + .rotate =3D interval_tree_rotate, +}; + +/* Insert / remove interval nodes from the tree */ +void interval_tree_insert(IntervalTreeNode *node, IntervalTreeRoot *root) +{ + RBNode **link =3D &root->rb_root.rb_node, *rb_parent =3D NULL; + uint64_t start =3D node->start, last =3D node->last; + IntervalTreeNode *parent; + bool leftmost =3D true; + + while (*link) { + rb_parent =3D *link; + parent =3D 
rb_to_itree(rb_parent); + + if (parent->subtree_last < last) { + parent->subtree_last =3D last; + } + if (start < parent->start) { + link =3D &parent->rb.rb_left; + } else { + link =3D &parent->rb.rb_right; + leftmost =3D false; + } + } + + node->subtree_last =3D last; + rb_link_node(&node->rb, rb_parent, link); + rb_insert_augmented_cached(&node->rb, root, leftmost, + &interval_tree_augment); +} + +void interval_tree_remove(IntervalTreeNode *node, IntervalTreeRoot *root) +{ + rb_erase_augmented_cached(&node->rb, root, &interval_tree_augment); +} + +/* + * Iterate over intervals intersecting [start;last] + * + * Note that a node's interval intersects [start;last] iff: + * Cond1: node->start <=3D last + * and + * Cond2: start <=3D node->last + */ + +static IntervalTreeNode *interval_tree_subtree_search(IntervalTreeNode *no= de, + uint64_t start, + uint64_t last) +{ + while (true) { + /* + * Loop invariant: start <=3D node->subtree_last + * (Cond2 is satisfied by one of the subtree nodes) + */ + if (node->rb.rb_left) { + IntervalTreeNode *left =3D rb_to_itree(node->rb.rb_left); + + if (start <=3D left->subtree_last) { + /* + * Some nodes in left subtree satisfy Cond2. + * Iterate to find the leftmost such node N. + * If it also satisfies Cond1, that's the + * match we are looking for. Otherwise, there + * is no matching interval as nodes to the + * right of N can't satisfy Cond1 either. 
+ */ + node =3D left; + continue; + } + } + if (node->start <=3D last) { /* Cond1 */ + if (start <=3D node->last) { /* Cond2 */ + return node; /* node is leftmost match */ + } + if (node->rb.rb_right) { + node =3D rb_to_itree(node->rb.rb_right); + if (start <=3D node->subtree_last) { + continue; + } + } + } + return NULL; /* no match */ + } +} + +IntervalTreeNode *interval_tree_iter_first(IntervalTreeRoot *root, + uint64_t start, uint64_t last) +{ + IntervalTreeNode *node, *leftmost; + + if (!root->rb_root.rb_node) { + return NULL; + } + + /* + * Fastpath range intersection/overlap between A: [a0, a1] and + * B: [b0, b1] is given by: + * + * a0 <=3D b1 && b0 <=3D a1 + * + * ... where A holds the lock range and B holds the smallest + * 'start' and largest 'last' in the tree. For the latter, we + * rely on the root node, which, by the augmented interval tree + * property, holds the largest value in its last-in-subtree. + * This allows mitigating some of the tree walk overhead + * for non-intersecting ranges, maintained and consulted in O(1).
+ */ + node =3D rb_to_itree(root->rb_root.rb_node); + if (node->subtree_last < start) { + return NULL; + } + + leftmost =3D rb_to_itree(root->rb_leftmost); + if (leftmost->start > last) { + return NULL; + } + + return interval_tree_subtree_search(node, start, last); +} + +IntervalTreeNode *interval_tree_iter_next(IntervalTreeNode *node, + uint64_t start, uint64_t last) +{ + RBNode *rb =3D node->rb.rb_right, *prev; + + while (true) { + /* + * Loop invariants: + * Cond1: node->start <=3D last + * rb =3D=3D node->rb.rb_right + * + * First, search right subtree if suitable + */ + if (rb) { + IntervalTreeNode *right =3D rb_to_itree(rb); + + if (start <=3D right->subtree_last) { + return interval_tree_subtree_search(right, start, last); + } + } + + /* Move up the tree until we come from a node's left child */ + do { + rb =3D rb_parent(&node->rb); + if (!rb) { + return NULL; + } + prev =3D &node->rb; + node =3D rb_to_itree(rb); + rb =3D node->rb.rb_right; + } while (prev =3D=3D rb); + + /* Check if the node intersects [start;last] */ + if (last < node->start) { /* !Cond1 */ + return NULL; + } + if (start <=3D node->last) { /* Cond2 */ + return node; + } + } +} + +/* Occasionally useful for calling from within the debugger. */ +#if 0 +static void debug_interval_tree_int(IntervalTreeNode *node, + const char *dir, int level) +{ + printf("%4d %*s %s [%" PRIu64 ",%" PRIu64 "] subtree_last:%" PRIu64 "\= n", + level, level + 1, dir, rb_is_red(&node->rb) ? 
"r" : "b", + node->start, node->last, node->subtree_last); + + if (node->rb.rb_left) { + debug_interval_tree_int(rb_to_itree(node->rb.rb_left), "<", level = + 1); + } + if (node->rb.rb_right) { + debug_interval_tree_int(rb_to_itree(node->rb.rb_right), ">", level= + 1); + } +} + +void debug_interval_tree(IntervalTreeNode *node); +void debug_interval_tree(IntervalTreeNode *node) +{ + if (node) { + debug_interval_tree_int(node, "*", 0); + } else { + printf("null\n"); + } +} +#endif diff --git a/tests/unit/meson.build b/tests/unit/meson.build index b497a41378..ffa444f432 100644 --- a/tests/unit/meson.build +++ b/tests/unit/meson.build @@ -47,6 +47,7 @@ tests =3D { 'ptimer-test': ['ptimer-test-stubs.c', meson.project_source_root() / 'hw= /core/ptimer.c'], 'test-qapi-util': [], 'test-smp-parse': [qom, meson.project_source_root() / 'hw/core/machine-s= mp.c'], + 'test-interval-tree': [], } =20 if have_system or have_tools diff --git a/util/meson.build b/util/meson.build index 5e282130df..46a0d017a6 100644 --- a/util/meson.build +++ b/util/meson.build @@ -55,6 +55,7 @@ util_ss.add(files('guest-random.c')) util_ss.add(files('yank.c')) util_ss.add(files('int128.c')) util_ss.add(files('memalign.c')) +util_ss.add(files('interval-tree.c')) =20 if have_user util_ss.add(files('selfmap.c')) --=20 2.34.1 From nobody Thu Oct 31 23:50:58 2024 Delivered-To: importer@patchew.org Authentication-Results: mx.zohomail.com; dkim=pass; spf=pass (zohomail.com: domain of gnu.org designates 209.51.188.17 as permitted sender) smtp.mailfrom=qemu-devel-bounces+importer=patchew.org@nongnu.org; dmarc=pass(p=none dis=none) header.from=linaro.org ARC-Seal: i=1; a=rsa-sha256; t=1666869248; cv=none; d=zohomail.com; s=zohoarc; b=dSpX2ROjfGsfydjx3pHwRr3BO3aupd47Vko+c8U9vOwaFrbnsJfHQrxeR4odJKMmqlx3zC1+q0z0Ykt5eJ16vvEFwL95NWGyccneW0Uf9sNlsvoVzLXTfk6jfuWPWxcYZE9/kAD5t1DZ8/fOsOAUrmr3UdSFgwDWtxFAxx1GHRo= ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=zohomail.com; s=zohoarc; t=1666869248; 
From: Richard Henderson <richard.henderson@linaro.org>
To: qemu-devel@nongnu.org
Cc: alex.bennee@linaro.org, laurent@vivier.eu
Subject: [PATCH v2 2/7] accel/tcg: Use interval tree for TBs in user-only mode
Date: Thu, 27 Oct 2022 22:12:53 +1100
Message-Id: <20221027111258.348196-3-richard.henderson@linaro.org>
X-Mailer: git-send-email 2.34.1
In-Reply-To: <20221027111258.348196-1-richard.henderson@linaro.org>
References: <20221027111258.348196-1-richard.henderson@linaro.org>
MIME-Version: 1.0
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: quoted-printable

Begin weaning user-only away from PageDesc. Since, for user-only, all TB (and page) manipulation is done with a single mutex, and there is no virtual/physical discontinuity to split a TB across discontinuous pages, place all of the TBs into a single IntervalTree. This makes it trivial to find all of the TBs intersecting a range. Retain the existing PageDesc + linked list implementation for system mode. Move the portion of the implementation that overlaps the new user-only code behind the common ifdef.
Reviewed-by: Alex Benn=C3=A9e Signed-off-by: Richard Henderson --- accel/tcg/internal.h | 16 +- include/exec/exec-all.h | 43 ++++- accel/tcg/tb-maint.c | 388 ++++++++++++++++++++++---------------- accel/tcg/translate-all.c | 4 +- 4 files changed, 280 insertions(+), 171 deletions(-) diff --git a/accel/tcg/internal.h b/accel/tcg/internal.h index 1227bb69bd..1bd5a02911 100644 --- a/accel/tcg/internal.h +++ b/accel/tcg/internal.h @@ -24,14 +24,13 @@ #endif =20 typedef struct PageDesc { - /* list of TBs intersecting this ram page */ - uintptr_t first_tb; #ifdef CONFIG_USER_ONLY unsigned long flags; void *target_data; -#endif -#ifdef CONFIG_SOFTMMU +#else QemuSpin lock; + /* list of TBs intersecting this ram page */ + uintptr_t first_tb; #endif } PageDesc; =20 @@ -69,9 +68,6 @@ static inline PageDesc *page_find(tb_page_addr_t index) tb; tb =3D (TranslationBlock *)tb->field[n], n =3D (uintptr_t)tb = & 1, \ tb =3D (TranslationBlock *)((uintptr_t)tb & ~1)) =20 -#define PAGE_FOR_EACH_TB(pagedesc, tb, n) \ - TB_FOR_EACH_TAGGED((pagedesc)->first_tb, tb, n, page_next) - #define TB_FOR_EACH_JMP(head_tb, tb, n) \ TB_FOR_EACH_TAGGED((head_tb)->jmp_list_head, tb, n, jmp_list_next) =20 @@ -89,6 +85,12 @@ void do_assert_page_locked(const PageDesc *pd, const cha= r *file, int line); #endif void page_lock(PageDesc *pd); void page_unlock(PageDesc *pd); + +/* TODO: For now, still shared with translate-all.c for system mode. 
*/ +typedef int PageForEachNext; +#define PAGE_FOR_EACH_TB(start, end, pagedesc, tb, n) \ + TB_FOR_EACH_TAGGED((pagedesc)->first_tb, tb, n, page_next) + #endif #if !defined(CONFIG_USER_ONLY) && defined(CONFIG_DEBUG_TCG) void assert_no_pages_locked(void); diff --git a/include/exec/exec-all.h b/include/exec/exec-all.h index e948992a80..11fd635fdc 100644 --- a/include/exec/exec-all.h +++ b/include/exec/exec-all.h @@ -24,6 +24,7 @@ #ifdef CONFIG_TCG #include "exec/cpu_ldst.h" #endif +#include "qemu/interval-tree.h" =20 /* allow to see translation results - the slowdown should be negligible, s= o we leave it */ #define DEBUG_DISAS @@ -549,11 +550,20 @@ struct TranslationBlock { =20 struct tb_tc tc; =20 - /* first and second physical page containing code. The lower bit - of the pointer tells the index in page_next[]. - The list is protected by the TB's page('s) lock(s) */ + /* + * Track tb_page_addr_t intervals that intersect this TB. + * For user-only, the virtual addresses are always contiguous, + * and we use a unified interval tree. For system, we use a + * linked list headed in each PageDesc. Within the list, the lsb + * of the previous pointer tells the index of page_next[], and the + * list is protected by the PageDesc lock(s). + */ +#ifdef CONFIG_USER_ONLY + IntervalTreeNode itree; +#else uintptr_t page_next[2]; tb_page_addr_t page_addr[2]; +#endif =20 /* jmp_lock placed here to fill a 4-byte hole. Its documentation is be= low */ QemuSpin jmp_lock; @@ -609,24 +619,51 @@ static inline uint32_t tb_cflags(const TranslationBlo= ck *tb) =20 static inline tb_page_addr_t tb_page_addr0(const TranslationBlock *tb) { +#ifdef CONFIG_USER_ONLY + return tb->itree.start; +#else return tb->page_addr[0]; +#endif } =20 static inline tb_page_addr_t tb_page_addr1(const TranslationBlock *tb) { +#ifdef CONFIG_USER_ONLY + tb_page_addr_t next =3D tb->itree.last & TARGET_PAGE_MASK; + return next =3D=3D (tb->itree.start & TARGET_PAGE_MASK) ? 
-1 : next; +#else return tb->page_addr[1]; +#endif } =20 static inline void tb_set_page_addr0(TranslationBlock *tb, tb_page_addr_t addr) { +#ifdef CONFIG_USER_ONLY + tb->itree.start =3D addr; + /* + * To begin, we record an interval of one byte. When the translation + * loop encounters a second page, the interval will be extended to + * include the first byte of the second page, which is sufficient to + * allow tb_page_addr1() above to work properly. The final corrected + * interval will be set by tb_page_add() from tb->size before the + * node is added to the interval tree. + */ + tb->itree.last =3D addr; +#else tb->page_addr[0] =3D addr; +#endif } =20 static inline void tb_set_page_addr1(TranslationBlock *tb, tb_page_addr_t addr) { +#ifdef CONFIG_USER_ONLY + /* Extend the interval to the first byte of the second page. See abov= e. */ + tb->itree.last =3D addr; +#else tb->page_addr[1] =3D addr; +#endif } =20 /* current cflags for hashing/comparison */ diff --git a/accel/tcg/tb-maint.c b/accel/tcg/tb-maint.c index c8e921089d..14e8e47a6a 100644 --- a/accel/tcg/tb-maint.c +++ b/accel/tcg/tb-maint.c @@ -18,6 +18,7 @@ */ =20 #include "qemu/osdep.h" +#include "qemu/interval-tree.h" #include "exec/cputlb.h" #include "exec/log.h" #include "exec/exec-all.h" @@ -50,6 +51,75 @@ void tb_htable_init(void) qht_init(&tb_ctx.htable, tb_cmp, CODE_GEN_HTABLE_SIZE, mode); } =20 +#ifdef CONFIG_USER_ONLY +/* + * For user-only, since we are protecting all of memory with a single lock, + * and because the two pages of a TranslationBlock are always contiguous, + * use a single data structure to record all TranslationBlocks. + */ +static IntervalTreeRoot tb_root; + +static void page_flush_tb(void) +{ + assert_memory_lock(); + memset(&tb_root, 0, sizeof(tb_root)); +} + +/* Call with mmap_lock held. 
*/ +static void tb_page_add(TranslationBlock *tb, PageDesc *p1, PageDesc *p2) +{ + /* translator_loop() must have made all TB pages non-writable */ + assert(!(p1->flags & PAGE_WRITE)); + if (p2) { + assert(!(p2->flags & PAGE_WRITE)); + } + + assert_memory_lock(); + + tb->itree.last =3D tb->itree.start + tb->size - 1; + interval_tree_insert(&tb->itree, &tb_root); +} + +/* Call with mmap_lock held. */ +static void tb_page_remove(TranslationBlock *tb) +{ + assert_memory_lock(); + interval_tree_remove(&tb->itree, &tb_root); +} + +/* TODO: For now, still shared with translate-all.c for system mode. */ +#define PAGE_FOR_EACH_TB(start, end, pagedesc, T, N) \ + for (T =3D foreach_tb_first(start, end), \ + N =3D foreach_tb_next(T, start, end); \ + T !=3D NULL; \ + T =3D N, N =3D foreach_tb_next(N, start, end)) + +typedef TranslationBlock *PageForEachNext; + +static PageForEachNext foreach_tb_first(tb_page_addr_t start, + tb_page_addr_t end) +{ + IntervalTreeNode *n =3D interval_tree_iter_first(&tb_root, start, end = - 1); + return n ? container_of(n, TranslationBlock, itree) : NULL; +} + +static PageForEachNext foreach_tb_next(PageForEachNext tb, + tb_page_addr_t start, + tb_page_addr_t end) +{ + IntervalTreeNode *n; + + if (tb) { + n =3D interval_tree_iter_next(&tb->itree, start, end - 1); + if (n) { + return container_of(n, TranslationBlock, itree); + } + } + return NULL; +} + +#else + /* Set to NULL all the 'first_tb' fields in all PageDescs. */ static void page_flush_tb_1(int level, void **lp) { @@ -84,6 +154,70 @@ static void page_flush_tb(void) } } =20 +/* + * Add the tb in the target page and protect it if necessary. + * Called with @p->lock held. + */ +static void tb_page_add(TranslationBlock *tb, PageDesc *p1, PageDesc *p2) +{ + /* + * If some code is already present, then the pages are already + * protected. So we handle the case where only the first TB is + * allocated in a physical page. 
+ */ + assert_page_locked(p1); + if (p1->first_tb) { + tb->page_next[0] =3D p1->first_tb; + } else { + tlb_protect_code(tb->page_addr[0] & TARGET_PAGE_MASK); + tb->page_next[0] =3D 0; + } + p1->first_tb =3D (uintptr_t)tb | 0; + + if (unlikely(p2)) { + assert_page_locked(p2); + if (p2->first_tb) { + tb->page_next[1] =3D p2->first_tb; + } else { + tlb_protect_code(tb->page_addr[1] & TARGET_PAGE_MASK); + tb->page_next[1] =3D 0; + } + p2->first_tb =3D (uintptr_t)tb | 1; + } +} + +static void tb_page_remove1(TranslationBlock *tb, PageDesc *pd) +{ + TranslationBlock *i; + PageForEachNext n; + uintptr_t *pprev; + + assert_page_locked(pd); + pprev =3D &pd->first_tb; + PAGE_FOR_EACH_TB(unused, unused, pd, i, n) { + if (i =3D=3D tb) { + *pprev =3D i->page_next[n]; + return; + } + pprev =3D &i->page_next[n]; + } + g_assert_not_reached(); +} + +static void tb_page_remove(TranslationBlock *tb) +{ + PageDesc *pd; + + pd =3D page_find(tb->page_addr[0] >> TARGET_PAGE_BITS); + tb_page_remove1(tb, pd); + if (unlikely(tb->page_addr[1] !=3D -1)) { + pd =3D page_find(tb->page_addr[1] >> TARGET_PAGE_BITS); + tb_page_remove1(tb, pd); + } +} + +#endif + /* flush all the translation blocks */ static void do_tb_flush(CPUState *cpu, run_on_cpu_data tb_flush_count) { @@ -128,28 +262,6 @@ void tb_flush(CPUState *cpu) } } =20 -/* - * user-mode: call with mmap_lock held - * !user-mode: call with @pd->lock held - */ -static inline void tb_page_remove(PageDesc *pd, TranslationBlock *tb) -{ - TranslationBlock *tb1; - uintptr_t *pprev; - unsigned int n1; - - assert_page_locked(pd); - pprev =3D &pd->first_tb; - PAGE_FOR_EACH_TB(pd, tb1, n1) { - if (tb1 =3D=3D tb) { - *pprev =3D tb1->page_next[n1]; - return; - } - pprev =3D &tb1->page_next[n1]; - } - g_assert_not_reached(); -} - /* remove @orig from its @n_orig-th jump list */ static inline void tb_remove_from_jmp_list(TranslationBlock *orig, int n_o= rig) { @@ -255,7 +367,6 @@ static void tb_jmp_cache_inval_tb(TranslationBlock *tb) */ static void 
do_tb_phys_invalidate(TranslationBlock *tb, bool rm_from_page_= list) { - PageDesc *p; uint32_t h; tb_page_addr_t phys_pc; uint32_t orig_cflags =3D tb_cflags(tb); @@ -277,13 +388,7 @@ static void do_tb_phys_invalidate(TranslationBlock *tb= , bool rm_from_page_list) =20 /* remove the TB from the page list */ if (rm_from_page_list) { - p =3D page_find(phys_pc >> TARGET_PAGE_BITS); - tb_page_remove(p, tb); - phys_pc =3D tb_page_addr1(tb); - if (phys_pc !=3D -1) { - p =3D page_find(phys_pc >> TARGET_PAGE_BITS); - tb_page_remove(p, tb); - } + tb_page_remove(tb); } =20 /* remove the TB from the hash list */ @@ -387,41 +492,6 @@ void tb_phys_invalidate(TranslationBlock *tb, tb_page_= addr_t page_addr) } } =20 -/* - * Add the tb in the target page and protect it if necessary. - * Called with mmap_lock held for user-mode emulation. - * Called with @p->lock held in !user-mode. - */ -static inline void tb_page_add(PageDesc *p, TranslationBlock *tb, - unsigned int n, tb_page_addr_t page_addr) -{ -#ifndef CONFIG_USER_ONLY - bool page_already_protected; -#endif - - assert_page_locked(p); - - tb->page_next[n] =3D p->first_tb; -#ifndef CONFIG_USER_ONLY - page_already_protected =3D p->first_tb !=3D (uintptr_t)NULL; -#endif - p->first_tb =3D (uintptr_t)tb | n; - -#if defined(CONFIG_USER_ONLY) - /* translator_loop() must have made all TB pages non-writable */ - assert(!(p->flags & PAGE_WRITE)); -#else - /* - * If some code is already present, then the pages are already - * protected. So we handle the case where only the first TB is - * allocated in a physical page. - */ - if (!page_already_protected) { - tlb_protect_code(page_addr); - } -#endif -} - /* * Add a new TB and link it to the physical page tables. phys_page2 is * (-1) to indicate that only one page contains the TB. @@ -453,10 +523,7 @@ TranslationBlock *tb_link_page(TranslationBlock *tb, t= b_page_addr_t phys_pc, * we can only insert TBs that are fully initialized. 
*/ page_lock_pair(&p, phys_pc, &p2, phys_page2, true); - tb_page_add(p, tb, 0, phys_pc); - if (p2) { - tb_page_add(p2, tb, 1, phys_page2); - } + tb_page_add(tb, p, p2); =20 /* add in the hash table */ h =3D tb_hash_func(phys_pc, (TARGET_TB_PCREL ? 0 : tb_pc(tb)), @@ -465,10 +532,7 @@ TranslationBlock *tb_link_page(TranslationBlock *tb, t= b_page_addr_t phys_pc, =20 /* remove TB from the page(s) if we couldn't insert it */ if (unlikely(existing_tb)) { - tb_page_remove(p, tb); - if (p2) { - tb_page_remove(p2, tb); - } + tb_page_remove(tb); tb =3D existing_tb; } =20 @@ -479,6 +543,87 @@ TranslationBlock *tb_link_page(TranslationBlock *tb, t= b_page_addr_t phys_pc, return tb; } =20 +#ifdef CONFIG_USER_ONLY +/* + * Invalidate all TBs which intersect with the target address range. + * Called with mmap_lock held for user-mode emulation. + * NOTE: this function must not be called while a TB is running. + */ +void tb_invalidate_phys_range(tb_page_addr_t start, tb_page_addr_t end) +{ + TranslationBlock *tb; + PageForEachNext n; + + assert_memory_lock(); + + PAGE_FOR_EACH_TB(start, end, unused, tb, n) { + tb_phys_invalidate__locked(tb); + } +} + +/* + * Invalidate all TBs which intersect with the target address page @addr. + * Called with mmap_lock held for user-mode emulation + * NOTE: this function must not be called while a TB is running. + */ +void tb_invalidate_phys_page(tb_page_addr_t addr) +{ + tb_page_addr_t start, end; + + start =3D addr & TARGET_PAGE_MASK; + end =3D start + TARGET_PAGE_SIZE; + tb_invalidate_phys_range(start, end); +} + +/* + * Called with mmap_lock held. If pc is not 0 then it indicates the + * host PC of the faulting store instruction that caused this invalidate. + * Returns true if the caller needs to abort execution of the current + * TB (because it was modified by this store and the guest CPU has + * precise-SMC semantics). 
+ */ +bool tb_invalidate_phys_page_unwind(tb_page_addr_t addr, uintptr_t pc) +{ + assert(pc !=3D 0); +#ifdef TARGET_HAS_PRECISE_SMC + assert_memory_lock(); + { + TranslationBlock *current_tb =3D tcg_tb_lookup(pc); + bool current_tb_modified =3D false; + TranslationBlock *tb; + PageForEachNext n; + + addr &=3D TARGET_PAGE_MASK; + + PAGE_FOR_EACH_TB(addr, addr + TARGET_PAGE_SIZE, unused, tb, n) { + if (current_tb =3D=3D tb && + (tb_cflags(current_tb) & CF_COUNT_MASK) !=3D 1) { + /* + * If we are modifying the current TB, we must stop its + * execution. We could be more precise by checking that + * the modification is after the current PC, but it would + * require a specialized function to partially restore + * the CPU state. + */ + current_tb_modified =3D true; + cpu_restore_state_from_tb(current_cpu, current_tb, pc, tru= e); + } + tb_phys_invalidate__locked(tb); + } + + if (current_tb_modified) { + /* Force execution of one insn next time. */ + CPUState *cpu =3D current_cpu; + cpu->cflags_next_tb =3D 1 | CF_NOIRQ | curr_cflags(current_cpu= ); + return true; + } + } +#else + tb_invalidate_phys_page(addr); +#endif /* TARGET_HAS_PRECISE_SMC */ + return false; +} +#else /* * @p must be non-NULL. * user-mode: call with mmap_lock held. @@ -492,22 +637,17 @@ tb_invalidate_phys_page_range__locked(struct page_col= lection *pages, { TranslationBlock *tb; tb_page_addr_t tb_start, tb_end; - int n; + PageForEachNext n; #ifdef TARGET_HAS_PRECISE_SMC - CPUState *cpu =3D current_cpu; - bool current_tb_not_found =3D retaddr !=3D 0; bool current_tb_modified =3D false; - TranslationBlock *current_tb =3D NULL; + TranslationBlock *current_tb =3D retaddr ? tcg_tb_lookup(retaddr) : NU= LL; #endif /* TARGET_HAS_PRECISE_SMC */ =20 - assert_page_locked(p); - /* * We remove all the TBs in the range [start, end[. 
* XXX: see if in some cases it could be faster to invalidate all the = code */ - PAGE_FOR_EACH_TB(p, tb, n) { - assert_page_locked(p); + PAGE_FOR_EACH_TB(start, end, p, tb, n) { /* NOTE: this is subtle as a TB may span two physical pages */ if (n =3D=3D 0) { /* NOTE: tb_end may be after the end of the page, but @@ -521,11 +661,6 @@ tb_invalidate_phys_page_range__locked(struct page_coll= ection *pages, } if (!(tb_end <=3D start || tb_start >=3D end)) { #ifdef TARGET_HAS_PRECISE_SMC - if (current_tb_not_found) { - current_tb_not_found =3D false; - /* now we have a real cpu fault */ - current_tb =3D tcg_tb_lookup(retaddr); - } if (current_tb =3D=3D tb && (tb_cflags(current_tb) & CF_COUNT_MASK) !=3D 1) { /* @@ -536,25 +671,26 @@ tb_invalidate_phys_page_range__locked(struct page_col= lection *pages, * restore the CPU state. */ current_tb_modified =3D true; - cpu_restore_state_from_tb(cpu, current_tb, retaddr, true); + cpu_restore_state_from_tb(current_cpu, current_tb, + retaddr, true); } #endif /* TARGET_HAS_PRECISE_SMC */ tb_phys_invalidate__locked(tb); } } -#if !defined(CONFIG_USER_ONLY) + /* if no code remaining, no need to continue to use slow writes */ if (!p->first_tb) { tlb_unprotect_code(start); } -#endif + #ifdef TARGET_HAS_PRECISE_SMC if (current_tb_modified) { page_collection_unlock(pages); /* Force execution of one insn next time. 
*/ - cpu->cflags_next_tb =3D 1 | CF_NOIRQ | curr_cflags(cpu); + current_cpu->cflags_next_tb =3D 1 | CF_NOIRQ | curr_cflags(current= _cpu); mmap_unlock(); - cpu_loop_exit_noexc(cpu); + cpu_loop_exit_noexc(current_cpu); } #endif } @@ -571,8 +707,6 @@ void tb_invalidate_phys_page(tb_page_addr_t addr) tb_page_addr_t start, end; PageDesc *p; =20 - assert_memory_lock(); - p =3D page_find(addr >> TARGET_PAGE_BITS); if (p =3D=3D NULL) { return; @@ -599,8 +733,6 @@ void tb_invalidate_phys_range(tb_page_addr_t start, tb_= page_addr_t end) struct page_collection *pages; tb_page_addr_t next; =20 - assert_memory_lock(); - pages =3D page_collection_lock(start, end); for (next =3D (start & TARGET_PAGE_MASK) + TARGET_PAGE_SIZE; start < end; @@ -611,12 +743,12 @@ void tb_invalidate_phys_range(tb_page_addr_t start, t= b_page_addr_t end) if (pd =3D=3D NULL) { continue; } + assert_page_locked(pd); tb_invalidate_phys_page_range__locked(pages, pd, start, bound, 0); } page_collection_unlock(pages); } =20 -#ifdef CONFIG_SOFTMMU /* * len must be <=3D 8 and start must be a multiple of len. * Called via softmmu_template.h when code areas are written to with @@ -630,8 +762,6 @@ void tb_invalidate_phys_page_fast(struct page_collectio= n *pages, { PageDesc *p; =20 - assert_memory_lock(); - p =3D page_find(start >> TARGET_PAGE_BITS); if (!p) { return; @@ -641,64 +771,4 @@ void tb_invalidate_phys_page_fast(struct page_collecti= on *pages, tb_invalidate_phys_page_range__locked(pages, p, start, start + len, retaddr); } -#else -/* - * Called with mmap_lock held. If pc is not 0 then it indicates the - * host PC of the faulting store instruction that caused this invalidate. - * Returns true if the caller needs to abort execution of the current - * TB (because it was modified by this store and the guest CPU has - * precise-SMC semantics). 
- */
-bool tb_invalidate_phys_page_unwind(tb_page_addr_t addr, uintptr_t pc)
-{
-    TranslationBlock *tb;
-    PageDesc *p;
-    int n;
-#ifdef TARGET_HAS_PRECISE_SMC
-    TranslationBlock *current_tb = NULL;
-    CPUState *cpu = current_cpu;
-    bool current_tb_modified = false;
-#endif
-
-    assert_memory_lock();
-
-    addr &= TARGET_PAGE_MASK;
-    p = page_find(addr >> TARGET_PAGE_BITS);
-    if (!p) {
-        return false;
-    }
-
-#ifdef TARGET_HAS_PRECISE_SMC
-    if (p->first_tb && pc != 0) {
-        current_tb = tcg_tb_lookup(pc);
-    }
-#endif
-    assert_page_locked(p);
-    PAGE_FOR_EACH_TB(p, tb, n) {
-#ifdef TARGET_HAS_PRECISE_SMC
-        if (current_tb == tb &&
-            (tb_cflags(current_tb) & CF_COUNT_MASK) != 1) {
-            /*
-             * If we are modifying the current TB, we must stop its execution.
-             * We could be more precise by checking that the modification is
-             * after the current PC, but it would require a specialized
-             * function to partially restore the CPU state.
-             */
-            current_tb_modified = true;
-            cpu_restore_state_from_tb(cpu, current_tb, pc, true);
-        }
-#endif /* TARGET_HAS_PRECISE_SMC */
-        tb_phys_invalidate(tb, addr);
-    }
-    p->first_tb = (uintptr_t)NULL;
-#ifdef TARGET_HAS_PRECISE_SMC
-    if (current_tb_modified) {
-        /* Force execution of one insn next time.  */
-        cpu->cflags_next_tb = 1 | CF_NOIRQ | curr_cflags(cpu);
-        return true;
-    }
-#endif
-
-    return false;
-}
-#endif
+#endif /* CONFIG_USER_ONLY */
diff --git a/accel/tcg/translate-all.c b/accel/tcg/translate-all.c
index f185356a36..dc7973eb3b 100644
--- a/accel/tcg/translate-all.c
+++ b/accel/tcg/translate-all.c
@@ -694,7 +694,7 @@ page_collection_lock(tb_page_addr_t start, tb_page_addr_t end)
 
     for (index = start; index <= end; index++) {
         TranslationBlock *tb;
-        int n;
+        PageForEachNext n;
 
         pd = page_find(index);
         if (pd == NULL) {
@@ -705,7 +705,7 @@ page_collection_lock(tb_page_addr_t start, tb_page_addr_t end)
             goto retry;
         }
         assert_page_locked(pd);
-        PAGE_FOR_EACH_TB(pd, tb, n) {
+        PAGE_FOR_EACH_TB(unused, unused, pd, tb, n) {
             if (page_trylock_add(set, tb_page_addr0(tb)) ||
                 (tb_page_addr1(tb) != -1 &&
                  page_trylock_add(set, tb_page_addr1(tb)))) {
-- 
2.34.1

From nobody Thu Oct 31 23:50:58 2024
From: Richard Henderson
To: qemu-devel@nongnu.org
Cc: alex.bennee@linaro.org, laurent@vivier.eu
Subject: [PATCH v2 3/7] accel/tcg: Use interval tree for TARGET_PAGE_DATA_SIZE
Date: Thu, 27 Oct 2022 22:12:54 +1100
Message-Id: <20221027111258.348196-4-richard.henderson@linaro.org>
In-Reply-To: <20221027111258.348196-1-richard.henderson@linaro.org>
References: <20221027111258.348196-1-richard.henderson@linaro.org>

Continue weaning user-only away from PageDesc.

Use an interval tree to record target data.
Chunk the data, to minimize allocation overhead.

Signed-off-by: Richard Henderson
---
 accel/tcg/internal.h  |  1 -
 accel/tcg/user-exec.c | 99 ++++++++++++++++++++++++++++++++-----------
 2 files changed, 74 insertions(+), 26 deletions(-)

diff --git a/accel/tcg/internal.h b/accel/tcg/internal.h
index 1bd5a02911..8731dc52e2 100644
--- a/accel/tcg/internal.h
+++ b/accel/tcg/internal.h
@@ -26,7 +26,6 @@
 typedef struct PageDesc {
 #ifdef CONFIG_USER_ONLY
     unsigned long flags;
-    void *target_data;
 #else
     QemuSpin lock;
     /* list of TBs intersecting this ram page */
diff --git a/accel/tcg/user-exec.c b/accel/tcg/user-exec.c
index fb7d6ee9e9..42a04bdb21 100644
--- a/accel/tcg/user-exec.c
+++ b/accel/tcg/user-exec.c
@@ -210,47 +210,96 @@ tb_page_addr_t get_page_addr_code_hostp(CPUArchState *env, target_ulong addr,
     return addr;
 }
 
+#ifdef TARGET_PAGE_DATA_SIZE
+/*
+ * Allocate chunks of target data together.  For the only current user,
+ * if we allocate one hunk per page, we have overhead of 40/128 or 40%.
+ * Therefore, allocate memory for 64 pages at a time for overhead < 1%.
+ */
+#define TPD_PAGES 64
+#define TBD_MASK (TARGET_PAGE_MASK * TPD_PAGES)
+
+typedef struct TargetPageDataNode {
+    IntervalTreeNode itree;
+    char data[TPD_PAGES][TARGET_PAGE_DATA_SIZE] __attribute__((aligned));
+} TargetPageDataNode;
+
+static IntervalTreeRoot targetdata_root;
+
 void page_reset_target_data(target_ulong start, target_ulong end)
 {
-#ifdef TARGET_PAGE_DATA_SIZE
-    target_ulong addr, len;
+    IntervalTreeNode *n, *next;
+    target_ulong last;
 
-    /*
-     * This function should never be called with addresses outside the
-     * guest address space.  If this assert fires, it probably indicates
-     * a missing call to h2g_valid.
-     */
-    assert(end - 1 <= GUEST_ADDR_MAX);
-    assert(start < end);
     assert_memory_lock();
 
     start = start & TARGET_PAGE_MASK;
-    end = TARGET_PAGE_ALIGN(end);
+    last = TARGET_PAGE_ALIGN(end) - 1;
 
-    for (addr = start, len = end - start;
-         len != 0;
-         len -= TARGET_PAGE_SIZE, addr += TARGET_PAGE_SIZE) {
-        PageDesc *p = page_find_alloc(addr >> TARGET_PAGE_BITS, 1);
+    for (n = interval_tree_iter_first(&targetdata_root, start, last),
+         next = n ? interval_tree_iter_next(n, start, last) : NULL;
+         n != NULL;
+         n = next,
+         next = next ? interval_tree_iter_next(n, start, last) : NULL) {
+        target_ulong n_start, n_last, p_ofs, p_len;
+        TargetPageDataNode *t;
 
-        g_free(p->target_data);
-        p->target_data = NULL;
+        if (n->start >= start && n->last <= last) {
+            interval_tree_remove(n, &targetdata_root);
+            g_free(n);
+            continue;
+        }
+
+        if (n->start < start) {
+            n_start = start;
+            p_ofs = (start - n->start) >> TARGET_PAGE_BITS;
+        } else {
+            n_start = n->start;
+            p_ofs = 0;
+        }
+        n_last = MIN(last, n->last);
+        p_len = (n_last + 1 - n_start) >> TARGET_PAGE_BITS;
+
+        t = container_of(n, TargetPageDataNode, itree);
+        memset(t->data[p_ofs], 0, p_len * TARGET_PAGE_DATA_SIZE);
     }
-#endif
 }
 
-#ifdef TARGET_PAGE_DATA_SIZE
 void *page_get_target_data(target_ulong address)
 {
-    PageDesc *p = page_find(address >> TARGET_PAGE_BITS);
-    void *ret = p->target_data;
+    IntervalTreeNode *n;
+    TargetPageDataNode *t;
+    target_ulong page, region;
 
-    if (!ret) {
-        ret = g_malloc0(TARGET_PAGE_DATA_SIZE);
-        p->target_data = ret;
+    page = address & TARGET_PAGE_MASK;
+    region = address & TBD_MASK;
+
+    n = interval_tree_iter_first(&targetdata_root, page, page);
+    if (!n) {
+        /*
+         * See util/interval-tree.c re lockless lookups: no false positives
+         * but there are false negatives.  If we find nothing, retry with
+         * the mmap lock acquired.  We also need the lock for the
+         * allocation + insert.
+         */
+        mmap_lock();
+        n = interval_tree_iter_first(&targetdata_root, page, page);
+        if (!n) {
+            t = g_new0(TargetPageDataNode, 1);
+            n = &t->itree;
+            n->start = region;
+            n->last = region | ~TBD_MASK;
+            interval_tree_insert(n, &targetdata_root);
+        }
+        mmap_unlock();
     }
-    return ret;
+
+    t = container_of(n, TargetPageDataNode, itree);
+    return t->data[(page - region) >> TARGET_PAGE_BITS];
 }
-#endif
+#else
+void page_reset_target_data(target_ulong start, target_ulong end) { }
+#endif /* TARGET_PAGE_DATA_SIZE */
 
 /* The softmmu versions of these helpers are in cputlb.c.  */
 
-- 
2.34.1

From nobody Thu Oct 31 23:50:58 2024
From: Richard Henderson
To: qemu-devel@nongnu.org
Cc: alex.bennee@linaro.org, laurent@vivier.eu
Subject: [PATCH v2 4/7] accel/tcg: Move page_{get,set}_flags to user-exec.c
Date: Thu, 27 Oct 2022 22:12:55 +1100
Message-Id: <20221027111258.348196-5-richard.henderson@linaro.org>
In-Reply-To: <20221027111258.348196-1-richard.henderson@linaro.org>
References: <20221027111258.348196-1-richard.henderson@linaro.org>

This page tracking implementation is specific to user-only,
since the system softmmu version is in cputlb.c.  Move it
out of translate-all.c to user-exec.c.
Reviewed-by: Alex Bennée
Signed-off-by: Richard Henderson
---
 accel/tcg/internal.h      |  17 ++
 accel/tcg/translate-all.c | 350 --------------------------------------
 accel/tcg/user-exec.c     | 346 +++++++++++++++++++++++++++++++++++++
 3 files changed, 363 insertions(+), 350 deletions(-)

diff --git a/accel/tcg/internal.h b/accel/tcg/internal.h
index 8731dc52e2..250f0daac9 100644
--- a/accel/tcg/internal.h
+++ b/accel/tcg/internal.h
@@ -33,6 +33,23 @@ typedef struct PageDesc {
 #endif
 } PageDesc;
 
+/*
+ * In system mode we want L1_MAP to be based on ram offsets,
+ * while in user mode we want it to be based on virtual addresses.
+ *
+ * TODO: For user mode, see the caveat re host vs guest virtual
+ * address spaces near GUEST_ADDR_MAX.
+ */
+#if !defined(CONFIG_USER_ONLY)
+#if HOST_LONG_BITS < TARGET_PHYS_ADDR_SPACE_BITS
+# define L1_MAP_ADDR_SPACE_BITS HOST_LONG_BITS
+#else
+# define L1_MAP_ADDR_SPACE_BITS TARGET_PHYS_ADDR_SPACE_BITS
+#endif
+#else
+# define L1_MAP_ADDR_SPACE_BITS MIN(HOST_LONG_BITS, TARGET_ABI_BITS)
+#endif
+
 /* Size of the L2 (and L3, etc) page tables.  */
 #define V_L2_BITS 10
 #define V_L2_SIZE (1 << V_L2_BITS)
diff --git a/accel/tcg/translate-all.c b/accel/tcg/translate-all.c
index dc7973eb3b..0f8f8e5bef 100644
--- a/accel/tcg/translate-all.c
+++ b/accel/tcg/translate-all.c
@@ -109,23 +109,6 @@ struct page_collection {
     struct page_entry *max;
 };
 
-/*
- * In system mode we want L1_MAP to be based on ram offsets,
- * while in user mode we want it to be based on virtual addresses.
- *
- * TODO: For user mode, see the caveat re host vs guest virtual
- * address spaces near GUEST_ADDR_MAX.
- */
-#if !defined(CONFIG_USER_ONLY)
-#if HOST_LONG_BITS < TARGET_PHYS_ADDR_SPACE_BITS
-# define L1_MAP_ADDR_SPACE_BITS HOST_LONG_BITS
-#else
-# define L1_MAP_ADDR_SPACE_BITS TARGET_PHYS_ADDR_SPACE_BITS
-#endif
-#else
-# define L1_MAP_ADDR_SPACE_BITS MIN(HOST_LONG_BITS, TARGET_ABI_BITS)
-#endif
-
 /* Make sure all possible CPU event bits fit in tb->trace_vcpu_dstate */
 QEMU_BUILD_BUG_ON(CPU_TRACE_DSTATE_MAX_EVENTS >
                   sizeof_field(TranslationBlock, trace_vcpu_dstate)
@@ -1222,339 +1205,6 @@ void cpu_interrupt(CPUState *cpu, int mask)
     qatomic_set(&cpu_neg(cpu)->icount_decr.u16.high, -1);
 }
 
-/*
- * Walks guest process memory "regions" one by one
- * and calls callback function 'fn' for each region.
- */
-struct walk_memory_regions_data {
-    walk_memory_regions_fn fn;
-    void *priv;
-    target_ulong start;
-    int prot;
-};
-
-static int walk_memory_regions_end(struct walk_memory_regions_data *data,
-                                   target_ulong end, int new_prot)
-{
-    if (data->start != -1u) {
-        int rc = data->fn(data->priv, data->start, end, data->prot);
-        if (rc != 0) {
-            return rc;
-        }
-    }
-
-    data->start = (new_prot ? end : -1u);
-    data->prot = new_prot;
-
-    return 0;
-}
-
-static int walk_memory_regions_1(struct walk_memory_regions_data *data,
-                                 target_ulong base, int level, void **lp)
-{
-    target_ulong pa;
-    int i, rc;
-
-    if (*lp == NULL) {
-        return walk_memory_regions_end(data, base, 0);
-    }
-
-    if (level == 0) {
-        PageDesc *pd = *lp;
-
-        for (i = 0; i < V_L2_SIZE; ++i) {
-            int prot = pd[i].flags;
-
-            pa = base | (i << TARGET_PAGE_BITS);
-            if (prot != data->prot) {
-                rc = walk_memory_regions_end(data, pa, prot);
-                if (rc != 0) {
-                    return rc;
-                }
-            }
-        }
-    } else {
-        void **pp = *lp;
-
-        for (i = 0; i < V_L2_SIZE; ++i) {
-            pa = base | ((target_ulong)i <<
-                         (TARGET_PAGE_BITS + V_L2_BITS * level));
-            rc = walk_memory_regions_1(data, pa, level - 1, pp + i);
-            if (rc != 0) {
-                return rc;
-            }
-        }
-    }
-
-    return 0;
-}
-
-int walk_memory_regions(void *priv, walk_memory_regions_fn fn)
-{
-    struct walk_memory_regions_data data;
-    uintptr_t i, l1_sz = v_l1_size;
-
-    data.fn = fn;
-    data.priv = priv;
-    data.start = -1u;
-    data.prot = 0;
-
-    for (i = 0; i < l1_sz; i++) {
-        target_ulong base = i << (v_l1_shift + TARGET_PAGE_BITS);
-        int rc = walk_memory_regions_1(&data, base, v_l2_levels, l1_map + i);
-        if (rc != 0) {
-            return rc;
-        }
-    }
-
-    return walk_memory_regions_end(&data, 0, 0);
-}
-
-static int dump_region(void *priv, target_ulong start,
-                       target_ulong end, unsigned long prot)
-{
-    FILE *f = (FILE *)priv;
-
-    (void) fprintf(f, TARGET_FMT_lx"-"TARGET_FMT_lx
-                   " "TARGET_FMT_lx" %c%c%c\n",
-                   start, end, end - start,
-                   ((prot & PAGE_READ) ? 'r' : '-'),
-                   ((prot & PAGE_WRITE) ? 'w' : '-'),
-                   ((prot & PAGE_EXEC) ? 'x' : '-'));
-
-    return 0;
-}
-
-/* dump memory mappings */
-void page_dump(FILE *f)
-{
-    const int length = sizeof(target_ulong) * 2;
-    (void) fprintf(f, "%-*s %-*s %-*s %s\n",
-                   length, "start", length, "end", length, "size", "prot");
-    walk_memory_regions(f, dump_region);
-}
-
-int page_get_flags(target_ulong address)
-{
-    PageDesc *p;
-
-    p = page_find(address >> TARGET_PAGE_BITS);
-    if (!p) {
-        return 0;
-    }
-    return p->flags;
-}
-
-/*
- * Allow the target to decide if PAGE_TARGET_[12] may be reset.
- * By default, they are not kept.
- */
-#ifndef PAGE_TARGET_STICKY
-#define PAGE_TARGET_STICKY  0
-#endif
-#define PAGE_STICKY  (PAGE_ANON | PAGE_PASSTHROUGH | PAGE_TARGET_STICKY)
-
-/* Modify the flags of a page and invalidate the code if necessary.
-   The flag PAGE_WRITE_ORG is positioned automatically depending
-   on PAGE_WRITE.  The mmap_lock should already be held. */
-void page_set_flags(target_ulong start, target_ulong end, int flags)
-{
-    target_ulong addr, len;
-    bool reset, inval_tb = false;
-
-    /* This function should never be called with addresses outside the
-       guest address space.  If this assert fires, it probably indicates
-       a missing call to h2g_valid. */
-    assert(end - 1 <= GUEST_ADDR_MAX);
-    assert(start < end);
-    /* Only set PAGE_ANON with new mappings. */
-    assert(!(flags & PAGE_ANON) || (flags & PAGE_RESET));
-    assert_memory_lock();
-
-    start = start & TARGET_PAGE_MASK;
-    end = TARGET_PAGE_ALIGN(end);
-
-    if (flags & PAGE_WRITE) {
-        flags |= PAGE_WRITE_ORG;
-    }
-    reset = !(flags & PAGE_VALID) || (flags & PAGE_RESET);
-    if (reset) {
-        page_reset_target_data(start, end);
-    }
-    flags &= ~PAGE_RESET;
-
-    for (addr = start, len = end - start;
-         len != 0;
-         len -= TARGET_PAGE_SIZE, addr += TARGET_PAGE_SIZE) {
-        PageDesc *p = page_find_alloc(addr >> TARGET_PAGE_BITS, true);
-
-        /*
-         * If the page was executable, but is reset, or is no longer
-         * executable, or has become writable, then invalidate any code.
-         */
-        if ((p->flags & PAGE_EXEC)
-            && (reset ||
-                !(flags & PAGE_EXEC) ||
-                (flags & ~p->flags & PAGE_WRITE))) {
-            inval_tb = true;
-        }
-        /* Using mprotect on a page does not change sticky bits. */
-        p->flags = (reset ? 0 : p->flags & PAGE_STICKY) | flags;
-    }
-
-    if (inval_tb) {
-        tb_invalidate_phys_range(start, end);
-    }
-}
-
-int page_check_range(target_ulong start, target_ulong len, int flags)
-{
-    PageDesc *p;
-    target_ulong end;
-    target_ulong addr;
-
-    /* This function should never be called with addresses outside the
-       guest address space.  If this assert fires, it probably indicates
-       a missing call to h2g_valid. */
-    if (TARGET_ABI_BITS > L1_MAP_ADDR_SPACE_BITS) {
-        assert(start < ((target_ulong)1 << L1_MAP_ADDR_SPACE_BITS));
-    }
-
-    if (len == 0) {
-        return 0;
-    }
-    if (start + len - 1 < start) {
-        /* We've wrapped around. */
-        return -1;
-    }
-
-    /* must do before we loose bits in the next step */
-    end = TARGET_PAGE_ALIGN(start + len);
-    start = start & TARGET_PAGE_MASK;
-
-    for (addr = start, len = end - start;
-         len != 0;
-         len -= TARGET_PAGE_SIZE, addr += TARGET_PAGE_SIZE) {
-        p = page_find(addr >> TARGET_PAGE_BITS);
-        if (!p) {
-            return -1;
-        }
-        if (!(p->flags & PAGE_VALID)) {
-            return -1;
-        }
-
-        if ((flags & PAGE_READ) && !(p->flags & PAGE_READ)) {
-            return -1;
-        }
-        if (flags & PAGE_WRITE) {
-            if (!(p->flags & PAGE_WRITE_ORG)) {
-                return -1;
-            }
-            /* unprotect the page if it was put read-only because it
-               contains translated code */
-            if (!(p->flags & PAGE_WRITE)) {
-                if (!page_unprotect(addr, 0)) {
-                    return -1;
-                }
-            }
-        }
-    }
-    return 0;
-}
-
-void page_protect(tb_page_addr_t page_addr)
-{
-    target_ulong addr;
-    PageDesc *p;
-    int prot;
-
-    p = page_find(page_addr >> TARGET_PAGE_BITS);
-    if (p && (p->flags & PAGE_WRITE)) {
-        /*
-         * Force the host page as non writable (writes will have a page fault +
-         * mprotect overhead).
-         */
-        page_addr &= qemu_host_page_mask;
-        prot = 0;
-        for (addr = page_addr; addr < page_addr + qemu_host_page_size;
-             addr += TARGET_PAGE_SIZE) {
-
-            p = page_find(addr >> TARGET_PAGE_BITS);
-            if (!p) {
-                continue;
-            }
-            prot |= p->flags;
-            p->flags &= ~PAGE_WRITE;
-        }
-        mprotect(g2h_untagged(page_addr), qemu_host_page_size,
-                 (prot & PAGE_BITS) & ~PAGE_WRITE);
-    }
-}
-
-/* called from signal handler: invalidate the code and unprotect the
- * page. Return 0 if the fault was not handled, 1 if it was handled,
- * and 2 if it was handled but the caller must cause the TB to be
- * immediately exited. (We can only return 2 if the 'pc' argument is
- * non-zero.)
- */
-int page_unprotect(target_ulong address, uintptr_t pc)
-{
-    unsigned int prot;
-    bool current_tb_invalidated;
-    PageDesc *p;
-    target_ulong host_start, host_end, addr;
-
-    /* Technically this isn't safe inside a signal handler.  However we
-       know this only ever happens in a synchronous SEGV handler, so in
-       practice it seems to be ok. */
-    mmap_lock();
-
-    p = page_find(address >> TARGET_PAGE_BITS);
-    if (!p) {
-        mmap_unlock();
-        return 0;
-    }
-
-    /* if the page was really writable, then we change its
-       protection back to writable */
-    if (p->flags & PAGE_WRITE_ORG) {
-        current_tb_invalidated = false;
-        if (p->flags & PAGE_WRITE) {
-            /* If the page is actually marked WRITE then assume this is because
-             * this thread raced with another one which got here first and
-             * set the page to PAGE_WRITE and did the TB invalidate for us.
-             */
-#ifdef TARGET_HAS_PRECISE_SMC
-            TranslationBlock *current_tb = tcg_tb_lookup(pc);
-            if (current_tb) {
-                current_tb_invalidated = tb_cflags(current_tb) & CF_INVALID;
-            }
-#endif
-        } else {
-            host_start = address & qemu_host_page_mask;
-            host_end = host_start + qemu_host_page_size;
-
-            prot = 0;
-            for (addr = host_start; addr < host_end; addr += TARGET_PAGE_SIZE) {
-                p = page_find(addr >> TARGET_PAGE_BITS);
-                p->flags |= PAGE_WRITE;
-                prot |= p->flags;
-
-                /* and since the content will be modified, we must invalidate
-                   the corresponding translated code. */
-                current_tb_invalidated |=
-                    tb_invalidate_phys_page_unwind(addr, pc);
-            }
-            mprotect((void *)g2h_untagged(host_start), qemu_host_page_size,
-                     prot & PAGE_BITS);
-        }
-        mmap_unlock();
-        /* If current TB was invalidated return to main loop */
-        return current_tb_invalidated ? 2 : 1;
-    }
-    mmap_unlock();
-    return 0;
-}
 #endif /* CONFIG_USER_ONLY */
 
 /*
diff --git a/accel/tcg/user-exec.c b/accel/tcg/user-exec.c
index 42a04bdb21..22ef780900 100644
--- a/accel/tcg/user-exec.c
+++ b/accel/tcg/user-exec.c
@@ -135,6 +135,352 @@ bool handle_sigsegv_accerr_write(CPUState *cpu, sigset_t *old_set,
     }
 }
 
+/*
+ * Walks guest process memory "regions" one by one
+ * and calls callback function 'fn' for each region.
+ */
+struct walk_memory_regions_data {
+    walk_memory_regions_fn fn;
+    void *priv;
+    target_ulong start;
+    int prot;
+};
+
+static int walk_memory_regions_end(struct walk_memory_regions_data *data,
+                                   target_ulong end, int new_prot)
+{
+    if (data->start != -1u) {
+        int rc = data->fn(data->priv, data->start, end, data->prot);
+        if (rc != 0) {
+            return rc;
+        }
+    }
+
+    data->start = (new_prot ? end : -1u);
+    data->prot = new_prot;
+
+    return 0;
+}
+
+static int walk_memory_regions_1(struct walk_memory_regions_data *data,
+                                 target_ulong base, int level, void **lp)
+{
+    target_ulong pa;
+    int i, rc;
+
+    if (*lp == NULL) {
+        return walk_memory_regions_end(data, base, 0);
+    }
+
+    if (level == 0) {
+        PageDesc *pd = *lp;
+
+        for (i = 0; i < V_L2_SIZE; ++i) {
+            int prot = pd[i].flags;
+
+            pa = base | (i << TARGET_PAGE_BITS);
+            if (prot != data->prot) {
+                rc = walk_memory_regions_end(data, pa, prot);
+                if (rc != 0) {
+                    return rc;
+                }
+            }
+        }
+    } else {
+        void **pp = *lp;
+
+        for (i = 0; i < V_L2_SIZE; ++i) {
+            pa = base | ((target_ulong)i <<
+                         (TARGET_PAGE_BITS + V_L2_BITS * level));
+            rc = walk_memory_regions_1(data, pa, level - 1, pp + i);
+            if (rc != 0) {
+                return rc;
+            }
+        }
+    }
+
+    return 0;
+}
+
+int walk_memory_regions(void *priv, walk_memory_regions_fn fn)
+{
+    struct walk_memory_regions_data data;
+    uintptr_t i, l1_sz = v_l1_size;
+
+    data.fn = fn;
+    data.priv = priv;
+    data.start = -1u;
+    data.prot = 0;
+
+    for (i = 0; i < l1_sz; i++) {
+        target_ulong base = i << (v_l1_shift + TARGET_PAGE_BITS);
+        int rc = walk_memory_regions_1(&data, base, v_l2_levels, l1_map + i);
+        if (rc != 0) {
+            return rc;
+        }
+    }
+
+    return walk_memory_regions_end(&data, 0, 0);
+}
+
+static int dump_region(void *priv, target_ulong start,
+                       target_ulong end, unsigned long prot)
+{
+    FILE *f = (FILE *)priv;
+
+    (void) fprintf(f, TARGET_FMT_lx"-"TARGET_FMT_lx
+                   " "TARGET_FMT_lx" %c%c%c\n",
+                   start, end, end - start,
+                   ((prot & PAGE_READ) ? 'r' : '-'),
+                   ((prot & PAGE_WRITE) ? 'w' : '-'),
+                   ((prot & PAGE_EXEC) ? 'x' : '-'));
+
+    return 0;
+}
+
+/* dump memory mappings */
+void page_dump(FILE *f)
+{
+    const int length = sizeof(target_ulong) * 2;
+    (void) fprintf(f, "%-*s %-*s %-*s %s\n",
+                   length, "start", length, "end", length, "size", "prot");
+    walk_memory_regions(f, dump_region);
+}
+
+int page_get_flags(target_ulong address)
+{
+    PageDesc *p;
+
+    p = page_find(address >> TARGET_PAGE_BITS);
+    if (!p) {
+        return 0;
+    }
+    return p->flags;
+}
+
+/*
+ * Allow the target to decide if PAGE_TARGET_[12] may be reset.
+ * By default, they are not kept.
+ */
+#ifndef PAGE_TARGET_STICKY
+#define PAGE_TARGET_STICKY  0
+#endif
+#define PAGE_STICKY  (PAGE_ANON | PAGE_PASSTHROUGH | PAGE_TARGET_STICKY)
+
+/*
+ * Modify the flags of a page and invalidate the code if necessary.
+ * The flag PAGE_WRITE_ORG is positioned automatically depending
+ * on PAGE_WRITE.  The mmap_lock should already be held.
+ */
+void page_set_flags(target_ulong start, target_ulong end, int flags)
+{
+    target_ulong addr, len;
+    bool reset, inval_tb = false;
+
+    /* This function should never be called with addresses outside the
+       guest address space.  If this assert fires, it probably indicates
+       a missing call to h2g_valid. */
+    assert(end - 1 <= GUEST_ADDR_MAX);
+    assert(start < end);
+    /* Only set PAGE_ANON with new mappings. */
+    assert(!(flags & PAGE_ANON) || (flags & PAGE_RESET));
+    assert_memory_lock();
+
+    start = start & TARGET_PAGE_MASK;
+    end = TARGET_PAGE_ALIGN(end);
+
+    if (flags & PAGE_WRITE) {
+        flags |= PAGE_WRITE_ORG;
+    }
+    reset = !(flags & PAGE_VALID) || (flags & PAGE_RESET);
+    if (reset) {
+        page_reset_target_data(start, end);
+    }
+    flags &= ~PAGE_RESET;
+
+    for (addr = start, len = end - start;
+         len != 0;
+         len -= TARGET_PAGE_SIZE, addr += TARGET_PAGE_SIZE) {
+        PageDesc *p = page_find_alloc(addr >> TARGET_PAGE_BITS, true);
+
+        /*
+         * If the page was executable, but is reset, or is no longer
+         * executable, or has become writable, then invalidate any code.
+         */
+        if ((p->flags & PAGE_EXEC)
+            && (reset ||
+                !(flags & PAGE_EXEC) ||
+                (flags & ~p->flags & PAGE_WRITE))) {
+            inval_tb = true;
+        }
+        /* Using mprotect on a page does not change sticky bits. */
+        p->flags = (reset ? 0 : p->flags & PAGE_STICKY) | flags;
+    }
+
+    if (inval_tb) {
+        tb_invalidate_phys_range(start, end);
+    }
+}
+
+int page_check_range(target_ulong start, target_ulong len, int flags)
+{
+    PageDesc *p;
+    target_ulong end;
+    target_ulong addr;
+
+    /*
+     * This function should never be called with addresses outside the
+     * guest address space.  If this assert fires, it probably indicates
+     * a missing call to h2g_valid.
+     */
+    if (TARGET_ABI_BITS > L1_MAP_ADDR_SPACE_BITS) {
+        assert(start < ((target_ulong)1 << L1_MAP_ADDR_SPACE_BITS));
+    }
+
+    if (len == 0) {
+        return 0;
+    }
+    if (start + len - 1 < start) {
+        /* We've wrapped around. */
+        return -1;
+    }
+
+    /* must do before we loose bits in the next step */
+    end = TARGET_PAGE_ALIGN(start + len);
+    start = start & TARGET_PAGE_MASK;
+
+    for (addr = start, len = end - start;
+         len != 0;
+         len -= TARGET_PAGE_SIZE, addr += TARGET_PAGE_SIZE) {
+        p = page_find(addr >> TARGET_PAGE_BITS);
+        if (!p) {
+            return -1;
+        }
+        if (!(p->flags & PAGE_VALID)) {
+            return -1;
+        }
+
+        if ((flags & PAGE_READ) && !(p->flags & PAGE_READ)) {
+            return -1;
+        }
+        if (flags & PAGE_WRITE) {
+            if (!(p->flags & PAGE_WRITE_ORG)) {
+                return -1;
+            }
+            /* unprotect the page if it was put read-only because it
+               contains translated code */
+            if (!(p->flags & PAGE_WRITE)) {
+                if (!page_unprotect(addr, 0)) {
+                    return -1;
+                }
+            }
+        }
+    }
+    return 0;
+}
+
+void page_protect(tb_page_addr_t page_addr)
+{
+    target_ulong addr;
+    PageDesc *p;
+    int prot;
+
+    p = page_find(page_addr >> TARGET_PAGE_BITS);
+    if (p && (p->flags & PAGE_WRITE)) {
+        /*
+         * Force the host page as non writable (writes will have a page fault +
+         * mprotect overhead).
+         */
+        page_addr &= qemu_host_page_mask;
+        prot = 0;
+        for (addr = page_addr; addr < page_addr + qemu_host_page_size;
+             addr += TARGET_PAGE_SIZE) {
+
+            p = page_find(addr >> TARGET_PAGE_BITS);
+            if (!p) {
+                continue;
+            }
+            prot |= p->flags;
+            p->flags &= ~PAGE_WRITE;
+        }
+        mprotect(g2h_untagged(page_addr), qemu_host_page_size,
+                 (prot & PAGE_BITS) & ~PAGE_WRITE);
+    }
+}
+
+/*
+ * Called from signal handler: invalidate the code and unprotect the
+ * page.  Return 0 if the fault was not handled, 1 if it was handled,
+ * and 2 if it was handled but the caller must cause the TB to be
+ * immediately exited.  (We can only return 2 if the 'pc' argument is
+ * non-zero.)
+ */
+int page_unprotect(target_ulong address, uintptr_t pc)
+{
+    unsigned int prot;
+    bool current_tb_invalidated;
+    PageDesc *p;
+    target_ulong host_start, host_end, addr;
+
+    /*
+     * Technically this isn't safe inside a signal handler.  However we
+     * know this only ever happens in a synchronous SEGV handler, so in
+     * practice it seems to be ok.
+     */
+    mmap_lock();
+
+    p = page_find(address >> TARGET_PAGE_BITS);
+    if (!p) {
+        mmap_unlock();
+        return 0;
+    }
+
+    /*
+     * If the page was really writable, then we change its
+     * protection back to writable.
+     */
+    if (p->flags & PAGE_WRITE_ORG) {
+        current_tb_invalidated = false;
+        if (p->flags & PAGE_WRITE) {
+            /*
+             * If the page is actually marked WRITE then assume this is because
+             * this thread raced with another one which got here first and
+             * set the page to PAGE_WRITE and did the TB invalidate for us.
+             */
+#ifdef TARGET_HAS_PRECISE_SMC
+            TranslationBlock *current_tb = tcg_tb_lookup(pc);
+            if (current_tb) {
+                current_tb_invalidated = tb_cflags(current_tb) & CF_INVALID;
+            }
+#endif
+        } else {
+            host_start = address & qemu_host_page_mask;
+            host_end = host_start + qemu_host_page_size;
+
+            prot = 0;
+            for (addr = host_start; addr < host_end; addr += TARGET_PAGE_SIZE) {
+                p = page_find(addr >> TARGET_PAGE_BITS);
+                p->flags |= PAGE_WRITE;
+                prot |= p->flags;
+
+                /*
+                 * Since the content will be modified, we must invalidate
+                 * the corresponding translated code.
+                 */
+                current_tb_invalidated |=
+                    tb_invalidate_phys_page_unwind(addr, pc);
+            }
+            mprotect((void *)g2h_untagged(host_start), qemu_host_page_size,
+                     prot & PAGE_BITS);
+        }
+        mmap_unlock();
+        /* If current TB was invalidated return to main loop */
+        return current_tb_invalidated ? 2 : 1;
+    }
+    mmap_unlock();
+    return 0;
+}
+
 static int probe_access_internal(CPUArchState *env, target_ulong addr,
                                  int fault_size, MMUAccessType access_type,
                                  bool nonfault, uintptr_t ra)
-- 
2.34.1
From: Richard Henderson
To: qemu-devel@nongnu.org
Cc: alex.bennee@linaro.org, laurent@vivier.eu
Subject: [PATCH v2 5/7] accel/tcg: Use interval tree for user-only page tracking
Date: Thu, 27 Oct 2022 22:12:56 +1100
Message-Id: <20221027111258.348196-6-richard.henderson@linaro.org>
X-Mailer: git-send-email 2.34.1
In-Reply-To: <20221027111258.348196-1-richard.henderson@linaro.org>
References: <20221027111258.348196-1-richard.henderson@linaro.org>
MIME-Version: 1.0
Content-Type: text/plain; charset="utf-8"

Finish weaning user-only away from PageDesc.

Using an interval tree to track page permissions means that
we can represent very large regions efficiently.
Resolves: https://gitlab.com/qemu-project/qemu/-/issues/290
Resolves: https://gitlab.com/qemu-project/qemu/-/issues/967
Resolves: https://gitlab.com/qemu-project/qemu/-/issues/1214
Signed-off-by: Richard Henderson
---
 accel/tcg/internal.h           |   4 +-
 accel/tcg/tb-maint.c           |  20 +-
 accel/tcg/user-exec.c          | 614 ++++++++++++++++++++++-----------
 tests/tcg/multiarch/test-vma.c |  22 ++
 4 files changed, 451 insertions(+), 209 deletions(-)
 create mode 100644 tests/tcg/multiarch/test-vma.c

diff --git a/accel/tcg/internal.h b/accel/tcg/internal.h
index 250f0daac9..c7e157d1cd 100644
--- a/accel/tcg/internal.h
+++ b/accel/tcg/internal.h
@@ -24,9 +24,7 @@
 #endif

 typedef struct PageDesc {
-#ifdef CONFIG_USER_ONLY
-    unsigned long flags;
-#else
+#ifndef CONFIG_USER_ONLY
     QemuSpin lock;
     /* list of TBs intersecting this ram page */
     uintptr_t first_tb;
diff --git a/accel/tcg/tb-maint.c b/accel/tcg/tb-maint.c
index 14e8e47a6a..694440cb4a 100644
--- a/accel/tcg/tb-maint.c
+++ b/accel/tcg/tb-maint.c
@@ -68,15 +68,23 @@ static void page_flush_tb(void)
 /* Call with mmap_lock held. */
 static void tb_page_add(TranslationBlock *tb, PageDesc *p1, PageDesc *p2)
 {
-    /* translator_loop() must have made all TB pages non-writable */
-    assert(!(p1->flags & PAGE_WRITE));
-    if (p2) {
-        assert(!(p2->flags & PAGE_WRITE));
-    }
+    target_ulong addr;
+    int flags;

     assert_memory_lock();
-    tb->itree.last = tb->itree.start + tb->size - 1;
+
+    /* translator_loop() must have made all TB pages non-writable */
+    addr = tb_page_addr0(tb);
+    flags = page_get_flags(addr);
+    assert(!(flags & PAGE_WRITE));
+
+    addr = tb_page_addr1(tb);
+    if (addr != -1) {
+        flags = page_get_flags(addr);
+        assert(!(flags & PAGE_WRITE));
+    }
+
     interval_tree_insert(&tb->itree, &tb_root);
 }

diff --git a/accel/tcg/user-exec.c b/accel/tcg/user-exec.c
index 22ef780900..d71404f49c 100644
--- a/accel/tcg/user-exec.c
+++ b/accel/tcg/user-exec.c
@@ -135,106 +135,61 @@ bool handle_sigsegv_accerr_write(CPUState *cpu, sigset_t *old_set,
     }
 }

-/*
- * Walks guest process memory "regions" one by one
- * and calls callback function 'fn' for each region.
- */
-struct walk_memory_regions_data {
-    walk_memory_regions_fn fn;
-    void *priv;
-    target_ulong start;
-    int prot;
-};
+typedef struct PageFlagsNode {
+    IntervalTreeNode itree;
+    int flags;
+} PageFlagsNode;

-static int walk_memory_regions_end(struct walk_memory_regions_data *data,
-                                   target_ulong end, int new_prot)
+static IntervalTreeRoot pageflags_root;
+
+static PageFlagsNode *pageflags_find(target_ulong start, target_long last)
 {
-    if (data->start != -1u) {
-        int rc = data->fn(data->priv, data->start, end, data->prot);
-        if (rc != 0) {
-            return rc;
-        }
-    }
+    IntervalTreeNode *n;

-    data->start = (new_prot ? end : -1u);
-    data->prot = new_prot;
-
-    return 0;
+    n = interval_tree_iter_first(&pageflags_root, start, last);
+    return n ? container_of(n, PageFlagsNode, itree) : NULL;
 }

-static int walk_memory_regions_1(struct walk_memory_regions_data *data,
-                                 target_ulong base, int level, void **lp)
+static PageFlagsNode *pageflags_next(PageFlagsNode *p, target_ulong start,
+                                     target_long last)
 {
-    target_ulong pa;
-    int i, rc;
+    IntervalTreeNode *n;

-    if (*lp == NULL) {
-        return walk_memory_regions_end(data, base, 0);
-    }
-
-    if (level == 0) {
-        PageDesc *pd = *lp;
-
-        for (i = 0; i < V_L2_SIZE; ++i) {
-            int prot = pd[i].flags;
-
-            pa = base | (i << TARGET_PAGE_BITS);
-            if (prot != data->prot) {
-                rc = walk_memory_regions_end(data, pa, prot);
-                if (rc != 0) {
-                    return rc;
-                }
-            }
-        }
-    } else {
-        void **pp = *lp;
-
-        for (i = 0; i < V_L2_SIZE; ++i) {
-            pa = base | ((target_ulong)i <<
-                         (TARGET_PAGE_BITS + V_L2_BITS * level));
-            rc = walk_memory_regions_1(data, pa, level - 1, pp + i);
-            if (rc != 0) {
-                return rc;
-            }
-        }
-    }
-
-    return 0;
+    n = interval_tree_iter_next(&p->itree, start, last);
+    return n ? container_of(n, PageFlagsNode, itree) : NULL;
 }

 int walk_memory_regions(void *priv, walk_memory_regions_fn fn)
 {
-    struct walk_memory_regions_data data;
-    uintptr_t i, l1_sz = v_l1_size;
+    IntervalTreeNode *n;
+    int rc = 0;

-    data.fn = fn;
-    data.priv = priv;
-    data.start = -1u;
-    data.prot = 0;
+    mmap_lock();
+    for (n = interval_tree_iter_first(&pageflags_root, 0, -1);
+         n != NULL;
+         n = interval_tree_iter_next(n, 0, -1)) {
+        PageFlagsNode *p = container_of(n, PageFlagsNode, itree);

-    for (i = 0; i < l1_sz; i++) {
-        target_ulong base = i << (v_l1_shift + TARGET_PAGE_BITS);
-        int rc = walk_memory_regions_1(&data, base, v_l2_levels, l1_map + i);
+        rc = fn(priv, n->start, n->last + 1, p->flags);
         if (rc != 0) {
-            return rc;
+            break;
         }
     }
+    mmap_unlock();

-    return walk_memory_regions_end(&data, 0, 0);
+    return rc;
 }

 static int dump_region(void *priv, target_ulong start,
-                       target_ulong end, unsigned long prot)
+                       target_ulong end, unsigned long prot)
 {
     FILE *f = (FILE *)priv;

-    (void) fprintf(f, TARGET_FMT_lx"-"TARGET_FMT_lx
-                   " "TARGET_FMT_lx" %c%c%c\n",
-                   start, end, end - start,
-                   ((prot & PAGE_READ) ? 'r' : '-'),
-                   ((prot & PAGE_WRITE) ? 'w' : '-'),
-                   ((prot & PAGE_EXEC) ? 'x' : '-'));
-
+    fprintf(f, TARGET_FMT_lx"-"TARGET_FMT_lx" "TARGET_FMT_lx" %c%c%c\n",
+            start, end, end - start,
+            ((prot & PAGE_READ) ? 'r' : '-'),
+            ((prot & PAGE_WRITE) ? 'w' : '-'),
+            ((prot & PAGE_EXEC) ? 'x' : '-'));
     return 0;
 }

@@ -242,22 +197,134 @@ static int dump_region(void *priv, target_ulong start,
 void page_dump(FILE *f)
 {
     const int length = sizeof(target_ulong) * 2;
-    (void) fprintf(f, "%-*s %-*s %-*s %s\n",
+
+    fprintf(f, "%-*s %-*s %-*s %s\n",
             length, "start", length, "end", length, "size", "prot");
     walk_memory_regions(f, dump_region);
 }

 int page_get_flags(target_ulong address)
 {
-    PageDesc *p;
+    PageFlagsNode *p = pageflags_find(address, address);

-    p = page_find(address >> TARGET_PAGE_BITS);
+    /*
+     * See util/interval-tree.c re lockless lookups: no false positives but
+     * there are false negatives.  If we find nothing, retry with the mmap
+     * lock acquired.
+     */
     if (!p) {
-        return 0;
+        if (have_mmap_lock()) {
+            return 0;
+        }
+        mmap_lock();
+        p = pageflags_find(address, address);
+        mmap_unlock();
+        if (!p) {
+            return 0;
+        }
     }
     return p->flags;
 }

+/* A subroutine of page_set_flags: insert a new node for [start,last]. */
+static void pageflags_create(target_ulong start, target_ulong last, int flags)
+{
+    PageFlagsNode *p = g_new(PageFlagsNode, 1);
+
+    p->itree.start = start;
+    p->itree.last = last;
+    p->flags = flags;
+    interval_tree_insert(&p->itree, &pageflags_root);
+}
+
+/* A subroutine of page_set_flags: remove everything in [start,last]. */
+static bool pageflags_unset(target_ulong start, target_ulong last)
+{
+    bool inval_tb = false;
+
+    while (true) {
+        PageFlagsNode *p = pageflags_find(start, last);
+        target_ulong p_last;
+
+        if (!p) {
+            break;
+        }
+
+        if (p->flags & PAGE_EXEC) {
+            inval_tb = true;
+        }
+
+        interval_tree_remove(&p->itree, &pageflags_root);
+        p_last = p->itree.last;
+
+        if (p->itree.start < start) {
+            /* Truncate the node from the end, or split out the middle. */
+            p->itree.last = start - 1;
+            interval_tree_insert(&p->itree, &pageflags_root);
+            if (last < p_last) {
+                pageflags_create(last + 1, p_last, p->flags);
+                break;
+            }
+        } else if (p_last <= last) {
+            /* Range completely covers node -- remove it. */
+            g_free(p);
+        } else {
+            /* Truncate the node from the start. */
+            p->itree.start = last + 1;
+            interval_tree_insert(&p->itree, &pageflags_root);
+            break;
+        }
+    }
+
+    return inval_tb;
+}
+
+/*
+ * A subroutine of page_set_flags: nothing overlaps [start,last],
+ * but check adjacent mappings and maybe merge into a single range.
+ */
+static void pageflags_create_merge(target_ulong start, target_ulong last,
+                                   int flags)
+{
+    PageFlagsNode *next = NULL, *prev = NULL;
+
+    if (start > 0) {
+        prev = pageflags_find(start - 1, start - 1);
+        if (prev) {
+            if (prev->flags == flags) {
+                interval_tree_remove(&prev->itree, &pageflags_root);
+            } else {
+                prev = NULL;
+            }
+        }
+    }
+    if (last + 1 != 0) {
+        next = pageflags_find(last + 1, last + 1);
+        if (next) {
+            if (next->flags == flags) {
+                interval_tree_remove(&next->itree, &pageflags_root);
+            } else {
+                next = NULL;
+            }
+        }
+    }
+
+    if (prev) {
+        if (next) {
+            prev->itree.last = next->itree.last;
+            g_free(next);
+        } else {
+            prev->itree.last = last;
+        }
+        interval_tree_insert(&prev->itree, &pageflags_root);
+    } else if (next) {
+        next->itree.start = start;
+        interval_tree_insert(&next->itree, &pageflags_root);
+    } else {
+        pageflags_create(start, last, flags);
+    }
+}
+
 /*
  * Allow the target to decide if PAGE_TARGET_[12] may be reset.
  * By default, they are not kept.
@@ -267,6 +334,146 @@ int page_get_flags(target_ulong address)
 #endif
 #define PAGE_STICKY  (PAGE_ANON | PAGE_PASSTHROUGH | PAGE_TARGET_STICKY)

+/* A subroutine of page_set_flags: add flags to [start,last]. */
+static bool pageflags_set_clear(target_ulong start, target_ulong last,
+                                int set_flags, int clear_flags)
+{
+    PageFlagsNode *p;
+    target_ulong p_start, p_last;
+    int p_flags, merge_flags;
+    bool inval_tb = false;
+
+ restart:
+    p = pageflags_find(start, last);
+    if (!p) {
+        if (set_flags) {
+            pageflags_create_merge(start, last, set_flags);
+        }
+        goto done;
+    }
+
+    p_start = p->itree.start;
+    p_last = p->itree.last;
+    p_flags = p->flags;
+    /* Using mprotect on a page does not change sticky bits. */
+    merge_flags = (p_flags & ~clear_flags) | set_flags;
+
+    /*
+     * Need to flush if an overlapping executable region
+     * removes exec, or adds write.
+     */
+    if ((p_flags & PAGE_EXEC)
+        && (!(merge_flags & PAGE_EXEC)
+            || (merge_flags & ~p_flags & PAGE_WRITE))) {
+        inval_tb = true;
+    }
+
+    /*
+     * If there is an exact range match, update and return without
+     * attempting to merge with adjacent regions.
+     */
+    if (start == p_start && last == p_last) {
+        if (merge_flags) {
+            p->flags = merge_flags;
+        } else {
+            interval_tree_remove(&p->itree, &pageflags_root);
+            g_free(p);
+        }
+        goto done;
+    }
+
+    /*
+     * If sticky bits affect the original mapping, then we must be more
+     * careful about the existing intervals and the separate flags.
+     */
+    if (set_flags != merge_flags) {
+        if (p_start < start) {
+            interval_tree_remove(&p->itree, &pageflags_root);
+            p->itree.last = start - 1;
+            interval_tree_insert(&p->itree, &pageflags_root);
+
+            if (last < p_last) {
+                if (merge_flags) {
+                    pageflags_create(start, last, merge_flags);
+                }
+                pageflags_create(last + 1, p_last, p_flags);
+            } else {
+                if (merge_flags) {
+                    pageflags_create(start, p_last, merge_flags);
+                }
+                if (p_last < last) {
+                    start = p_last + 1;
+                    goto restart;
+                }
+            }
+        } else {
+            if (start < p_start && set_flags) {
+                pageflags_create(start, p_start - 1, set_flags);
+            }
+            if (last < p_last) {
+                interval_tree_remove(&p->itree, &pageflags_root);
+                p->itree.start = last + 1;
+                interval_tree_insert(&p->itree, &pageflags_root);
+                if (merge_flags) {
+                    pageflags_create(start, last, merge_flags);
+                }
+            } else {
+                if (merge_flags) {
+                    p->flags = merge_flags;
+                } else {
+                    interval_tree_remove(&p->itree, &pageflags_root);
+                    g_free(p);
+                }
+                if (p_last < last) {
+                    start = p_last + 1;
+                    goto restart;
+                }
+            }
+        }
+        goto done;
+    }
+
+    /* If flags are not changing for this range, incorporate it. */
+    if (set_flags == p_flags) {
+        if (start < p_start) {
+            interval_tree_remove(&p->itree, &pageflags_root);
+            p->itree.start = start;
+            interval_tree_insert(&p->itree, &pageflags_root);
+        }
+        if (p_last < last) {
+            start = p_last + 1;
+            goto restart;
+        }
+        goto done;
+    }
+
+    /* Maybe split out head and/or tail ranges with the original flags. */
+    interval_tree_remove(&p->itree, &pageflags_root);
+    if (p_start < start) {
+        p->itree.last = start - 1;
+        interval_tree_insert(&p->itree, &pageflags_root);
+
+        if (p_last < last) {
+            goto restart;
+        }
+        if (last < p_last) {
+            pageflags_create(last + 1, p_last, p_flags);
+        }
+    } else if (last < p_last) {
+        p->itree.start = last + 1;
+        interval_tree_insert(&p->itree, &pageflags_root);
+    } else {
+        g_free(p);
+        goto restart;
+    }
+    if (set_flags) {
+        pageflags_create(start, last, set_flags);
+    }
+
+ done:
+    return inval_tb;
+}
+
 /*
  * Modify the flags of a page and invalidate the code if necessary.
  * The flag PAGE_WRITE_ORG is positioned automatically depending
@@ -274,49 +481,41 @@ int page_get_flags(target_ulong address)
  */
 void page_set_flags(target_ulong start, target_ulong end, int flags)
 {
-    target_ulong addr, len;
-    bool reset, inval_tb = false;
+    target_ulong last;
+    bool reset = false;
+    bool inval_tb = false;

     /* This function should never be called with addresses outside the
        guest address space.  If this assert fires, it probably indicates
        a missing call to h2g_valid.  */
-    assert(end - 1 <= GUEST_ADDR_MAX);
     assert(start < end);
+    assert(end - 1 <= GUEST_ADDR_MAX);
     /* Only set PAGE_ANON with new mappings. */
     assert(!(flags & PAGE_ANON) || (flags & PAGE_RESET));
     assert_memory_lock();

     start = start & TARGET_PAGE_MASK;
     end = TARGET_PAGE_ALIGN(end);
+    last = end - 1;

-    if (flags & PAGE_WRITE) {
-        flags |= PAGE_WRITE_ORG;
-    }
-    reset = !(flags & PAGE_VALID) || (flags & PAGE_RESET);
-    if (reset) {
-        page_reset_target_data(start, end);
-    }
-    flags &= ~PAGE_RESET;
-
-    for (addr = start, len = end - start;
-         len != 0;
-         len -= TARGET_PAGE_SIZE, addr += TARGET_PAGE_SIZE) {
-        PageDesc *p = page_find_alloc(addr >> TARGET_PAGE_BITS, true);
-
-        /*
-         * If the page was executable, but is reset, or is no longer
-         * executable, or has become writable, then invalidate any code.
-         */
-        if ((p->flags & PAGE_EXEC)
-            && (reset ||
-                !(flags & PAGE_EXEC) ||
-                (flags & ~p->flags & PAGE_WRITE))) {
-            inval_tb = true;
+    if (!(flags & PAGE_VALID)) {
+        flags = 0;
+    } else {
+        reset = flags & PAGE_RESET;
+        flags &= ~PAGE_RESET;
+        if (flags & PAGE_WRITE) {
+            flags |= PAGE_WRITE_ORG;
         }
-        /* Using mprotect on a page does not change sticky bits. */
-        p->flags = (reset ? 0 : p->flags & PAGE_STICKY) | flags;
     }

+    if (!flags || reset) {
+        page_reset_target_data(start, end);
+        inval_tb |= pageflags_unset(start, last);
+    }
+    if (flags) {
+        inval_tb |= pageflags_set_clear(start, last, flags,
+                                        ~(reset ? 0 : PAGE_STICKY));
+    }
     if (inval_tb) {
         tb_invalidate_phys_range(start, end);
     }
@@ -324,87 +523,89 @@ void page_set_flags(target_ulong start, target_ulong end, int flags)

 int page_check_range(target_ulong start, target_ulong len, int flags)
 {
-    PageDesc *p;
-    target_ulong end;
-    target_ulong addr;
-
-    /*
-     * This function should never be called with addresses outside the
-     * guest address space.  If this assert fires, it probably indicates
-     * a missing call to h2g_valid.
-     */
-    if (TARGET_ABI_BITS > L1_MAP_ADDR_SPACE_BITS) {
-        assert(start < ((target_ulong)1 << L1_MAP_ADDR_SPACE_BITS));
-    }
+    target_ulong last;

     if (len == 0) {
-        return 0;
-    }
-    if (start + len - 1 < start) {
-        /* We've wrapped around. */
-        return -1;
+        return 0;  /* trivial length */
     }

-    /* must do before we loose bits in the next step */
-    end = TARGET_PAGE_ALIGN(start + len);
-    start = start & TARGET_PAGE_MASK;
+    last = start + len - 1;
+    if (last < start) {
+        return -1; /* wrap around */
+    }
+
+    while (true) {
+        PageFlagsNode *p = pageflags_find(start, last);
+        int missing;

-    for (addr = start, len = end - start;
-         len != 0;
-         len -= TARGET_PAGE_SIZE, addr += TARGET_PAGE_SIZE) {
-        p = page_find(addr >> TARGET_PAGE_BITS);
         if (!p) {
-            return -1;
+            return -1; /* entire region invalid */
         }
-        if (!(p->flags & PAGE_VALID)) {
-            return -1;
+        if (start < p->itree.start) {
+            return -1; /* initial bytes invalid */
         }

-        if ((flags & PAGE_READ) && !(p->flags & PAGE_READ)) {
-            return -1;
+        missing = flags & ~p->flags;
+        if (missing & PAGE_READ) {
+            return -1; /* page not readable */
         }
-        if (flags & PAGE_WRITE) {
+        if (missing & PAGE_WRITE) {
             if (!(p->flags & PAGE_WRITE_ORG)) {
+                return -1; /* page not writable */
+            }
+            /* Asking about writable, but has been protected: undo. */
+            if (!page_unprotect(start, 0)) {
                 return -1;
             }
-            /* unprotect the page if it was put read-only because it
-               contains translated code */
-            if (!(p->flags & PAGE_WRITE)) {
-                if (!page_unprotect(addr, 0)) {
-                    return -1;
-                }
+            /* TODO: page_unprotect should take a range, not a single page. */
+            if (last - start < TARGET_PAGE_SIZE) {
+                return 0; /* ok */
             }
+            start += TARGET_PAGE_SIZE;
+            continue;
         }
+
+        if (last <= p->itree.last) {
+            return 0; /* ok */
+        }
+        start = p->itree.last + 1;
     }
-    return 0;
 }

-void page_protect(tb_page_addr_t page_addr)
+void page_protect(tb_page_addr_t address)
 {
-    target_ulong addr;
-    PageDesc *p;
+    PageFlagsNode *p;
+    target_ulong start, last;
     int prot;

-    p = page_find(page_addr >> TARGET_PAGE_BITS);
-    if (p && (p->flags & PAGE_WRITE)) {
-        /*
-         * Force the host page as non writable (writes will have a page fault +
-         * mprotect overhead).
-         */
-        page_addr &= qemu_host_page_mask;
-        prot = 0;
-        for (addr = page_addr; addr < page_addr + qemu_host_page_size;
-             addr += TARGET_PAGE_SIZE) {
+    assert_memory_lock();

-            p = page_find(addr >> TARGET_PAGE_BITS);
-            if (!p) {
-                continue;
-            }
+    if (qemu_host_page_size <= TARGET_PAGE_SIZE) {
+        start = address & TARGET_PAGE_MASK;
+        last = start + TARGET_PAGE_SIZE - 1;
+    } else {
+        start = address & qemu_host_page_mask;
+        last = start + qemu_host_page_size - 1;
+    }
+
+    p = pageflags_find(start, last);
+    if (!p) {
+        return;
+    }
+    prot = p->flags;
+
+    if (unlikely(p->itree.last < last)) {
+        /* More than one protection region covers the one host page. */
+        assert(TARGET_PAGE_SIZE < qemu_host_page_size);
+        while ((p = pageflags_next(p, start, last)) != NULL) {
             prot |= p->flags;
-            p->flags &= ~PAGE_WRITE;
         }
-        mprotect(g2h_untagged(page_addr), qemu_host_page_size,
-                 (prot & PAGE_BITS) & ~PAGE_WRITE);
+    }
+
+    if (prot & PAGE_WRITE) {
+        pageflags_set_clear(start, last, 0, PAGE_WRITE);
+        mprotect(g2h_untagged(start), qemu_host_page_size,
+                 prot & (PAGE_READ | PAGE_EXEC) ? PROT_READ : PROT_NONE);
     }
 }

@@ -417,10 +618,8 @@ void page_protect(tb_page_addr_t page_addr)
  */
 int page_unprotect(target_ulong address, uintptr_t pc)
 {
-    unsigned int prot;
+    PageFlagsNode *p;
     bool current_tb_invalidated;
-    PageDesc *p;
-    target_ulong host_start, host_end, addr;

     /*
      * Technically this isn't safe inside a signal handler.  However we
@@ -429,40 +628,54 @@ int page_unprotect(target_ulong address, uintptr_t pc)
      */
     mmap_lock();

-    p = page_find(address >> TARGET_PAGE_BITS);
-    if (!p) {
+    p = pageflags_find(address, address);
+
+    /* If this address was not really writable, nothing to do. */
+    if (!p || !(p->flags & PAGE_WRITE_ORG)) {
         mmap_unlock();
         return 0;
     }

-    /*
-     * If the page was really writable, then we change its
-     * protection back to writable.
-     */
-    if (p->flags & PAGE_WRITE_ORG) {
-        current_tb_invalidated = false;
-        if (p->flags & PAGE_WRITE) {
-            /*
-             * If the page is actually marked WRITE then assume this is because
-             * this thread raced with another one which got here first and
-             * set the page to PAGE_WRITE and did the TB invalidate for us.
-             */
+    current_tb_invalidated = false;
+    if (p->flags & PAGE_WRITE) {
+        /*
+         * If the page is actually marked WRITE then assume this is because
+         * this thread raced with another one which got here first and
+         * set the page to PAGE_WRITE and did the TB invalidate for us.
+         */
 #ifdef TARGET_HAS_PRECISE_SMC
-            TranslationBlock *current_tb = tcg_tb_lookup(pc);
-            if (current_tb) {
-                current_tb_invalidated = tb_cflags(current_tb) & CF_INVALID;
-            }
+        TranslationBlock *current_tb = tcg_tb_lookup(pc);
+        if (current_tb) {
+            current_tb_invalidated = tb_cflags(current_tb) & CF_INVALID;
+        }
 #endif
+    } else {
+        target_ulong start, len, i;
+        int prot;
+
+        if (qemu_host_page_size <= TARGET_PAGE_SIZE) {
+            start = address & TARGET_PAGE_MASK;
+            len = TARGET_PAGE_SIZE;
+            prot = p->flags | PAGE_WRITE;
+            pageflags_set_clear(start, start + len - 1, PAGE_WRITE, 0);
+            current_tb_invalidated = tb_invalidate_phys_page_unwind(start, pc);
         } else {
-            host_start = address & qemu_host_page_mask;
-            host_end = host_start + qemu_host_page_size;
-
+            start = address & qemu_host_page_mask;
+            len = qemu_host_page_size;
             prot = 0;
-            for (addr = host_start; addr < host_end; addr += TARGET_PAGE_SIZE) {
-                p = page_find(addr >> TARGET_PAGE_BITS);
-                p->flags |= PAGE_WRITE;
-                prot |= p->flags;

+            for (i = 0; i < len; i += TARGET_PAGE_SIZE) {
+                target_ulong addr = start + i;
+
+                p = pageflags_find(addr, addr);
+                if (p) {
+                    prot |= p->flags;
+                    if (p->flags & PAGE_WRITE_ORG) {
+                        prot |= PAGE_WRITE;
+                        pageflags_set_clear(addr, addr + TARGET_PAGE_SIZE - 1,
+                                            PAGE_WRITE, 0);
+                    }
+                }
                 /*
                  * Since the content will be modified, we must invalidate
                  * the corresponding translated code.
@@ -470,15 +683,16 @@ int page_unprotect(target_ulong address, uintptr_t pc)
                  */
                 current_tb_invalidated |=
                     tb_invalidate_phys_page_unwind(addr, pc);
             }
-            mprotect((void *)g2h_untagged(host_start), qemu_host_page_size,
-                     prot & PAGE_BITS);
         }
-        mmap_unlock();
-        /* If current TB was invalidated return to main loop */
-        return current_tb_invalidated ? 2 : 1;
+        if (prot & PAGE_EXEC) {
+            prot = (prot & ~PAGE_EXEC) | PAGE_READ;
+        }
+        mprotect((void *)g2h_untagged(start), len, prot & PAGE_BITS);
     }
     mmap_unlock();
-    return 0;
+
+    /* If current TB was invalidated return to main loop */
+    return current_tb_invalidated ? 2 : 1;
 }

 static int probe_access_internal(CPUArchState *env, target_ulong addr,
diff --git a/tests/tcg/multiarch/test-vma.c b/tests/tcg/multiarch/test-vma.c
new file mode 100644
index 0000000000..2893d60334
--- /dev/null
+++ b/tests/tcg/multiarch/test-vma.c
@@ -0,0 +1,22 @@
+/*
+ * Test very large vma allocations.
+ * The qemu out-of-memory condition was within the mmap syscall itself.
+ * If the syscall actually returns with MAP_FAILED, the test succeeded.
+ */
+#include
+
+int main()
+{
+    int n = sizeof(size_t) == 4 ? 32 : 45;
+
+    for (int i = 28; i < n; i++) {
+        size_t l = (size_t)1 << i;
+        void *p = mmap(0, l, PROT_NONE,
+                       MAP_PRIVATE | MAP_ANONYMOUS | MAP_NORESERVE, -1, 0);
+        if (p == MAP_FAILED) {
+            break;
+        }
+        munmap(p, l);
+    }
+    return 0;
+}
--
2.34.1

From nobody Thu Oct 31 23:50:58 2024
From: Richard Henderson <richard.henderson@linaro.org>
To: qemu-devel@nongnu.org
Cc: alex.bennee@linaro.org, laurent@vivier.eu
Subject: [PATCH v2 6/7] accel/tcg: Move PageDesc tree into tb-maint.c for system
Date: Thu, 27 Oct 2022 22:12:57 +1100
Message-Id: <20221027111258.348196-7-richard.henderson@linaro.org>
In-Reply-To: <20221027111258.348196-1-richard.henderson@linaro.org>
References: <20221027111258.348196-1-richard.henderson@linaro.org>
MIME-Version: 1.0
Content-Transfer-Encoding: quoted-printable
Content-Type: text/plain;
charset="utf-8" Now that PageDesc is not used for user-only, and for system it is only used for tb maintenance, move the implementation into tb-main.c appropriately ifdefed. We have not yet eliminated all references to PageDesc for user-only, so retain a typedef to the structure without definition. Signed-off-by: Richard Henderson --- accel/tcg/internal.h | 49 +++----------- accel/tcg/tb-maint.c | 130 ++++++++++++++++++++++++++++++++++++-- accel/tcg/translate-all.c | 95 ---------------------------- 3 files changed, 134 insertions(+), 140 deletions(-) diff --git a/accel/tcg/internal.h b/accel/tcg/internal.h index c7e157d1cd..c6c9e02cfd 100644 --- a/accel/tcg/internal.h +++ b/accel/tcg/internal.h @@ -23,51 +23,13 @@ #define assert_memory_lock() tcg_debug_assert(have_mmap_lock()) #endif =20 -typedef struct PageDesc { +typedef struct PageDesc PageDesc; #ifndef CONFIG_USER_ONLY +struct PageDesc { QemuSpin lock; /* list of TBs intersecting this ram page */ uintptr_t first_tb; -#endif -} PageDesc; - -/* - * In system mode we want L1_MAP to be based on ram offsets, - * while in user mode we want it to be based on virtual addresses. - * - * TODO: For user mode, see the caveat re host vs guest virtual - * address spaces near GUEST_ADDR_MAX. - */ -#if !defined(CONFIG_USER_ONLY) -#if HOST_LONG_BITS < TARGET_PHYS_ADDR_SPACE_BITS -# define L1_MAP_ADDR_SPACE_BITS HOST_LONG_BITS -#else -# define L1_MAP_ADDR_SPACE_BITS TARGET_PHYS_ADDR_SPACE_BITS -#endif -#else -# define L1_MAP_ADDR_SPACE_BITS MIN(HOST_LONG_BITS, TARGET_ABI_BITS) -#endif - -/* Size of the L2 (and L3, etc) page tables. */ -#define V_L2_BITS 10 -#define V_L2_SIZE (1 << V_L2_BITS) - -/* - * L1 Mapping properties - */ -extern int v_l1_size; -extern int v_l1_shift; -extern int v_l2_levels; - -/* - * The bottom level has pointers to PageDesc, and is indexed by - * anything from 4 to (V_L2_BITS + 3) bits, depending on target page size. 
- */ -#define V_L1_MIN_BITS 4 -#define V_L1_MAX_BITS (V_L2_BITS + 3) -#define V_L1_MAX_SIZE (1 << V_L1_MAX_BITS) - -extern void *l1_map[V_L1_MAX_SIZE]; +}; =20 PageDesc *page_find_alloc(tb_page_addr_t index, bool alloc); =20 @@ -76,6 +38,11 @@ static inline PageDesc *page_find(tb_page_addr_t index) return page_find_alloc(index, false); } =20 +void page_table_config_init(void); +#else +static inline void page_table_config_init(void) { } +#endif + /* list iterators for lists of tagged pointers in TranslationBlock */ #define TB_FOR_EACH_TAGGED(head, tb, n, field) \ for (n =3D (head) & 1, tb =3D (TranslationBlock *)((head) & ~1); = \ diff --git a/accel/tcg/tb-maint.c b/accel/tcg/tb-maint.c index 694440cb4a..31d0a74aa9 100644 --- a/accel/tcg/tb-maint.c +++ b/accel/tcg/tb-maint.c @@ -127,6 +127,121 @@ static PageForEachNext foreach_tb_next(PageForEachNex= t tb, } =20 #else +/* + * In system mode we want L1_MAP to be based on ram offsets. + */ +#if HOST_LONG_BITS < TARGET_PHYS_ADDR_SPACE_BITS +# define L1_MAP_ADDR_SPACE_BITS HOST_LONG_BITS +#else +# define L1_MAP_ADDR_SPACE_BITS TARGET_PHYS_ADDR_SPACE_BITS +#endif + +/* Size of the L2 (and L3, etc) page tables. */ +#define V_L2_BITS 10 +#define V_L2_SIZE (1 << V_L2_BITS) + +/* + * L1 Mapping properties + */ +static int v_l1_size; +static int v_l1_shift; +static int v_l2_levels; + +/* + * The bottom level has pointers to PageDesc, and is indexed by + * anything from 4 to (V_L2_BITS + 3) bits, depending on target page size. + */ +#define V_L1_MIN_BITS 4 +#define V_L1_MAX_BITS (V_L2_BITS + 3) +#define V_L1_MAX_SIZE (1 << V_L1_MAX_BITS) + +static void *l1_map[V_L1_MAX_SIZE]; + +void page_table_config_init(void) +{ + uint32_t v_l1_bits; + + assert(TARGET_PAGE_BITS); + /* The bits remaining after N lower levels of page tables. 
*/ + v_l1_bits =3D (L1_MAP_ADDR_SPACE_BITS - TARGET_PAGE_BITS) % V_L2_BITS; + if (v_l1_bits < V_L1_MIN_BITS) { + v_l1_bits +=3D V_L2_BITS; + } + + v_l1_size =3D 1 << v_l1_bits; + v_l1_shift =3D L1_MAP_ADDR_SPACE_BITS - TARGET_PAGE_BITS - v_l1_bits; + v_l2_levels =3D v_l1_shift / V_L2_BITS - 1; + + assert(v_l1_bits <=3D V_L1_MAX_BITS); + assert(v_l1_shift % V_L2_BITS =3D=3D 0); + assert(v_l2_levels >=3D 0); +} + +PageDesc *page_find_alloc(tb_page_addr_t index, bool alloc) +{ + PageDesc *pd; + void **lp; + int i; + + /* Level 1. Always allocated. */ + lp =3D l1_map + ((index >> v_l1_shift) & (v_l1_size - 1)); + + /* Level 2..N-1. */ + for (i =3D v_l2_levels; i > 0; i--) { + void **p =3D qatomic_rcu_read(lp); + + if (p =3D=3D NULL) { + void *existing; + + if (!alloc) { + return NULL; + } + p =3D g_new0(void *, V_L2_SIZE); + existing =3D qatomic_cmpxchg(lp, NULL, p); + if (unlikely(existing)) { + g_free(p); + p =3D existing; + } + } + + lp =3D p + ((index >> (i * V_L2_BITS)) & (V_L2_SIZE - 1)); + } + + pd =3D qatomic_rcu_read(lp); + if (pd =3D=3D NULL) { + void *existing; + + if (!alloc) { + return NULL; + } + pd =3D g_new0(PageDesc, V_L2_SIZE); +#ifndef CONFIG_USER_ONLY + { + int i; + + for (i =3D 0; i < V_L2_SIZE; i++) { + qemu_spin_init(&pd[i].lock); + } + } +#endif + existing =3D qatomic_cmpxchg(lp, NULL, pd); + if (unlikely(existing)) { +#ifndef CONFIG_USER_ONLY + { + int i; + + for (i =3D 0; i < V_L2_SIZE; i++) { + qemu_spin_destroy(&pd[i].lock); + } + } +#endif + g_free(pd); + pd =3D existing; + } + } + + return pd + (index & (V_L2_SIZE - 1)); +} =20 /* Set to NULL all the 'first_tb' fields in all PageDescs. 
*/ static void page_flush_tb_1(int level, void **lp) @@ -420,6 +535,17 @@ static void tb_phys_invalidate__locked(TranslationBloc= k *tb) qemu_thread_jit_execute(); } =20 +#ifdef CONFIG_USER_ONLY +static inline void page_lock_pair(PageDesc **ret_p1, tb_page_addr_t phys1, + PageDesc **ret_p2, tb_page_addr_t phys2, + bool alloc) +{ + *ret_p1 =3D NULL; + *ret_p2 =3D NULL; +} +static inline void page_lock_tb(const TranslationBlock *tb) { } +static inline void page_unlock_tb(const TranslationBlock *tb) { } +#else static void page_lock_pair(PageDesc **ret_p1, tb_page_addr_t phys1, PageDesc **ret_p2, tb_page_addr_t phys2, bool a= lloc) { @@ -460,10 +586,6 @@ static void page_lock_pair(PageDesc **ret_p1, tb_page_= addr_t phys1, } } =20 -#ifdef CONFIG_USER_ONLY -static inline void page_lock_tb(const TranslationBlock *tb) { } -static inline void page_unlock_tb(const TranslationBlock *tb) { } -#else /* lock the page(s) of a TB in the correct acquisition order */ static void page_lock_tb(const TranslationBlock *tb) { diff --git a/accel/tcg/translate-all.c b/accel/tcg/translate-all.c index 0f8f8e5bef..dec8eb2200 100644 --- a/accel/tcg/translate-all.c +++ b/accel/tcg/translate-all.c @@ -114,37 +114,8 @@ QEMU_BUILD_BUG_ON(CPU_TRACE_DSTATE_MAX_EVENTS > sizeof_field(TranslationBlock, trace_vcpu_dstate) * BITS_PER_BYTE); =20 -/* - * L1 Mapping properties - */ -int v_l1_size; -int v_l1_shift; -int v_l2_levels; - -void *l1_map[V_L1_MAX_SIZE]; - TBContext tb_ctx; =20 -static void page_table_config_init(void) -{ - uint32_t v_l1_bits; - - assert(TARGET_PAGE_BITS); - /* The bits remaining after N lower levels of page tables. 
*/ - v_l1_bits =3D (L1_MAP_ADDR_SPACE_BITS - TARGET_PAGE_BITS) % V_L2_BITS; - if (v_l1_bits < V_L1_MIN_BITS) { - v_l1_bits +=3D V_L2_BITS; - } - - v_l1_size =3D 1 << v_l1_bits; - v_l1_shift =3D L1_MAP_ADDR_SPACE_BITS - TARGET_PAGE_BITS - v_l1_bits; - v_l2_levels =3D v_l1_shift / V_L2_BITS - 1; - - assert(v_l1_bits <=3D V_L1_MAX_BITS); - assert(v_l1_shift % V_L2_BITS =3D=3D 0); - assert(v_l2_levels >=3D 0); -} - /* Encode VAL as a signed leb128 sequence at P. Return P incremented past the encoded value. */ static uint8_t *encode_sleb128(uint8_t *p, target_long val) @@ -389,72 +360,6 @@ void page_init(void) #endif } =20 -PageDesc *page_find_alloc(tb_page_addr_t index, bool alloc) -{ - PageDesc *pd; - void **lp; - int i; - - /* Level 1. Always allocated. */ - lp =3D l1_map + ((index >> v_l1_shift) & (v_l1_size - 1)); - - /* Level 2..N-1. */ - for (i =3D v_l2_levels; i > 0; i--) { - void **p =3D qatomic_rcu_read(lp); - - if (p =3D=3D NULL) { - void *existing; - - if (!alloc) { - return NULL; - } - p =3D g_new0(void *, V_L2_SIZE); - existing =3D qatomic_cmpxchg(lp, NULL, p); - if (unlikely(existing)) { - g_free(p); - p =3D existing; - } - } - - lp =3D p + ((index >> (i * V_L2_BITS)) & (V_L2_SIZE - 1)); - } - - pd =3D qatomic_rcu_read(lp); - if (pd =3D=3D NULL) { - void *existing; - - if (!alloc) { - return NULL; - } - pd =3D g_new0(PageDesc, V_L2_SIZE); -#ifndef CONFIG_USER_ONLY - { - int i; - - for (i =3D 0; i < V_L2_SIZE; i++) { - qemu_spin_init(&pd[i].lock); - } - } -#endif - existing =3D qatomic_cmpxchg(lp, NULL, pd); - if (unlikely(existing)) { -#ifndef CONFIG_USER_ONLY - { - int i; - - for (i =3D 0; i < V_L2_SIZE; i++) { - qemu_spin_destroy(&pd[i].lock); - } - } -#endif - g_free(pd); - pd =3D existing; - } - } - - return pd + (index & (V_L2_SIZE - 1)); -} - /* In user-mode page locks aren't used; mmap_lock is enough */ #ifdef CONFIG_USER_ONLY struct page_collection * --=20 2.34.1 From nobody Thu Oct 31 23:50:58 2024 Delivered-To: importer@patchew.org 
From: Richard Henderson <richard.henderson@linaro.org>
To: qemu-devel@nongnu.org
Cc: alex.bennee@linaro.org, laurent@vivier.eu
Subject: [PATCH v2 7/7] accel/tcg: Move remainder of page locking to tb-maint.c
Date: Thu, 27 Oct 2022 22:12:58 +1100
Message-Id: <20221027111258.348196-8-richard.henderson@linaro.org>
In-Reply-To: <20221027111258.348196-1-richard.henderson@linaro.org>
References: <20221027111258.348196-1-richard.henderson@linaro.org>
MIME-Version: 1.0
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: quoted-printable

The only things that still touch PageDesc in translate-all.c are some
locking routines related to tb-maint.c which have not yet been moved.
Do so now.
Move some code up in tb-maint.c as well, to untangle the maze of ifdefs, and allow a sensible final ordering. Move some declarations from exec/translate-all.h to internal.h, as they are only used within accel/tcg/. Reviewed-by: Alex Benn=C3=A9e Signed-off-by: Richard Henderson --- accel/tcg/internal.h | 68 ++----- include/exec/translate-all.h | 6 - accel/tcg/tb-maint.c | 352 +++++++++++++++++++++++++++++++++-- accel/tcg/translate-all.c | 301 ------------------------------ 4 files changed, 352 insertions(+), 375 deletions(-) diff --git a/accel/tcg/internal.h b/accel/tcg/internal.h index c6c9e02cfd..6ce1437c58 100644 --- a/accel/tcg/internal.h +++ b/accel/tcg/internal.h @@ -23,62 +23,28 @@ #define assert_memory_lock() tcg_debug_assert(have_mmap_lock()) #endif =20 -typedef struct PageDesc PageDesc; -#ifndef CONFIG_USER_ONLY -struct PageDesc { - QemuSpin lock; - /* list of TBs intersecting this ram page */ - uintptr_t first_tb; -}; - -PageDesc *page_find_alloc(tb_page_addr_t index, bool alloc); - -static inline PageDesc *page_find(tb_page_addr_t index) -{ - return page_find_alloc(index, false); -} - -void page_table_config_init(void); -#else -static inline void page_table_config_init(void) { } -#endif - -/* list iterators for lists of tagged pointers in TranslationBlock */ -#define TB_FOR_EACH_TAGGED(head, tb, n, field) \ - for (n =3D (head) & 1, tb =3D (TranslationBlock *)((head) & ~1); = \ - tb; tb =3D (TranslationBlock *)tb->field[n], n =3D (uintptr_t)tb = & 1, \ - tb =3D (TranslationBlock *)((uintptr_t)tb & ~1)) - -#define TB_FOR_EACH_JMP(head_tb, tb, n) \ - TB_FOR_EACH_TAGGED((head_tb)->jmp_list_head, tb, n, jmp_list_next) - -/* In user-mode page locks aren't used; mmap_lock is enough */ -#ifdef CONFIG_USER_ONLY -#define assert_page_locked(pd) tcg_debug_assert(have_mmap_lock()) -static inline void page_lock(PageDesc *pd) { } -static inline void page_unlock(PageDesc *pd) { } -#else -#ifdef CONFIG_DEBUG_TCG -void do_assert_page_locked(const PageDesc *pd, const char 
*file, int line); -#define assert_page_locked(pd) do_assert_page_locked(pd, __FILE__, __LINE_= _) -#else -#define assert_page_locked(pd) -#endif -void page_lock(PageDesc *pd); -void page_unlock(PageDesc *pd); - -/* TODO: For now, still shared with translate-all.c for system mode. */ -typedef int PageForEachNext; -#define PAGE_FOR_EACH_TB(start, end, pagedesc, tb, n) \ - TB_FOR_EACH_TAGGED((pagedesc)->first_tb, tb, n, page_next) - -#endif -#if !defined(CONFIG_USER_ONLY) && defined(CONFIG_DEBUG_TCG) +#if defined(CONFIG_SOFTMMU) && defined(CONFIG_DEBUG_TCG) void assert_no_pages_locked(void); #else static inline void assert_no_pages_locked(void) { } #endif =20 +#ifdef CONFIG_USER_ONLY +static inline void page_table_config_init(void) { } +#else +void page_table_config_init(void); +#endif + +#ifdef CONFIG_SOFTMMU +struct page_collection; +void tb_invalidate_phys_page_fast(struct page_collection *pages, + tb_page_addr_t start, int len, + uintptr_t retaddr); +struct page_collection *page_collection_lock(tb_page_addr_t start, + tb_page_addr_t end); +void page_collection_unlock(struct page_collection *set); +#endif /* CONFIG_SOFTMMU */ + TranslationBlock *tb_gen_code(CPUState *cpu, target_ulong pc, target_ulong cs_base, uint32_t flags, int cflags); diff --git a/include/exec/translate-all.h b/include/exec/translate-all.h index 3e9cb91565..88602ae8d8 100644 --- a/include/exec/translate-all.h +++ b/include/exec/translate-all.h @@ -23,12 +23,6 @@ =20 =20 /* translate-all.c */ -struct page_collection *page_collection_lock(tb_page_addr_t start, - tb_page_addr_t end); -void page_collection_unlock(struct page_collection *set); -void tb_invalidate_phys_page_fast(struct page_collection *pages, - tb_page_addr_t start, int len, - uintptr_t retaddr); void tb_invalidate_phys_page(tb_page_addr_t addr); void tb_check_watchpoint(CPUState *cpu, uintptr_t retaddr); =20 diff --git a/accel/tcg/tb-maint.c b/accel/tcg/tb-maint.c index 31d0a74aa9..8fe2d322db 100644 --- a/accel/tcg/tb-maint.c +++ 
b/accel/tcg/tb-maint.c @@ -30,6 +30,15 @@ #include "internal.h" =20 =20 +/* List iterators for lists of tagged pointers in TranslationBlock. */ +#define TB_FOR_EACH_TAGGED(head, tb, n, field) \ + for (n =3D (head) & 1, tb =3D (TranslationBlock *)((head) & ~1); = \ + tb; tb =3D (TranslationBlock *)tb->field[n], n =3D (uintptr_t)tb = & 1, \ + tb =3D (TranslationBlock *)((uintptr_t)tb & ~1)) + +#define TB_FOR_EACH_JMP(head_tb, tb, n) \ + TB_FOR_EACH_TAGGED((head_tb)->jmp_list_head, tb, n, jmp_list_next) + static bool tb_cmp(const void *ap, const void *bp) { const TranslationBlock *a =3D ap; @@ -51,7 +60,28 @@ void tb_htable_init(void) qht_init(&tb_ctx.htable, tb_cmp, CODE_GEN_HTABLE_SIZE, mode); } =20 +typedef struct PageDesc PageDesc; + #ifdef CONFIG_USER_ONLY + +/* + * In user-mode page locks aren't used; mmap_lock is enough. + */ +#define assert_page_locked(pd) tcg_debug_assert(have_mmap_lock()) + +static inline void page_lock_pair(PageDesc **ret_p1, tb_page_addr_t phys1, + PageDesc **ret_p2, tb_page_addr_t phys2, + bool alloc) +{ + *ret_p1 =3D NULL; + *ret_p2 =3D NULL; +} + +static inline void page_lock(PageDesc *pd) { } +static inline void page_unlock(PageDesc *pd) { } +static inline void page_lock_tb(const TranslationBlock *tb) { } +static inline void page_unlock_tb(const TranslationBlock *tb) { } + /* * For user-only, since we are protecting all of memory with a single lock, * and because the two pages of a TranslationBlock are always contiguous, @@ -157,6 +187,12 @@ static int v_l2_levels; =20 static void *l1_map[V_L1_MAX_SIZE]; =20 +struct PageDesc { + QemuSpin lock; + /* list of TBs intersecting this ram page */ + uintptr_t first_tb; +}; + void page_table_config_init(void) { uint32_t v_l1_bits; @@ -177,7 +213,7 @@ void page_table_config_init(void) assert(v_l2_levels >=3D 0); } =20 -PageDesc *page_find_alloc(tb_page_addr_t index, bool alloc) +static PageDesc *page_find_alloc(tb_page_addr_t index, bool alloc) { PageDesc *pd; void **lp; @@ -243,6 +279,303 @@ 
PageDesc *page_find_alloc(tb_page_addr_t index, bool = alloc) return pd + (index & (V_L2_SIZE - 1)); } =20 +static inline PageDesc *page_find(tb_page_addr_t index) +{ + return page_find_alloc(index, false); +} + +/** + * struct page_entry - page descriptor entry + * @pd: pointer to the &struct PageDesc of the page this entry represe= nts + * @index: page index of the page + * @locked: whether the page is locked + * + * This struct helps us keep track of the locked state of a page, without + * bloating &struct PageDesc. + * + * A page lock protects accesses to all fields of &struct PageDesc. + * + * See also: &struct page_collection. + */ +struct page_entry { + PageDesc *pd; + tb_page_addr_t index; + bool locked; +}; + +/** + * struct page_collection - tracks a set of pages (i.e. &struct page_entry= 's) + * @tree: Binary search tree (BST) of the pages, with key =3D=3D page in= dex + * @max: Pointer to the page in @tree with the highest page index + * + * To avoid deadlock we lock pages in ascending order of page index. + * When operating on a set of pages, we need to keep track of them so that + * we can lock them in order and also unlock them later. For this we colle= ct + * pages (i.e. &struct page_entry's) in a binary search @tree. Given that = the + * @tree implementation we use does not provide an O(1) operation to obtai= n the + * highest-ranked element, we use @max to keep track of the inserted page + * with the highest index. This is valuable because if a page is not in + * the tree and its index is higher than @max's, then we can lock it + * without breaking the locking order rule. + * + * Note on naming: 'struct page_set' would be shorter, but we already have= a few + * page_set_*() helpers, so page_collection is used instead to avoid confu= sion. + * + * See also: page_collection_lock(). 
+ */ +struct page_collection { + GTree *tree; + struct page_entry *max; +}; + +typedef int PageForEachNext; +#define PAGE_FOR_EACH_TB(start, end, pagedesc, tb, n) \ + TB_FOR_EACH_TAGGED((pagedesc)->first_tb, tb, n, page_next) + +#ifdef CONFIG_DEBUG_TCG + +static __thread GHashTable *ht_pages_locked_debug; + +static void ht_pages_locked_debug_init(void) +{ + if (ht_pages_locked_debug) { + return; + } + ht_pages_locked_debug =3D g_hash_table_new(NULL, NULL); +} + +static bool page_is_locked(const PageDesc *pd) +{ + PageDesc *found; + + ht_pages_locked_debug_init(); + found =3D g_hash_table_lookup(ht_pages_locked_debug, pd); + return !!found; +} + +static void page_lock__debug(PageDesc *pd) +{ + ht_pages_locked_debug_init(); + g_assert(!page_is_locked(pd)); + g_hash_table_insert(ht_pages_locked_debug, pd, pd); +} + +static void page_unlock__debug(const PageDesc *pd) +{ + bool removed; + + ht_pages_locked_debug_init(); + g_assert(page_is_locked(pd)); + removed =3D g_hash_table_remove(ht_pages_locked_debug, pd); + g_assert(removed); +} + +static void do_assert_page_locked(const PageDesc *pd, + const char *file, int line) +{ + if (unlikely(!page_is_locked(pd))) { + error_report("assert_page_lock: PageDesc %p not locked @ %s:%d", + pd, file, line); + abort(); + } +} +#define assert_page_locked(pd) do_assert_page_locked(pd, __FILE__, __LINE_= _) + +void assert_no_pages_locked(void) +{ + ht_pages_locked_debug_init(); + g_assert(g_hash_table_size(ht_pages_locked_debug) =3D=3D 0); +} + +#else /* !CONFIG_DEBUG_TCG */ + +static inline void page_lock__debug(const PageDesc *pd) { } +static inline void page_unlock__debug(const PageDesc *pd) { } +static inline void assert_page_locked(const PageDesc *pd) { } + +#endif /* CONFIG_DEBUG_TCG */ + +static void page_lock(PageDesc *pd) +{ + page_lock__debug(pd); + qemu_spin_lock(&pd->lock); +} + +static void page_unlock(PageDesc *pd) +{ + qemu_spin_unlock(&pd->lock); + page_unlock__debug(pd); +} + +static inline struct page_entry * 
+page_entry_new(PageDesc *pd, tb_page_addr_t index)
+{
+    struct page_entry *pe = g_malloc(sizeof(*pe));
+
+    pe->index = index;
+    pe->pd = pd;
+    pe->locked = false;
+    return pe;
+}
+
+static void page_entry_destroy(gpointer p)
+{
+    struct page_entry *pe = p;
+
+    g_assert(pe->locked);
+    page_unlock(pe->pd);
+    g_free(pe);
+}
+
+/* returns false on success */
+static bool page_entry_trylock(struct page_entry *pe)
+{
+    bool busy;
+
+    busy = qemu_spin_trylock(&pe->pd->lock);
+    if (!busy) {
+        g_assert(!pe->locked);
+        pe->locked = true;
+        page_lock__debug(pe->pd);
+    }
+    return busy;
+}
+
+static void do_page_entry_lock(struct page_entry *pe)
+{
+    page_lock(pe->pd);
+    g_assert(!pe->locked);
+    pe->locked = true;
+}
+
+static gboolean page_entry_lock(gpointer key, gpointer value, gpointer data)
+{
+    struct page_entry *pe = value;
+
+    do_page_entry_lock(pe);
+    return FALSE;
+}
+
+static gboolean page_entry_unlock(gpointer key, gpointer value, gpointer data)
+{
+    struct page_entry *pe = value;
+
+    if (pe->locked) {
+        pe->locked = false;
+        page_unlock(pe->pd);
+    }
+    return FALSE;
+}
+
+/*
+ * Trylock a page, and if successful, add the page to a collection.
+ * Returns true ("busy") if the page could not be locked; false otherwise.
+ */
+static bool page_trylock_add(struct page_collection *set, tb_page_addr_t addr)
+{
+    tb_page_addr_t index = addr >> TARGET_PAGE_BITS;
+    struct page_entry *pe;
+    PageDesc *pd;
+
+    pe = g_tree_lookup(set->tree, &index);
+    if (pe) {
+        return false;
+    }
+
+    pd = page_find(index);
+    if (pd == NULL) {
+        return false;
+    }
+
+    pe = page_entry_new(pd, index);
+    g_tree_insert(set->tree, &pe->index, pe);
+
+    /*
+     * If this is either (1) the first insertion or (2) a page whose index
+     * is higher than any other so far, just lock the page and move on.
+     */
+    if (set->max == NULL || pe->index > set->max->index) {
+        set->max = pe;
+        do_page_entry_lock(pe);
+        return false;
+    }
+    /*
+     * Try to acquire out-of-order lock; if busy, return busy so that we acquire
+     * locks in order.
+     */
+    return page_entry_trylock(pe);
+}
+
+static gint tb_page_addr_cmp(gconstpointer ap, gconstpointer bp, gpointer udata)
+{
+    tb_page_addr_t a = *(const tb_page_addr_t *)ap;
+    tb_page_addr_t b = *(const tb_page_addr_t *)bp;
+
+    if (a == b) {
+        return 0;
+    } else if (a < b) {
+        return -1;
+    }
+    return 1;
+}
+
+/*
+ * Lock a range of pages ([@start,@end[) as well as the pages of all
+ * intersecting TBs.
+ * Locking order: acquire locks in ascending order of page index.
+ */
+struct page_collection *
+page_collection_lock(tb_page_addr_t start, tb_page_addr_t end)
+{
+    struct page_collection *set = g_malloc(sizeof(*set));
+    tb_page_addr_t index;
+    PageDesc *pd;
+
+    start >>= TARGET_PAGE_BITS;
+    end >>= TARGET_PAGE_BITS;
+    g_assert(start <= end);
+
+    set->tree = g_tree_new_full(tb_page_addr_cmp, NULL, NULL,
+                                page_entry_destroy);
+    set->max = NULL;
+    assert_no_pages_locked();
+
+ retry:
+    g_tree_foreach(set->tree, page_entry_lock, NULL);
+
+    for (index = start; index <= end; index++) {
+        TranslationBlock *tb;
+        PageForEachNext n;
+
+        pd = page_find(index);
+        if (pd == NULL) {
+            continue;
+        }
+        if (page_trylock_add(set, index << TARGET_PAGE_BITS)) {
+            g_tree_foreach(set->tree, page_entry_unlock, NULL);
+            goto retry;
+        }
+        assert_page_locked(pd);
+        PAGE_FOR_EACH_TB(unused, unused, pd, tb, n) {
+            if (page_trylock_add(set, tb_page_addr0(tb)) ||
+                (tb_page_addr1(tb) != -1 &&
+                 page_trylock_add(set, tb_page_addr1(tb)))) {
+                /* drop all locks, and reacquire in order */
+                g_tree_foreach(set->tree, page_entry_unlock, NULL);
+                goto retry;
+            }
+        }
+    }
+    return set;
+}
+
+void page_collection_unlock(struct page_collection *set)
+{
+    /* entries are unlocked and freed via page_entry_destroy */
+    g_tree_destroy(set->tree);
+    g_free(set);
+}
+
 /* Set to NULL all the 'first_tb' fields in all PageDescs. */
 static void page_flush_tb_1(int level, void **lp)
 {
@@ -338,7 +671,6 @@ static void tb_page_remove(TranslationBlock *tb)
         tb_page_remove1(tb, pd);
     }
 }
-
 #endif
 
 /* flush all the translation blocks */
@@ -536,15 +868,6 @@ static void tb_phys_invalidate__locked(TranslationBlock *tb)
 }
 
 #ifdef CONFIG_USER_ONLY
-static inline void page_lock_pair(PageDesc **ret_p1, tb_page_addr_t phys1,
-                                  PageDesc **ret_p2, tb_page_addr_t phys2,
-                                  bool alloc)
-{
-    *ret_p1 = NULL;
-    *ret_p2 = NULL;
-}
-static inline void page_lock_tb(const TranslationBlock *tb) { }
-static inline void page_unlock_tb(const TranslationBlock *tb) { }
 #else
 static void page_lock_pair(PageDesc **ret_p1, tb_page_addr_t phys1,
                            PageDesc **ret_p2, tb_page_addr_t phys2, bool alloc)
@@ -756,8 +1079,7 @@ bool tb_invalidate_phys_page_unwind(tb_page_addr_t addr, uintptr_t pc)
 #else
 /*
  * @p must be non-NULL.
- * user-mode: call with mmap_lock held.
- * !user-mode: call with all @pages locked.
+ * Call with all @pages locked.
  */
 static void
 tb_invalidate_phys_page_range__locked(struct page_collection *pages,
@@ -828,8 +1150,6 @@ tb_invalidate_phys_page_range__locked(struct page_collection *pages,
 /*
  * Invalidate all TBs which intersect with the target physical
  * address page @addr.
- *
- * Called with mmap_lock held for user-mode emulation
  */
 void tb_invalidate_phys_page(tb_page_addr_t addr)
 {
@@ -855,8 +1175,6 @@ void tb_invalidate_phys_page(tb_page_addr_t addr)
  * 'is_cpu_write_access' should be true if called from a real cpu write
  * access: the virtual CPU will exit the current TB if code is modified inside
  * this TB.
- *
- * Called with mmap_lock held for user-mode emulation.
  */
 void tb_invalidate_phys_range(tb_page_addr_t start, tb_page_addr_t end)
 {
diff --git a/accel/tcg/translate-all.c b/accel/tcg/translate-all.c
index dec8eb2200..db0b70037a 100644
--- a/accel/tcg/translate-all.c
+++ b/accel/tcg/translate-all.c
@@ -63,52 +63,6 @@
 #include "tb-context.h"
 #include "internal.h"
 
-/* make various TB consistency checks */
-
-/**
- * struct page_entry - page descriptor entry
- * @pd:     pointer to the &struct PageDesc of the page this entry represents
- * @index:  page index of the page
- * @locked: whether the page is locked
- *
- * This struct helps us keep track of the locked state of a page, without
- * bloating &struct PageDesc.
- *
- * A page lock protects accesses to all fields of &struct PageDesc.
- *
- * See also: &struct page_collection.
- */
-struct page_entry {
-    PageDesc *pd;
-    tb_page_addr_t index;
-    bool locked;
-};
-
-/**
- * struct page_collection - tracks a set of pages (i.e. &struct page_entry's)
- * @tree:   Binary search tree (BST) of the pages, with key == page index
- * @max:    Pointer to the page in @tree with the highest page index
- *
- * To avoid deadlock we lock pages in ascending order of page index.
- * When operating on a set of pages, we need to keep track of them so that
- * we can lock them in order and also unlock them later. For this we collect
- * pages (i.e. &struct page_entry's) in a binary search @tree. Given that the
- * @tree implementation we use does not provide an O(1) operation to obtain the
- * highest-ranked element, we use @max to keep track of the inserted page
- * with the highest index. This is valuable because if a page is not in
- * the tree and its index is higher than @max's, then we can lock it
- * without breaking the locking order rule.
- *
- * Note on naming: 'struct page_set' would be shorter, but we already have a few
- * page_set_*() helpers, so page_collection is used instead to avoid confusion.
- *
- * See also: page_collection_lock().
- */
-struct page_collection {
-    GTree *tree;
-    struct page_entry *max;
-};
-
 /* Make sure all possible CPU event bits fit in tb->trace_vcpu_dstate */
 QEMU_BUILD_BUG_ON(CPU_TRACE_DSTATE_MAX_EVENTS >
                   sizeof_field(TranslationBlock, trace_vcpu_dstate)
@@ -360,261 +314,6 @@ void page_init(void)
 #endif
 }
 
-/* In user-mode page locks aren't used; mmap_lock is enough */
-#ifdef CONFIG_USER_ONLY
-struct page_collection *
-page_collection_lock(tb_page_addr_t start, tb_page_addr_t end)
-{
-    return NULL;
-}
-
-void page_collection_unlock(struct page_collection *set)
-{ }
-#else /* !CONFIG_USER_ONLY */
-
-#ifdef CONFIG_DEBUG_TCG
-
-static __thread GHashTable *ht_pages_locked_debug;
-
-static void ht_pages_locked_debug_init(void)
-{
-    if (ht_pages_locked_debug) {
-        return;
-    }
-    ht_pages_locked_debug = g_hash_table_new(NULL, NULL);
-}
-
-static bool page_is_locked(const PageDesc *pd)
-{
-    PageDesc *found;
-
-    ht_pages_locked_debug_init();
-    found = g_hash_table_lookup(ht_pages_locked_debug, pd);
-    return !!found;
-}
-
-static void page_lock__debug(PageDesc *pd)
-{
-    ht_pages_locked_debug_init();
-    g_assert(!page_is_locked(pd));
-    g_hash_table_insert(ht_pages_locked_debug, pd, pd);
-}
-
-static void page_unlock__debug(const PageDesc *pd)
-{
-    bool removed;
-
-    ht_pages_locked_debug_init();
-    g_assert(page_is_locked(pd));
-    removed = g_hash_table_remove(ht_pages_locked_debug, pd);
-    g_assert(removed);
-}
-
-void do_assert_page_locked(const PageDesc *pd, const char *file, int line)
-{
-    if (unlikely(!page_is_locked(pd))) {
-        error_report("assert_page_lock: PageDesc %p not locked @ %s:%d",
-                     pd, file, line);
-        abort();
-    }
-}
-
-void assert_no_pages_locked(void)
-{
-    ht_pages_locked_debug_init();
-    g_assert(g_hash_table_size(ht_pages_locked_debug) == 0);
-}
-
-#else /* !CONFIG_DEBUG_TCG */
-
-static inline void page_lock__debug(const PageDesc *pd) { }
-static inline void page_unlock__debug(const PageDesc *pd) { }
-
-#endif /* CONFIG_DEBUG_TCG */
-
-void page_lock(PageDesc *pd)
-{
-    page_lock__debug(pd);
-    qemu_spin_lock(&pd->lock);
-}
-
-void page_unlock(PageDesc *pd)
-{
-    qemu_spin_unlock(&pd->lock);
-    page_unlock__debug(pd);
-}
-
-static inline struct page_entry *
-page_entry_new(PageDesc *pd, tb_page_addr_t index)
-{
-    struct page_entry *pe = g_malloc(sizeof(*pe));
-
-    pe->index = index;
-    pe->pd = pd;
-    pe->locked = false;
-    return pe;
-}
-
-static void page_entry_destroy(gpointer p)
-{
-    struct page_entry *pe = p;
-
-    g_assert(pe->locked);
-    page_unlock(pe->pd);
-    g_free(pe);
-}
-
-/* returns false on success */
-static bool page_entry_trylock(struct page_entry *pe)
-{
-    bool busy;
-
-    busy = qemu_spin_trylock(&pe->pd->lock);
-    if (!busy) {
-        g_assert(!pe->locked);
-        pe->locked = true;
-        page_lock__debug(pe->pd);
-    }
-    return busy;
-}
-
-static void do_page_entry_lock(struct page_entry *pe)
-{
-    page_lock(pe->pd);
-    g_assert(!pe->locked);
-    pe->locked = true;
-}
-
-static gboolean page_entry_lock(gpointer key, gpointer value, gpointer data)
-{
-    struct page_entry *pe = value;
-
-    do_page_entry_lock(pe);
-    return FALSE;
-}
-
-static gboolean page_entry_unlock(gpointer key, gpointer value, gpointer data)
-{
-    struct page_entry *pe = value;
-
-    if (pe->locked) {
-        pe->locked = false;
-        page_unlock(pe->pd);
-    }
-    return FALSE;
-}
-
-/*
- * Trylock a page, and if successful, add the page to a collection.
- * Returns true ("busy") if the page could not be locked; false otherwise.
- */
-static bool page_trylock_add(struct page_collection *set, tb_page_addr_t addr)
-{
-    tb_page_addr_t index = addr >> TARGET_PAGE_BITS;
-    struct page_entry *pe;
-    PageDesc *pd;
-
-    pe = g_tree_lookup(set->tree, &index);
-    if (pe) {
-        return false;
-    }
-
-    pd = page_find(index);
-    if (pd == NULL) {
-        return false;
-    }
-
-    pe = page_entry_new(pd, index);
-    g_tree_insert(set->tree, &pe->index, pe);
-
-    /*
-     * If this is either (1) the first insertion or (2) a page whose index
-     * is higher than any other so far, just lock the page and move on.
-     */
-    if (set->max == NULL || pe->index > set->max->index) {
-        set->max = pe;
-        do_page_entry_lock(pe);
-        return false;
-    }
-    /*
-     * Try to acquire out-of-order lock; if busy, return busy so that we acquire
-     * locks in order.
-     */
-    return page_entry_trylock(pe);
-}
-
-static gint tb_page_addr_cmp(gconstpointer ap, gconstpointer bp, gpointer udata)
-{
-    tb_page_addr_t a = *(const tb_page_addr_t *)ap;
-    tb_page_addr_t b = *(const tb_page_addr_t *)bp;
-
-    if (a == b) {
-        return 0;
-    } else if (a < b) {
-        return -1;
-    }
-    return 1;
-}
-
-/*
- * Lock a range of pages ([@start,@end[) as well as the pages of all
- * intersecting TBs.
- * Locking order: acquire locks in ascending order of page index.
- */
-struct page_collection *
-page_collection_lock(tb_page_addr_t start, tb_page_addr_t end)
-{
-    struct page_collection *set = g_malloc(sizeof(*set));
-    tb_page_addr_t index;
-    PageDesc *pd;
-
-    start >>= TARGET_PAGE_BITS;
-    end >>= TARGET_PAGE_BITS;
-    g_assert(start <= end);
-
-    set->tree = g_tree_new_full(tb_page_addr_cmp, NULL, NULL,
-                                page_entry_destroy);
-    set->max = NULL;
-    assert_no_pages_locked();
-
- retry:
-    g_tree_foreach(set->tree, page_entry_lock, NULL);
-
-    for (index = start; index <= end; index++) {
-        TranslationBlock *tb;
-        PageForEachNext n;
-
-        pd = page_find(index);
-        if (pd == NULL) {
-            continue;
-        }
-        if (page_trylock_add(set, index << TARGET_PAGE_BITS)) {
-            g_tree_foreach(set->tree, page_entry_unlock, NULL);
-            goto retry;
-        }
-        assert_page_locked(pd);
-        PAGE_FOR_EACH_TB(unused, unused, pd, tb, n) {
-            if (page_trylock_add(set, tb_page_addr0(tb)) ||
-                (tb_page_addr1(tb) != -1 &&
-                 page_trylock_add(set, tb_page_addr1(tb)))) {
-                /* drop all locks, and reacquire in order */
-                g_tree_foreach(set->tree, page_entry_unlock, NULL);
-                goto retry;
-            }
-        }
-    }
-    return set;
-}
-
-void page_collection_unlock(struct page_collection *set)
-{
-    /* entries are unlocked and freed via page_entry_destroy */
-    g_tree_destroy(set->tree);
-    g_free(set);
-}
-
-#endif /* !CONFIG_USER_ONLY */
-
 /* Called with mmap_lock held for user mode emulation. */
 TranslationBlock *tb_gen_code(CPUState *cpu, target_ulong pc,
                               target_ulong cs_base,
-- 
2.34.1