From: Richard Henderson
To: qemu-devel@nongnu.org
Cc: peter.maydell@linaro.org, Alex Bennée
Subject: [PULL 01/13] util: Add interval-tree.c
Date: Fri, 16 Dec 2022 10:52:53 -0800
Message-Id: <20221216185305.3429913-2-richard.henderson@linaro.org>
In-Reply-To: <20221216185305.3429913-1-richard.henderson@linaro.org>
References: <20221216185305.3429913-1-richard.henderson@linaro.org>

Copy and simplify the Linux kernel's interval_tree_generic.h,
instantiating for uint64_t.
Reviewed-by: Alex Bennée
Signed-off-by: Richard Henderson
---
 include/qemu/interval-tree.h    |  99 ++++
 tests/unit/test-interval-tree.c | 209 ++++++++
 util/interval-tree.c            | 882 ++++++++++++++++++++++++++++++++
 tests/unit/meson.build          |   1 +
 util/meson.build                |   1 +
 5 files changed, 1192 insertions(+)
 create mode 100644 include/qemu/interval-tree.h
 create mode 100644 tests/unit/test-interval-tree.c
 create mode 100644 util/interval-tree.c

diff --git a/include/qemu/interval-tree.h b/include/qemu/interval-tree.h
new file mode 100644
index 0000000000..25006debe8
--- /dev/null
+++ b/include/qemu/interval-tree.h
@@ -0,0 +1,99 @@
+/* SPDX-License-Identifier: GPL-2.0-or-later */
+/*
+ * Interval trees.
+ *
+ * Derived from include/linux/interval_tree.h and its dependencies.
+ */
+
+#ifndef QEMU_INTERVAL_TREE_H
+#define QEMU_INTERVAL_TREE_H
+
+/*
+ * For now, don't expose Linux Red-Black Trees separately, but retain the
+ * separate type definitions to keep the implementation sane, and allow
+ * the possibility of disentangling them later.
+ */
+typedef struct RBNode
+{
+    /* Encodes parent with color in the lsb. */
+    uintptr_t rb_parent_color;
+    struct RBNode *rb_right;
+    struct RBNode *rb_left;
+} RBNode;
+
+typedef struct RBRoot
+{
+    RBNode *rb_node;
+} RBRoot;
+
+typedef struct RBRootLeftCached {
+    RBRoot rb_root;
+    RBNode *rb_leftmost;
+} RBRootLeftCached;
+
+typedef struct IntervalTreeNode
+{
+    RBNode rb;
+
+    uint64_t start;    /* Start of interval */
+    uint64_t last;     /* Last location _in_ interval */
+    uint64_t subtree_last;
+} IntervalTreeNode;
+
+typedef RBRootLeftCached IntervalTreeRoot;
+
+/**
+ * interval_tree_is_empty
+ * @root: root of the tree.
+ *
+ * Returns true if the tree contains no nodes.
+ */
+static inline bool interval_tree_is_empty(const IntervalTreeRoot *root)
+{
+    return root->rb_root.rb_node == NULL;
+}
+
+/**
+ * interval_tree_insert
+ * @node: node to insert,
+ * @root: root of the tree.
+ *
+ * Insert @node into @root, and rebalance.
+ */
+void interval_tree_insert(IntervalTreeNode *node, IntervalTreeRoot *root);
+
+/**
+ * interval_tree_remove
+ * @node: node to remove,
+ * @root: root of the tree.
+ *
+ * Remove @node from @root, and rebalance.
+ */
+void interval_tree_remove(IntervalTreeNode *node, IntervalTreeRoot *root);
+
+/**
+ * interval_tree_iter_first:
+ * @root: root of the tree,
+ * @start, @last: the inclusive interval [start, last].
+ *
+ * Locate the "first" of a set of nodes within the tree at @root
+ * that overlap the interval, where "first" is sorted by start.
+ * Returns NULL if no overlap found.
+ */
+IntervalTreeNode *interval_tree_iter_first(IntervalTreeRoot *root,
+                                           uint64_t start, uint64_t last);
+
+/**
+ * interval_tree_iter_next:
+ * @node: previous search result
+ * @start, @last: the inclusive interval [start, last].
+ *
+ * Locate the "next" of a set of nodes within the tree that overlap the
+ * interval; @node is the result of a previous call to
+ * interval_tree_iter_{first,next}.  Returns NULL if @node was the last
+ * node in the set.
+ */
+IntervalTreeNode *interval_tree_iter_next(IntervalTreeNode *node,
+                                          uint64_t start, uint64_t last);
+
+#endif /* QEMU_INTERVAL_TREE_H */
diff --git a/tests/unit/test-interval-tree.c b/tests/unit/test-interval-tree.c
new file mode 100644
index 0000000000..119817a019
--- /dev/null
+++ b/tests/unit/test-interval-tree.c
@@ -0,0 +1,209 @@
+/*
+ * Test interval trees
+ *
+ * This work is licensed under the terms of the GNU LGPL, version 2 or later.
+ * See the COPYING.LIB file in the top-level directory.
+ *
+ */
+
+#include "qemu/osdep.h"
+#include "qemu/interval-tree.h"
+
+static IntervalTreeNode nodes[20];
+static IntervalTreeRoot root;
+
+static void rand_interval(IntervalTreeNode *n, uint64_t start, uint64_t last)
+{
+    gint32 s_ofs, l_ofs, l_max;
+
+    if (last - start > INT32_MAX) {
+        l_max = INT32_MAX;
+    } else {
+        l_max = last - start;
+    }
+    s_ofs = g_test_rand_int_range(0, l_max);
+    l_ofs = g_test_rand_int_range(s_ofs, l_max);
+
+    n->start = start + s_ofs;
+    n->last = start + l_ofs;
+}
+
+static void test_empty(void)
+{
+    g_assert(root.rb_root.rb_node == NULL);
+    g_assert(root.rb_leftmost == NULL);
+    g_assert(interval_tree_iter_first(&root, 0, UINT64_MAX) == NULL);
+}
+
+static void test_find_one_point(void)
+{
+    /* Create a tree of a single node, which is the point [1,1]. */
+    nodes[0].start = 1;
+    nodes[0].last = 1;
+
+    interval_tree_insert(&nodes[0], &root);
+
+    g_assert(interval_tree_iter_first(&root, 0, 9) == &nodes[0]);
+    g_assert(interval_tree_iter_next(&nodes[0], 0, 9) == NULL);
+    g_assert(interval_tree_iter_first(&root, 0, 0) == NULL);
+    g_assert(interval_tree_iter_next(&nodes[0], 0, 0) == NULL);
+    g_assert(interval_tree_iter_first(&root, 0, 1) == &nodes[0]);
+    g_assert(interval_tree_iter_first(&root, 1, 1) == &nodes[0]);
+    g_assert(interval_tree_iter_first(&root, 1, 2) == &nodes[0]);
+    g_assert(interval_tree_iter_first(&root, 2, 2) == NULL);
+
+    interval_tree_remove(&nodes[0], &root);
+    g_assert(root.rb_root.rb_node == NULL);
+    g_assert(root.rb_leftmost == NULL);
+}
+
+static void test_find_two_point(void)
+{
+    IntervalTreeNode *find0, *find1;
+
+    /* Create a tree of two nodes, which are both the point [1,1].
+     */
+    nodes[0].start = 1;
+    nodes[0].last = 1;
+    nodes[1] = nodes[0];
+
+    interval_tree_insert(&nodes[0], &root);
+    interval_tree_insert(&nodes[1], &root);
+
+    find0 = interval_tree_iter_first(&root, 0, 9);
+    g_assert(find0 == &nodes[0] || find0 == &nodes[1]);
+
+    find1 = interval_tree_iter_next(find0, 0, 9);
+    g_assert(find1 == &nodes[0] || find1 == &nodes[1]);
+    g_assert(find0 != find1);
+
+    interval_tree_remove(&nodes[1], &root);
+
+    g_assert(interval_tree_iter_first(&root, 0, 9) == &nodes[0]);
+    g_assert(interval_tree_iter_next(&nodes[0], 0, 9) == NULL);
+
+    interval_tree_remove(&nodes[0], &root);
+}
+
+static void test_find_one_range(void)
+{
+    /* Create a tree of a single node, which is the range [1,8]. */
+    nodes[0].start = 1;
+    nodes[0].last = 8;
+
+    interval_tree_insert(&nodes[0], &root);
+
+    g_assert(interval_tree_iter_first(&root, 0, 9) == &nodes[0]);
+    g_assert(interval_tree_iter_next(&nodes[0], 0, 9) == NULL);
+    g_assert(interval_tree_iter_first(&root, 0, 0) == NULL);
+    g_assert(interval_tree_iter_first(&root, 0, 1) == &nodes[0]);
+    g_assert(interval_tree_iter_first(&root, 1, 1) == &nodes[0]);
+    g_assert(interval_tree_iter_first(&root, 4, 6) == &nodes[0]);
+    g_assert(interval_tree_iter_first(&root, 8, 8) == &nodes[0]);
+    g_assert(interval_tree_iter_first(&root, 9, 9) == NULL);
+
+    interval_tree_remove(&nodes[0], &root);
+}
+
+static void test_find_one_range_many(void)
+{
+    int i;
+
+    /*
+     * Create a tree of many nodes in [0,99] and [200,299],
+     * but only one node with exactly [110,190].
+     */
+    nodes[0].start = 110;
+    nodes[0].last = 190;
+
+    for (i = 1; i < ARRAY_SIZE(nodes) / 2; ++i) {
+        rand_interval(&nodes[i], 0, 99);
+    }
+    for (; i < ARRAY_SIZE(nodes); ++i) {
+        rand_interval(&nodes[i], 200, 299);
+    }
+
+    for (i = 0; i < ARRAY_SIZE(nodes); ++i) {
+        interval_tree_insert(&nodes[i], &root);
+    }
+
+    /* Test that we find exactly the one node.
+     */
+    g_assert(interval_tree_iter_first(&root, 100, 199) == &nodes[0]);
+    g_assert(interval_tree_iter_next(&nodes[0], 100, 199) == NULL);
+    g_assert(interval_tree_iter_first(&root, 100, 109) == NULL);
+    g_assert(interval_tree_iter_first(&root, 100, 110) == &nodes[0]);
+    g_assert(interval_tree_iter_first(&root, 111, 120) == &nodes[0]);
+    g_assert(interval_tree_iter_first(&root, 111, 199) == &nodes[0]);
+    g_assert(interval_tree_iter_first(&root, 190, 199) == &nodes[0]);
+    g_assert(interval_tree_iter_first(&root, 192, 199) == NULL);
+
+    /*
+     * Test that if there are multiple matches, we return the one
+     * with the minimal start.
+     */
+    g_assert(interval_tree_iter_first(&root, 100, 300) == &nodes[0]);
+
+    /* Test that we don't find it after it is removed. */
+    interval_tree_remove(&nodes[0], &root);
+    g_assert(interval_tree_iter_first(&root, 100, 199) == NULL);
+
+    for (i = 1; i < ARRAY_SIZE(nodes); ++i) {
+        interval_tree_remove(&nodes[i], &root);
+    }
+}
+
+static void test_find_many_range(void)
+{
+    IntervalTreeNode *find;
+    int i, n;
+
+    n = g_test_rand_int_range(ARRAY_SIZE(nodes) / 3, ARRAY_SIZE(nodes) / 2);
+
+    /*
+     * Create a fair few nodes in [2000,2999], with the others
+     * distributed around.
+     */
+    for (i = 0; i < n; ++i) {
+        rand_interval(&nodes[i], 2000, 2999);
+    }
+    for (; i < ARRAY_SIZE(nodes) * 2 / 3; ++i) {
+        rand_interval(&nodes[i], 1000, 1899);
+    }
+    for (; i < ARRAY_SIZE(nodes); ++i) {
+        rand_interval(&nodes[i], 3100, 3999);
+    }
+
+    for (i = 0; i < ARRAY_SIZE(nodes); ++i) {
+        interval_tree_insert(&nodes[i], &root);
+    }
+
+    /* Test that we find all of the nodes.
+     */
+    find = interval_tree_iter_first(&root, 2000, 2999);
+    for (i = 0; find != NULL; i++) {
+        find = interval_tree_iter_next(find, 2000, 2999);
+    }
+    g_assert_cmpint(i, ==, n);
+
+    g_assert(interval_tree_iter_first(&root, 0, 999) == NULL);
+    g_assert(interval_tree_iter_first(&root, 1900, 1999) == NULL);
+    g_assert(interval_tree_iter_first(&root, 3000, 3099) == NULL);
+    g_assert(interval_tree_iter_first(&root, 4000, UINT64_MAX) == NULL);
+
+    for (i = 0; i < ARRAY_SIZE(nodes); ++i) {
+        interval_tree_remove(&nodes[i], &root);
+    }
+}
+
+int main(int argc, char **argv)
+{
+    g_test_init(&argc, &argv, NULL);
+
+    g_test_add_func("/interval-tree/empty", test_empty);
+    g_test_add_func("/interval-tree/find-one-point", test_find_one_point);
+    g_test_add_func("/interval-tree/find-two-point", test_find_two_point);
+    g_test_add_func("/interval-tree/find-one-range", test_find_one_range);
+    g_test_add_func("/interval-tree/find-one-range-many",
+                    test_find_one_range_many);
+    g_test_add_func("/interval-tree/find-many-range", test_find_many_range);
+
+    return g_test_run();
+}
diff --git a/util/interval-tree.c b/util/interval-tree.c
new file mode 100644
index 0000000000..4c0baf108f
--- /dev/null
+++ b/util/interval-tree.c
@@ -0,0 +1,882 @@
+/* SPDX-License-Identifier: GPL-2.0-or-later */
+
+#include "qemu/osdep.h"
+#include "qemu/interval-tree.h"
+#include "qemu/atomic.h"
+
+/*
+ * Red Black Trees.
+ *
+ * For now, don't expose Linux Red-Black Trees separately, but retain the
+ * separate type definitions to keep the implementation sane, and allow
+ * the possibility of separating them later.
+ *
+ * Derived from include/linux/rbtree_augmented.h and its dependencies.
+ */
+
+/*
+ * red-black trees properties:  https://en.wikipedia.org/wiki/Rbtree
+ *
+ *  1) A node is either red or black
+ *  2) The root is black
+ *  3) All leaves (NULL) are black
+ *  4) Both children of every red node are black
+ *  5) Every simple path from root to leaves contains the same number
+ *     of black nodes.
+ *
+ *  4 and 5 give the O(log n) guarantee, since 4 implies you cannot have two
+ *  consecutive red nodes in a path and every red node is therefore followed by
+ *  a black.  So if B is the number of black nodes on every simple path (as per
+ *  5), then the longest possible path due to 4 is 2B.
+ *
+ *  We shall indicate color with case, where black nodes are uppercase and red
+ *  nodes will be lowercase.  Unknown color nodes shall be drawn as red within
+ *  parentheses and have some accompanying text comment.
+ *
+ * Notes on lockless lookups:
+ *
+ * All stores to the tree structure (rb_left and rb_right) must be done using
+ * WRITE_ONCE [qatomic_set for QEMU].  And we must not inadvertently cause
+ * (temporary) loops in the tree structure as seen in program order.
+ *
+ * These two requirements will allow lockless iteration of the tree -- not
+ * correct iteration mind you, tree rotations are not atomic so a lookup might
+ * miss entire subtrees.
+ *
+ * But they do guarantee that any such traversal will only see valid elements
+ * and that it will indeed complete -- does not get stuck in a loop.
+ *
+ * It also guarantees that if the lookup returns an element it is the 'correct'
+ * one.  But not returning an element does _NOT_ mean it's not present.
+ *
+ * NOTE:
+ *
+ * Stores to __rb_parent_color are not important for simple lookups so those
+ * are left undone as of now.  Nor did I check for loops involving parent
+ * pointers.
+ */
+
+typedef enum RBColor
+{
+    RB_RED,
+    RB_BLACK,
+} RBColor;
+
+typedef struct RBAugmentCallbacks {
+    void (*propagate)(RBNode *node, RBNode *stop);
+    void (*copy)(RBNode *old, RBNode *new);
+    void (*rotate)(RBNode *old, RBNode *new);
+} RBAugmentCallbacks;
+
+static inline RBNode *rb_parent(const RBNode *n)
+{
+    return (RBNode *)(n->rb_parent_color & ~1);
+}
+
+static inline RBNode *rb_red_parent(const RBNode *n)
+{
+    return (RBNode *)n->rb_parent_color;
+}
+
+static inline RBColor pc_color(uintptr_t pc)
+{
+    return (RBColor)(pc & 1);
+}
+
+static inline bool pc_is_red(uintptr_t pc)
+{
+    return pc_color(pc) == RB_RED;
+}
+
+static inline bool pc_is_black(uintptr_t pc)
+{
+    return !pc_is_red(pc);
+}
+
+static inline RBColor rb_color(const RBNode *n)
+{
+    return pc_color(n->rb_parent_color);
+}
+
+static inline bool rb_is_red(const RBNode *n)
+{
+    return pc_is_red(n->rb_parent_color);
+}
+
+static inline bool rb_is_black(const RBNode *n)
+{
+    return pc_is_black(n->rb_parent_color);
+}
+
+static inline void rb_set_black(RBNode *n)
+{
+    n->rb_parent_color |= RB_BLACK;
+}
+
+static inline void rb_set_parent_color(RBNode *n, RBNode *p, RBColor color)
+{
+    n->rb_parent_color = (uintptr_t)p | color;
+}
+
+static inline void rb_set_parent(RBNode *n, RBNode *p)
+{
+    rb_set_parent_color(n, p, rb_color(n));
+}
+
+static inline void rb_link_node(RBNode *node, RBNode *parent, RBNode **rb_link)
+{
+    node->rb_parent_color = (uintptr_t)parent;
+    node->rb_left = node->rb_right = NULL;
+
+    qatomic_set(rb_link, node);
+}
+
+static RBNode *rb_next(RBNode *node)
+{
+    RBNode *parent;
+
+    /* OMIT: if empty node, return null. */
+
+    /*
+     * If we have a right-hand child, go down and then left as far as we can.
+     */
+    if (node->rb_right) {
+        node = node->rb_right;
+        while (node->rb_left) {
+            node = node->rb_left;
+        }
+        return node;
+    }
+
+    /*
+     * No right-hand children.
+     * Everything down and left is smaller than us,
+     * so any 'next' node must be in the general direction of our parent.
+     * Go up the tree; any time the ancestor is a right-hand child of its
+     * parent, keep going up.  First time it's a left-hand child of its
+     * parent, said parent is our 'next' node.
+     */
+    while ((parent = rb_parent(node)) && node == parent->rb_right) {
+        node = parent;
+    }
+
+    return parent;
+}
+
+static inline void rb_change_child(RBNode *old, RBNode *new,
+                                   RBNode *parent, RBRoot *root)
+{
+    if (!parent) {
+        qatomic_set(&root->rb_node, new);
+    } else if (parent->rb_left == old) {
+        qatomic_set(&parent->rb_left, new);
+    } else {
+        qatomic_set(&parent->rb_right, new);
+    }
+}
+
+static inline void rb_rotate_set_parents(RBNode *old, RBNode *new,
+                                         RBRoot *root, RBColor color)
+{
+    RBNode *parent = rb_parent(old);
+
+    new->rb_parent_color = old->rb_parent_color;
+    rb_set_parent_color(old, new, color);
+    rb_change_child(old, new, parent, root);
+}
+
+static void rb_insert_augmented(RBNode *node, RBRoot *root,
+                                const RBAugmentCallbacks *augment)
+{
+    RBNode *parent = rb_red_parent(node), *gparent, *tmp;
+
+    while (true) {
+        /*
+         * Loop invariant: node is red.
+         */
+        if (unlikely(!parent)) {
+            /*
+             * The inserted node is root.  Either this is the first node, or
+             * we recursed at Case 1 below and are no longer violating 4).
+             */
+            rb_set_parent_color(node, NULL, RB_BLACK);
+            break;
+        }
+
+        /*
+         * If there is a black parent, we are done.  Otherwise, take some
+         * corrective action as, per 4), we don't want a red root or two
+         * consecutive red nodes.
+         */
+        if (rb_is_black(parent)) {
+            break;
+        }
+
+        gparent = rb_red_parent(parent);
+
+        tmp = gparent->rb_right;
+        if (parent != tmp) {    /* parent == gparent->rb_left */
+            if (tmp && rb_is_red(tmp)) {
+                /*
+                 * Case 1 - node's uncle is red (color flips).
+                 *
+                 *       G            g
+                 *      / \          / \
+                 *     p   u  -->   P   U
+                 *    /            /
+                 *   n            n
+                 *
+                 * However, since g's parent might be red, and 4) does not
+                 * allow this, we need to recurse at g.
+                 */
+                rb_set_parent_color(tmp, gparent, RB_BLACK);
+                rb_set_parent_color(parent, gparent, RB_BLACK);
+                node = gparent;
+                parent = rb_parent(node);
+                rb_set_parent_color(node, parent, RB_RED);
+                continue;
+            }
+
+            tmp = parent->rb_right;
+            if (node == tmp) {
+                /*
+                 * Case 2 - node's uncle is black and node is
+                 * the parent's right child (left rotate at parent).
+                 *
+                 *      G             G
+                 *     / \           / \
+                 *    p   U  -->    n   U
+                 *     \           /
+                 *      n         p
+                 *
+                 * This still leaves us in violation of 4), the
+                 * continuation into Case 3 will fix that.
+                 */
+                tmp = node->rb_left;
+                qatomic_set(&parent->rb_right, tmp);
+                qatomic_set(&node->rb_left, parent);
+                if (tmp) {
+                    rb_set_parent_color(tmp, parent, RB_BLACK);
+                }
+                rb_set_parent_color(parent, node, RB_RED);
+                augment->rotate(parent, node);
+                parent = node;
+                tmp = node->rb_right;
+            }
+
+            /*
+             * Case 3 - node's uncle is black and node is
+             * the parent's left child (right rotate at gparent).
+             *
+             *        G           P
+             *       / \         / \
+             *      p   U  -->  n   g
+             *     /                 \
+             *    n                   U
+             */
+            qatomic_set(&gparent->rb_left, tmp); /* == parent->rb_right */
+            qatomic_set(&parent->rb_right, gparent);
+            if (tmp) {
+                rb_set_parent_color(tmp, gparent, RB_BLACK);
+            }
+            rb_rotate_set_parents(gparent, parent, root, RB_RED);
+            augment->rotate(gparent, parent);
+            break;
+        } else {
+            tmp = gparent->rb_left;
+            if (tmp && rb_is_red(tmp)) {
+                /* Case 1 - color flips */
+                rb_set_parent_color(tmp, gparent, RB_BLACK);
+                rb_set_parent_color(parent, gparent, RB_BLACK);
+                node = gparent;
+                parent = rb_parent(node);
+                rb_set_parent_color(node, parent, RB_RED);
+                continue;
+            }
+
+            tmp = parent->rb_left;
+            if (node == tmp) {
+                /* Case 2 - right rotate at parent */
+                tmp = node->rb_right;
+                qatomic_set(&parent->rb_left, tmp);
+                qatomic_set(&node->rb_right, parent);
+                if (tmp) {
+                    rb_set_parent_color(tmp, parent, RB_BLACK);
+                }
+                rb_set_parent_color(parent, node, RB_RED);
+                augment->rotate(parent, node);
+                parent = node;
+                tmp = node->rb_left;
+            }
+
+            /* Case 3 - left rotate at gparent */
+            qatomic_set(&gparent->rb_right, tmp); /* == parent->rb_left */
+            qatomic_set(&parent->rb_left, gparent);
+            if (tmp) {
+                rb_set_parent_color(tmp, gparent, RB_BLACK);
+            }
+            rb_rotate_set_parents(gparent, parent, root, RB_RED);
+            augment->rotate(gparent, parent);
+            break;
+        }
+    }
+}
+
+static void rb_insert_augmented_cached(RBNode *node,
+                                       RBRootLeftCached *root, bool newleft,
+                                       const RBAugmentCallbacks *augment)
+{
+    if (newleft) {
+        root->rb_leftmost = node;
+    }
+    rb_insert_augmented(node, &root->rb_root, augment);
+}
+
+static void rb_erase_color(RBNode *parent, RBRoot *root,
+                           const RBAugmentCallbacks *augment)
+{
+    RBNode *node = NULL, *sibling, *tmp1, *tmp2;
+
+    while (true) {
+        /*
+         * Loop invariants:
+         * - node is black (or NULL on first iteration)
+         * - node is not the root (parent is not NULL)
+         * - All leaf paths going through parent and node have a
+         *   black node count that is 1 lower
+         *   than other leaf paths.
+         */
+        sibling = parent->rb_right;
+        if (node != sibling) {    /* node == parent->rb_left */
+            if (rb_is_red(sibling)) {
+                /*
+                 * Case 1 - left rotate at parent
+                 *
+                 *     P               S
+                 *    / \             / \
+                 *   N   s    -->    p   Sr
+                 *      / \         / \
+                 *     Sl  Sr      N   Sl
+                 */
+                tmp1 = sibling->rb_left;
+                qatomic_set(&parent->rb_right, tmp1);
+                qatomic_set(&sibling->rb_left, parent);
+                rb_set_parent_color(tmp1, parent, RB_BLACK);
+                rb_rotate_set_parents(parent, sibling, root, RB_RED);
+                augment->rotate(parent, sibling);
+                sibling = tmp1;
+            }
+            tmp1 = sibling->rb_right;
+            if (!tmp1 || rb_is_black(tmp1)) {
+                tmp2 = sibling->rb_left;
+                if (!tmp2 || rb_is_black(tmp2)) {
+                    /*
+                     * Case 2 - sibling color flip
+                     * (p could be either color here)
+                     *
+                     *    (p)           (p)
+                     *    / \           / \
+                     *   N   S    -->  N   s
+                     *      / \           / \
+                     *     Sl  Sr        Sl  Sr
+                     *
+                     * This leaves us violating 5) which
+                     * can be fixed by flipping p to black
+                     * if it was red, or by recursing at p.
+                     * p is red when coming from Case 1.
+                     */
+                    rb_set_parent_color(sibling, parent, RB_RED);
+                    if (rb_is_red(parent)) {
+                        rb_set_black(parent);
+                    } else {
+                        node = parent;
+                        parent = rb_parent(node);
+                        if (parent) {
+                            continue;
+                        }
+                    }
+                    break;
+                }
+                /*
+                 * Case 3 - right rotate at sibling
+                 * (p could be either color here)
+                 *
+                 *   (p)           (p)
+                 *   / \           / \
+                 *  N   S    -->  N   sl
+                 *     / \             \
+                 *    sl  Sr            S
+                 *                       \
+                 *                        Sr
+                 *
+                 * Note: p might be red, and then both
+                 * p and sl are red after rotation (which
+                 * breaks property 4).
+                 * This is fixed in
+                 * Case 4 (in rb_rotate_set_parents(),
+                 * which sets sl to the color of p
+                 * and sets p RB_BLACK):
+                 *
+                 *   (p)            (sl)
+                 *   / \            /  \
+                 *  N   sl   -->   P    S
+                 *       \        /      \
+                 *        S      N        Sr
+                 *         \
+                 *          Sr
+                 */
+                tmp1 = tmp2->rb_right;
+                qatomic_set(&sibling->rb_left, tmp1);
+                qatomic_set(&tmp2->rb_right, sibling);
+                qatomic_set(&parent->rb_right, tmp2);
+                if (tmp1) {
+                    rb_set_parent_color(tmp1, sibling, RB_BLACK);
+                }
+                augment->rotate(sibling, tmp2);
+                tmp1 = sibling;
+                sibling = tmp2;
+            }
+            /*
+             * Case 4 - left rotate at parent + color flips
+             * (p and sl could be either color here.
+             *  After rotation, p becomes black, s acquires
+             *  p's color, and sl keeps its color)
+             *
+             *      (p)             (s)
+             *      / \             / \
+             *     N   S     -->   P   Sr
+             *        / \         / \
+             *      (sl) sr      N  (sl)
+             */
+            tmp2 = sibling->rb_left;
+            qatomic_set(&parent->rb_right, tmp2);
+            qatomic_set(&sibling->rb_left, parent);
+            rb_set_parent_color(tmp1, sibling, RB_BLACK);
+            if (tmp2) {
+                rb_set_parent(tmp2, parent);
+            }
+            rb_rotate_set_parents(parent, sibling, root, RB_BLACK);
+            augment->rotate(parent, sibling);
+            break;
+        } else {
+            sibling = parent->rb_left;
+            if (rb_is_red(sibling)) {
+                /* Case 1 - right rotate at parent */
+                tmp1 = sibling->rb_right;
+                qatomic_set(&parent->rb_left, tmp1);
+                qatomic_set(&sibling->rb_right, parent);
+                rb_set_parent_color(tmp1, parent, RB_BLACK);
+                rb_rotate_set_parents(parent, sibling, root, RB_RED);
+                augment->rotate(parent, sibling);
+                sibling = tmp1;
+            }
+            tmp1 = sibling->rb_left;
+            if (!tmp1 || rb_is_black(tmp1)) {
+                tmp2 = sibling->rb_right;
+                if (!tmp2 || rb_is_black(tmp2)) {
+                    /* Case 2 - sibling color flip */
+                    rb_set_parent_color(sibling, parent, RB_RED);
+                    if (rb_is_red(parent)) {
+                        rb_set_black(parent);
+                    } else {
+                        node = parent;
+                        parent = rb_parent(node);
+                        if (parent) {
+                            continue;
+                        }
+                    }
+                    break;
+                }
+                /* Case 3 - left rotate at sibling */
+                tmp1 = tmp2->rb_left;
+                qatomic_set(&sibling->rb_right, tmp1);
+                qatomic_set(&tmp2->rb_left, sibling);
+                qatomic_set(&parent->rb_left, tmp2);
+                if (tmp1) {
+                    rb_set_parent_color(tmp1, sibling, RB_BLACK);
+                }
+                augment->rotate(sibling, tmp2);
+                tmp1 = sibling;
+                sibling = tmp2;
+            }
+            /* Case 4 - right rotate at parent + color flips */
+            tmp2 = sibling->rb_right;
+            qatomic_set(&parent->rb_left, tmp2);
+            qatomic_set(&sibling->rb_right, parent);
+            rb_set_parent_color(tmp1, sibling, RB_BLACK);
+            if (tmp2) {
+                rb_set_parent(tmp2, parent);
+            }
+            rb_rotate_set_parents(parent, sibling, root, RB_BLACK);
+            augment->rotate(parent, sibling);
+            break;
+        }
+    }
+}
+
+static void rb_erase_augmented(RBNode *node, RBRoot *root,
+                               const RBAugmentCallbacks *augment)
+{
+    RBNode *child = node->rb_right;
+    RBNode *tmp = node->rb_left;
+    RBNode *parent, *rebalance;
+    uintptr_t pc;
+
+    if (!tmp) {
+        /*
+         * Case 1: node to erase has no more than 1 child (easy!)
+         *
+         * Note that if there is one child it must be red due to 5)
+         * and node must be black due to 4).  We adjust colors locally
+         * so as to bypass rb_erase_color() later on.
+         */
+        pc = node->rb_parent_color;
+        parent = rb_parent(node);
+        rb_change_child(node, child, parent, root);
+        if (child) {
+            child->rb_parent_color = pc;
+            rebalance = NULL;
+        } else {
+            rebalance = pc_is_black(pc) ?
+                parent : NULL;
+        }
+        tmp = parent;
+    } else if (!child) {
+        /* Still case 1, but this time the child is node->rb_left */
+        pc = node->rb_parent_color;
+        parent = rb_parent(node);
+        tmp->rb_parent_color = pc;
+        rb_change_child(node, tmp, parent, root);
+        rebalance = NULL;
+        tmp = parent;
+    } else {
+        RBNode *successor = child, *child2;
+        tmp = child->rb_left;
+        if (!tmp) {
+            /*
+             * Case 2: node's successor is its right child
+             *
+             *    (n)          (s)
+             *    / \          / \
+             *  (x) (s)  ->  (x) (c)
+             *        \
+             *        (c)
+             */
+            parent = successor;
+            child2 = successor->rb_right;
+
+            augment->copy(node, successor);
+        } else {
+            /*
+             * Case 3: node's successor is leftmost under
+             * node's right child subtree
+             *
+             *    (n)          (s)
+             *    / \          / \
+             *  (x) (y)  ->  (x) (y)
+             *      /            /
+             *    (p)          (p)
+             *    /            /
+             *  (s)          (c)
+             *    \
+             *    (c)
+             */
+            do {
+                parent = successor;
+                successor = tmp;
+                tmp = tmp->rb_left;
+            } while (tmp);
+            child2 = successor->rb_right;
+            qatomic_set(&parent->rb_left, child2);
+            qatomic_set(&successor->rb_right, child);
+            rb_set_parent(child, successor);
+
+            augment->copy(node, successor);
+            augment->propagate(parent, successor);
+        }
+
+        tmp = node->rb_left;
+        qatomic_set(&successor->rb_left, tmp);
+        rb_set_parent(tmp, successor);
+
+        pc = node->rb_parent_color;
+        tmp = rb_parent(node);
+        rb_change_child(node, successor, tmp, root);
+
+        if (child2) {
+            rb_set_parent_color(child2, parent, RB_BLACK);
+            rebalance = NULL;
+        } else {
+            rebalance = rb_is_black(successor) ? parent : NULL;
+        }
+        successor->rb_parent_color = pc;
+        tmp = successor;
+    }
+
+    augment->propagate(tmp, NULL);
+
+    if (rebalance) {
+        rb_erase_color(rebalance, root, augment);
+    }
+}
+
+static void rb_erase_augmented_cached(RBNode *node, RBRootLeftCached *root,
+                                      const RBAugmentCallbacks *augment)
+{
+    if (root->rb_leftmost == node) {
+        root->rb_leftmost = rb_next(node);
+    }
+    rb_erase_augmented(node, &root->rb_root, augment);
+}
+
+
+/*
+ * Interval trees.
+ *
+ * Derived from lib/interval_tree.c and its dependencies,
+ * especially include/linux/interval_tree_generic.h.
+ */
+
+#define rb_to_itree(N)  container_of(N, IntervalTreeNode, rb)
+
+static bool interval_tree_compute_max(IntervalTreeNode *node, bool exit)
+{
+    IntervalTreeNode *child;
+    uint64_t max = node->last;
+
+    if (node->rb.rb_left) {
+        child = rb_to_itree(node->rb.rb_left);
+        if (child->subtree_last > max) {
+            max = child->subtree_last;
+        }
+    }
+    if (node->rb.rb_right) {
+        child = rb_to_itree(node->rb.rb_right);
+        if (child->subtree_last > max) {
+            max = child->subtree_last;
+        }
+    }
+    if (exit && node->subtree_last == max) {
+        return true;
+    }
+    node->subtree_last = max;
+    return false;
+}
+
+static void interval_tree_propagate(RBNode *rb, RBNode *stop)
+{
+    while (rb != stop) {
+        IntervalTreeNode *node = rb_to_itree(rb);
+        if (interval_tree_compute_max(node, true)) {
+            break;
+        }
+        rb = rb_parent(&node->rb);
+    }
+}
+
+static void interval_tree_copy(RBNode *rb_old, RBNode *rb_new)
+{
+    IntervalTreeNode *old = rb_to_itree(rb_old);
+    IntervalTreeNode *new = rb_to_itree(rb_new);
+
+    new->subtree_last = old->subtree_last;
+}
+
+static void interval_tree_rotate(RBNode *rb_old, RBNode *rb_new)
+{
+    IntervalTreeNode *old = rb_to_itree(rb_old);
+    IntervalTreeNode *new = rb_to_itree(rb_new);
+
+    new->subtree_last = old->subtree_last;
+    interval_tree_compute_max(old, false);
+}
+
+static const RBAugmentCallbacks interval_tree_augment = {
+    .propagate = interval_tree_propagate,
+    .copy = interval_tree_copy,
+    .rotate = interval_tree_rotate,
+};
+
+/* Insert / remove interval nodes from the tree */
+void interval_tree_insert(IntervalTreeNode *node, IntervalTreeRoot *root)
+{
+    RBNode **link = &root->rb_root.rb_node, *rb_parent = NULL;
+    uint64_t start = node->start, last = node->last;
+    IntervalTreeNode *parent;
+    bool leftmost = true;
+
+    while (*link) {
+        rb_parent = *link;
+        parent =
            rb_to_itree(rb_parent);
+
+        if (parent->subtree_last < last) {
+            parent->subtree_last = last;
+        }
+        if (start < parent->start) {
+            link = &parent->rb.rb_left;
+        } else {
+            link = &parent->rb.rb_right;
+            leftmost = false;
+        }
+    }
+
+    node->subtree_last = last;
+    rb_link_node(&node->rb, rb_parent, link);
+    rb_insert_augmented_cached(&node->rb, root, leftmost,
+                               &interval_tree_augment);
+}
+
+void interval_tree_remove(IntervalTreeNode *node, IntervalTreeRoot *root)
+{
+    rb_erase_augmented_cached(&node->rb, root, &interval_tree_augment);
+}
+
+/*
+ * Iterate over intervals intersecting [start;last]
+ *
+ * Note that a node's interval intersects [start;last] iff:
+ *   Cond1: node->start <= last
+ * and
+ *   Cond2: start <= node->last
+ */
+
+static IntervalTreeNode *interval_tree_subtree_search(IntervalTreeNode *node,
+                                                      uint64_t start,
+                                                      uint64_t last)
+{
+    while (true) {
+        /*
+         * Loop invariant: start <= node->subtree_last
+         * (Cond2 is satisfied by one of the subtree nodes)
+         */
+        if (node->rb.rb_left) {
+            IntervalTreeNode *left = rb_to_itree(node->rb.rb_left);
+
+            if (start <= left->subtree_last) {
+                /*
+                 * Some nodes in left subtree satisfy Cond2.
+                 * Iterate to find the leftmost such node N.
+                 * If it also satisfies Cond1, that's the
+                 * match we are looking for.  Otherwise, there
+                 * is no matching interval as nodes to the
+                 * right of N can't satisfy Cond1 either.
+                 */
+                node = left;
+                continue;
+            }
+        }
+        if (node->start <= last) {          /* Cond1 */
+            if (start <= node->last) {      /* Cond2 */
+                return node; /* node is leftmost match */
+            }
+            if (node->rb.rb_right) {
+                node = rb_to_itree(node->rb.rb_right);
+                if (start <= node->subtree_last) {
+                    continue;
+                }
+            }
+        }
+        return NULL; /* no match */
+    }
+}
+
+IntervalTreeNode *interval_tree_iter_first(IntervalTreeRoot *root,
+                                           uint64_t start, uint64_t last)
+{
+    IntervalTreeNode *node, *leftmost;
+
+    if (!root->rb_root.rb_node) {
+        return NULL;
+    }
+
+    /*
+     * Fastpath range intersection/overlap between A: [a0, a1] and
+     * B: [b0, b1] is given by:
+     *
+     *         a0 <= b1 && b0 <= a1
+     *
+     * ... where A holds the lock range and B holds the smallest
+     * 'start' and largest 'last' in the tree.  For the latter, we
+     * rely on the root node, which by augmented interval tree
+     * property, holds the largest value in its last-in-subtree.
+     * This allows mitigating some of the tree walk overhead for
+     * non-intersecting ranges, maintained and consulted in O(1).
+     */
+    node = rb_to_itree(root->rb_root.rb_node);
+    if (node->subtree_last < start) {
+        return NULL;
+    }
+
+    leftmost = rb_to_itree(root->rb_leftmost);
+    if (leftmost->start > last) {
+        return NULL;
+    }
+
+    return interval_tree_subtree_search(node, start, last);
+}
+
+IntervalTreeNode *interval_tree_iter_next(IntervalTreeNode *node,
+                                          uint64_t start, uint64_t last)
+{
+    RBNode *rb = node->rb.rb_right, *prev;
+
+    while (true) {
+        /*
+         * Loop invariants:
+         *   Cond1: node->start <= last
+         *   rb == node->rb.rb_right
+         *
+         * First, search right subtree if suitable
+         */
+        if (rb) {
+            IntervalTreeNode *right = rb_to_itree(rb);
+
+            if (start <= right->subtree_last) {
+                return interval_tree_subtree_search(right, start, last);
+            }
+        }
+
+        /* Move up the tree until we come from a node's left child */
+        do {
+            rb = rb_parent(&node->rb);
+            if (!rb) {
+                return NULL;
+            }
+            prev = &node->rb;
+            node = rb_to_itree(rb);
+            rb = node->rb.rb_right;
+        } while (prev == rb);
+
+        /* Check if the node intersects [start;last] */
+        if (last < node->start) {  /* !Cond1 */
+            return NULL;
+        }
+        if (start <= node->last) { /* Cond2 */
+            return node;
+        }
+    }
+}
+
+/* Occasionally useful for calling from within the debugger. */
+#if 0
+static void debug_interval_tree_int(IntervalTreeNode *node,
+                                    const char *dir, int level)
+{
+    printf("%4d %*s %s [%" PRIu64 ",%" PRIu64 "] subtree_last:%" PRIu64 "\n",
+           level, level + 1, dir, rb_is_red(&node->rb) ?
"r" : "b", + node->start, node->last, node->subtree_last); + + if (node->rb.rb_left) { + debug_interval_tree_int(rb_to_itree(node->rb.rb_left), "<", level = + 1); + } + if (node->rb.rb_right) { + debug_interval_tree_int(rb_to_itree(node->rb.rb_right), ">", level= + 1); + } +} + +void debug_interval_tree(IntervalTreeNode *node); +void debug_interval_tree(IntervalTreeNode *node) +{ + if (node) { + debug_interval_tree_int(node, "*", 0); + } else { + printf("null\n"); + } +} +#endif diff --git a/tests/unit/meson.build b/tests/unit/meson.build index b497a41378..ffa444f432 100644 --- a/tests/unit/meson.build +++ b/tests/unit/meson.build @@ -47,6 +47,7 @@ tests =3D { 'ptimer-test': ['ptimer-test-stubs.c', meson.project_source_root() / 'hw= /core/ptimer.c'], 'test-qapi-util': [], 'test-smp-parse': [qom, meson.project_source_root() / 'hw/core/machine-s= mp.c'], + 'test-interval-tree': [], } =20 if have_system or have_tools diff --git a/util/meson.build b/util/meson.build index 25b9b61f98..d8d109ff84 100644 --- a/util/meson.build +++ b/util/meson.build @@ -57,6 +57,7 @@ util_ss.add(files('guest-random.c')) util_ss.add(files('yank.c')) util_ss.add(files('int128.c')) util_ss.add(files('memalign.c')) +util_ss.add(files('interval-tree.c')) =20 if have_user util_ss.add(files('selfmap.c')) --=20 2.34.1 From nobody Sun May 19 12:45:48 2024 Delivered-To: importer@patchew.org Authentication-Results: mx.zohomail.com; dkim=pass; spf=pass (zohomail.com: domain of gnu.org designates 209.51.188.17 as permitted sender) smtp.mailfrom=qemu-devel-bounces+importer=patchew.org@nongnu.org; dmarc=pass(p=none dis=none) header.from=linaro.org ARC-Seal: i=1; a=rsa-sha256; t=1671218072; cv=none; d=zohomail.com; s=zohoarc; b=OKD5/KtdsUnmuegc+kZeuK1v4viPQxSIeSrB3MIRiYZ82eAACaZTL1wD4pyOA9TDp3C63UpQR7NuPhe4te4DrTRq/ZZpSuJn4m9VupOJsiOkHgbI+2Tl+bBZ7t2vF3MsEMhThAEkliCr/EfcyZWrKonTU0WVEE/0pgXeqyZrxAU= ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=zohomail.com; s=zohoarc; t=1671218072; 
From: Richard Henderson
To: qemu-devel@nongnu.org
Cc: peter.maydell@linaro.org,
	Alex Bennée, Philippe Mathieu-Daudé
Subject: [PULL 02/13] accel/tcg: Rename page_flush_tb
Date: Fri, 16 Dec 2022 10:52:54 -0800
Message-Id: <20221216185305.3429913-3-richard.henderson@linaro.org>
X-Mailer: git-send-email 2.34.1
In-Reply-To: <20221216185305.3429913-1-richard.henderson@linaro.org>
References: <20221216185305.3429913-1-richard.henderson@linaro.org>

Rename to tb_remove_all, to remove the PageDesc "page" from the name,
and to avoid suggesting a "flush" in the icache sense.
Reviewed-by: Alex Bennée
Reviewed-by: Philippe Mathieu-Daudé
Signed-off-by: Richard Henderson
---
 accel/tcg/tb-maint.c | 10 +++++-----
 1 file changed, 5 insertions(+), 5 deletions(-)

diff --git a/accel/tcg/tb-maint.c b/accel/tcg/tb-maint.c
index 0cdb35548c..b5b90347ae 100644
--- a/accel/tcg/tb-maint.c
+++ b/accel/tcg/tb-maint.c
@@ -51,7 +51,7 @@ void tb_htable_init(void)
 }
 
 /* Set to NULL all the 'first_tb' fields in all PageDescs. */
-static void page_flush_tb_1(int level, void **lp)
+static void tb_remove_all_1(int level, void **lp)
 {
     int i;
 
@@ -70,17 +70,17 @@ static void page_flush_tb_1(int level, void **lp)
         void **pp = *lp;
 
         for (i = 0; i < V_L2_SIZE; ++i) {
-            page_flush_tb_1(level - 1, pp + i);
+            tb_remove_all_1(level - 1, pp + i);
         }
     }
 }
 
-static void page_flush_tb(void)
+static void tb_remove_all(void)
 {
     int i, l1_sz = v_l1_size;
 
     for (i = 0; i < l1_sz; i++) {
-        page_flush_tb_1(v_l2_levels, l1_map + i);
+        tb_remove_all_1(v_l2_levels, l1_map + i);
     }
 }
 
@@ -101,7 +101,7 @@ static void do_tb_flush(CPUState *cpu, run_on_cpu_data tb_flush_count)
     }
 
     qht_reset_size(&tb_ctx.htable, CODE_GEN_HTABLE_SIZE);
-    page_flush_tb();
+    tb_remove_all();
 
     tcg_region_reset_all();
     /* XXX: flush processor icache at this point if cache flush is expensive */
-- 
2.34.1
From: Richard Henderson
To: qemu-devel@nongnu.org
Cc: peter.maydell@linaro.org,
	Alex Bennée
Subject: [PULL 03/13] accel/tcg: Use interval tree for TBs in user-only mode
Date: Fri, 16 Dec 2022 10:52:55 -0800
Message-Id: <20221216185305.3429913-4-richard.henderson@linaro.org>
X-Mailer: git-send-email 2.34.1
In-Reply-To: <20221216185305.3429913-1-richard.henderson@linaro.org>
References: <20221216185305.3429913-1-richard.henderson@linaro.org>

Begin weaning user-only away from PageDesc.

Since, for user-only, all TB (and page) manipulation is done with
a single mutex, and there is no virtual/physical discontinuity to
split a TB across discontinuous pages, place all of the TBs into
a single IntervalTree.  This makes it trivial to find all of the
TBs intersecting a range.

Retain the existing PageDesc + linked list implementation for
system mode.  Move the portion of the implementation that overlaps
the new user-only code behind the common ifdef.
Reviewed-by: Alex Bennée
Signed-off-by: Richard Henderson
---
 accel/tcg/internal.h      |  16 +-
 include/exec/exec-all.h   |  43 ++++-
 accel/tcg/tb-maint.c      | 387 ++++++++++++++++++++++----------------
 accel/tcg/translate-all.c |   4 +-
 4 files changed, 279 insertions(+), 171 deletions(-)

diff --git a/accel/tcg/internal.h b/accel/tcg/internal.h
index cb13bade4f..bf1bf62e2a 100644
--- a/accel/tcg/internal.h
+++ b/accel/tcg/internal.h
@@ -24,14 +24,13 @@
 #endif
 
 typedef struct PageDesc {
-    /* list of TBs intersecting this ram page */
-    uintptr_t first_tb;
 #ifdef CONFIG_USER_ONLY
     unsigned long flags;
     void *target_data;
-#endif
-#ifdef CONFIG_SOFTMMU
+#else
     QemuSpin lock;
+    /* list of TBs intersecting this ram page */
+    uintptr_t first_tb;
 #endif
 } PageDesc;
 
@@ -69,9 +68,6 @@ static inline PageDesc *page_find(tb_page_addr_t index)
          tb; tb = (TranslationBlock *)tb->field[n], n = (uintptr_t)tb & 1, \
              tb = (TranslationBlock *)((uintptr_t)tb & ~1))
 
-#define PAGE_FOR_EACH_TB(pagedesc, tb, n) \
-    TB_FOR_EACH_TAGGED((pagedesc)->first_tb, tb, n, page_next)
-
 #define TB_FOR_EACH_JMP(head_tb, tb, n) \
     TB_FOR_EACH_TAGGED((head_tb)->jmp_list_head, tb, n, jmp_list_next)
 
@@ -89,6 +85,12 @@ void do_assert_page_locked(const PageDesc *pd, const char *file, int line);
 #endif
 void page_lock(PageDesc *pd);
 void page_unlock(PageDesc *pd);
+
+/* TODO: For now, still shared with translate-all.c for system mode.
+ */
+typedef int PageForEachNext;
+#define PAGE_FOR_EACH_TB(start, end, pagedesc, tb, n) \
+    TB_FOR_EACH_TAGGED((pagedesc)->first_tb, tb, n, page_next)
+
 #endif
 #if !defined(CONFIG_USER_ONLY) && defined(CONFIG_DEBUG_TCG)
 void assert_no_pages_locked(void);
diff --git a/include/exec/exec-all.h b/include/exec/exec-all.h
index 9b7bfbf09a..25e11b0a8d 100644
--- a/include/exec/exec-all.h
+++ b/include/exec/exec-all.h
@@ -24,6 +24,7 @@
 #ifdef CONFIG_TCG
 #include "exec/cpu_ldst.h"
 #endif
+#include "qemu/interval-tree.h"
 
 /* allow to see translation results - the slowdown should be negligible, so we leave it */
 #define DEBUG_DISAS
@@ -559,11 +560,20 @@ struct TranslationBlock {
 
     struct tb_tc tc;
 
-    /* first and second physical page containing code. The lower bit
-       of the pointer tells the index in page_next[].
-       The list is protected by the TB's page('s) lock(s) */
+    /*
+     * Track tb_page_addr_t intervals that intersect this TB.
+     * For user-only, the virtual addresses are always contiguous,
+     * and we use a unified interval tree.  For system, we use a
+     * linked list headed in each PageDesc.  Within the list, the lsb
+     * of the previous pointer tells the index of page_next[], and the
+     * list is protected by the PageDesc lock(s).
+     */
+#ifdef CONFIG_USER_ONLY
+    IntervalTreeNode itree;
+#else
     uintptr_t page_next[2];
     tb_page_addr_t page_addr[2];
+#endif
 
     /* jmp_lock placed here to fill a 4-byte hole. Its documentation is below */
     QemuSpin jmp_lock;
@@ -619,24 +629,51 @@ static inline uint32_t tb_cflags(const TranslationBlock *tb)
 
 static inline tb_page_addr_t tb_page_addr0(const TranslationBlock *tb)
 {
+#ifdef CONFIG_USER_ONLY
+    return tb->itree.start;
+#else
     return tb->page_addr[0];
+#endif
 }
 
 static inline tb_page_addr_t tb_page_addr1(const TranslationBlock *tb)
 {
+#ifdef CONFIG_USER_ONLY
+    tb_page_addr_t next = tb->itree.last & TARGET_PAGE_MASK;
+    return next == (tb->itree.start & TARGET_PAGE_MASK) ?
        -1 : next;
+#else
     return tb->page_addr[1];
+#endif
 }
 
 static inline void tb_set_page_addr0(TranslationBlock *tb,
                                      tb_page_addr_t addr)
 {
+#ifdef CONFIG_USER_ONLY
+    tb->itree.start = addr;
+    /*
+     * To begin, we record an interval of one byte.  When the translation
+     * loop encounters a second page, the interval will be extended to
+     * include the first byte of the second page, which is sufficient to
+     * allow tb_page_addr1() above to work properly.  The final corrected
+     * interval will be set by tb_page_add() from tb->size before the
+     * node is added to the interval tree.
+     */
+    tb->itree.last = addr;
+#else
     tb->page_addr[0] = addr;
+#endif
 }
 
 static inline void tb_set_page_addr1(TranslationBlock *tb,
                                      tb_page_addr_t addr)
 {
+#ifdef CONFIG_USER_ONLY
+    /* Extend the interval to the first byte of the second page.  See above. */
+    tb->itree.last = addr;
+#else
     tb->page_addr[1] = addr;
+#endif
 }
 
 /* current cflags for hashing/comparison */
diff --git a/accel/tcg/tb-maint.c b/accel/tcg/tb-maint.c
index b5b90347ae..8da2c64d87 100644
--- a/accel/tcg/tb-maint.c
+++ b/accel/tcg/tb-maint.c
@@ -18,6 +18,7 @@
  */
 
 #include "qemu/osdep.h"
+#include "qemu/interval-tree.h"
 #include "exec/cputlb.h"
 #include "exec/log.h"
 #include "exec/exec-all.h"
@@ -50,6 +51,75 @@ void tb_htable_init(void)
     qht_init(&tb_ctx.htable, tb_cmp, CODE_GEN_HTABLE_SIZE, mode);
 }
 
+#ifdef CONFIG_USER_ONLY
+/*
+ * For user-only, since we are protecting all of memory with a single lock,
+ * and because the two pages of a TranslationBlock are always contiguous,
+ * use a single data structure to record all TranslationBlocks.
+ */
+static IntervalTreeRoot tb_root;
+
+static void tb_remove_all(void)
+{
+    assert_memory_lock();
+    memset(&tb_root, 0, sizeof(tb_root));
+}
+
+/* Call with mmap_lock held.
+ */
+static void tb_record(TranslationBlock *tb, PageDesc *p1, PageDesc *p2)
+{
+    /* translator_loop() must have made all TB pages non-writable */
+    assert(!(p1->flags & PAGE_WRITE));
+    if (p2) {
+        assert(!(p2->flags & PAGE_WRITE));
+    }
+
+    assert_memory_lock();
+
+    tb->itree.last = tb->itree.start + tb->size - 1;
+    interval_tree_insert(&tb->itree, &tb_root);
+}
+
+/* Call with mmap_lock held. */
+static void tb_remove(TranslationBlock *tb)
+{
+    assert_memory_lock();
+    interval_tree_remove(&tb->itree, &tb_root);
+}
+
+/* TODO: For now, still shared with translate-all.c for system mode. */
+#define PAGE_FOR_EACH_TB(start, end, pagedesc, T, N) \
+    for (T = foreach_tb_first(start, end),           \
+         N = foreach_tb_next(T, start, end);         \
+         T != NULL;                                  \
+         T = N, N = foreach_tb_next(N, start, end))
+
+typedef TranslationBlock *PageForEachNext;
+
+static PageForEachNext foreach_tb_first(tb_page_addr_t start,
+                                        tb_page_addr_t end)
+{
+    IntervalTreeNode *n = interval_tree_iter_first(&tb_root, start, end - 1);
+    return n ? container_of(n, TranslationBlock, itree) : NULL;
+}
+
+static PageForEachNext foreach_tb_next(PageForEachNext tb,
+                                       tb_page_addr_t start,
+                                       tb_page_addr_t end)
+{
+    IntervalTreeNode *n;
+
+    if (tb) {
+        n = interval_tree_iter_next(&tb->itree, start, end - 1);
+        if (n) {
+            return container_of(n, TranslationBlock, itree);
+        }
+    }
+    return NULL;
+}
+
+#else
+
 /* Set to NULL all the 'first_tb' fields in all PageDescs. */
 static void tb_remove_all_1(int level, void **lp)
 {
@@ -84,6 +154,70 @@ static void tb_remove_all(void)
 }
 
+/*
+ * Add the tb in the target page and protect it if necessary.
+ * Called with @p->lock held.
+ */
+static inline void tb_page_add(PageDesc *p, TranslationBlock *tb,
+                               unsigned int n)
+{
+    bool page_already_protected;
+
+    assert_page_locked(p);
+
+    tb->page_next[n] = p->first_tb;
+    page_already_protected = p->first_tb != 0;
+    p->first_tb = (uintptr_t)tb | n;
+
+    /*
+     * If some code is already present, then the pages are already
+     * protected.  So we handle the case where only the first TB is
+     * allocated in a physical page.
+     */
+    if (!page_already_protected) {
+        tlb_protect_code(tb->page_addr[n] & TARGET_PAGE_MASK);
+    }
+}
+
+static void tb_record(TranslationBlock *tb, PageDesc *p1, PageDesc *p2)
+{
+    tb_page_add(p1, tb, 0);
+    if (unlikely(p2)) {
+        tb_page_add(p2, tb, 1);
+    }
+}
+
+static inline void tb_page_remove(PageDesc *pd, TranslationBlock *tb)
+{
+    TranslationBlock *tb1;
+    uintptr_t *pprev;
+    PageForEachNext n1;
+
+    assert_page_locked(pd);
+    pprev = &pd->first_tb;
+    PAGE_FOR_EACH_TB(unused, unused, pd, tb1, n1) {
+        if (tb1 == tb) {
+            *pprev = tb1->page_next[n1];
+            return;
+        }
+        pprev = &tb1->page_next[n1];
+    }
+    g_assert_not_reached();
+}
+
+static void tb_remove(TranslationBlock *tb)
+{
+    PageDesc *pd;
+
+    pd = page_find(tb->page_addr[0] >> TARGET_PAGE_BITS);
+    tb_page_remove(pd, tb);
+    if (unlikely(tb->page_addr[1] != -1)) {
+        pd = page_find(tb->page_addr[1] >> TARGET_PAGE_BITS);
+        tb_page_remove(pd, tb);
+    }
+}
+#endif /* CONFIG_USER_ONLY */
+
 /* flush all the translation blocks */
 static void do_tb_flush(CPUState *cpu, run_on_cpu_data tb_flush_count)
 {
@@ -128,28 +262,6 @@ void tb_flush(CPUState *cpu)
     }
 }
 
-/*
- * user-mode: call with mmap_lock held
- * !user-mode: call with @pd->lock held
- */
-static inline void tb_page_remove(PageDesc *pd, TranslationBlock *tb)
-{
-    TranslationBlock *tb1;
-    uintptr_t *pprev;
-    unsigned int n1;
-
-    assert_page_locked(pd);
-    pprev = &pd->first_tb;
-    PAGE_FOR_EACH_TB(pd, tb1, n1) {
-        if (tb1 == tb) {
-            *pprev = tb1->page_next[n1];
-            return;
-        }
-        pprev = &tb1->page_next[n1];
-    }
-    g_assert_not_reached();
-}
-
 /* remove @orig from its @n_orig-th jump list */
 static inline void tb_remove_from_jmp_list(TranslationBlock *orig, int n_orig)
 {
@@ -255,7 +367,6 @@ static void tb_jmp_cache_inval_tb(TranslationBlock *tb)
  */
 static void do_tb_phys_invalidate(TranslationBlock *tb, bool rm_from_page_list)
 {
-    PageDesc *p;
     uint32_t h;
     tb_page_addr_t phys_pc;
     uint32_t orig_cflags = tb_cflags(tb);
@@ -277,13 +388,7 @@ static void do_tb_phys_invalidate(TranslationBlock *tb, bool rm_from_page_list)
 
     /* remove the TB from the page list */
     if (rm_from_page_list) {
-        p = page_find(phys_pc >> TARGET_PAGE_BITS);
-        tb_page_remove(p, tb);
-        phys_pc = tb_page_addr1(tb);
-        if (phys_pc != -1) {
-            p = page_find(phys_pc >> TARGET_PAGE_BITS);
-            tb_page_remove(p, tb);
-        }
+        tb_remove(tb);
     }
 
     /* remove the TB from the hash list */
@@ -387,41 +492,6 @@ void tb_phys_invalidate(TranslationBlock *tb, tb_page_addr_t page_addr)
     }
 }
 
-/*
- * Add the tb in the target page and protect it if necessary.
- * Called with mmap_lock held for user-mode emulation.
- * Called with @p->lock held in !user-mode.
- */
-static inline void tb_page_add(PageDesc *p, TranslationBlock *tb,
-                               unsigned int n, tb_page_addr_t page_addr)
-{
-#ifndef CONFIG_USER_ONLY
-    bool page_already_protected;
-#endif
-
-    assert_page_locked(p);
-
-    tb->page_next[n] = p->first_tb;
-#ifndef CONFIG_USER_ONLY
-    page_already_protected = p->first_tb != (uintptr_t)NULL;
-#endif
-    p->first_tb = (uintptr_t)tb | n;
-
-#if defined(CONFIG_USER_ONLY)
-    /* translator_loop() must have made all TB pages non-writable */
-    assert(!(p->flags & PAGE_WRITE));
-#else
-    /*
-     * If some code is already present, then the pages are already
-     * protected. So we handle the case where only the first TB is
-     * allocated in a physical page.
-     */
-    if (!page_already_protected) {
-        tlb_protect_code(page_addr);
-    }
-#endif
-}
-
 /*
  * Add a new TB and link it to the physical page tables.
   phys_page2 is
  * (-1) to indicate that only one page contains the TB.
@@ -453,10 +523,7 @@ TranslationBlock *tb_link_page(TranslationBlock *tb, tb_page_addr_t phys_pc,
      * we can only insert TBs that are fully initialized.
      */
     page_lock_pair(&p, phys_pc, &p2, phys_page2, true);
-    tb_page_add(p, tb, 0, phys_pc);
-    if (p2) {
-        tb_page_add(p2, tb, 1, phys_page2);
-    }
+    tb_record(tb, p, p2);
 
     /* add in the hash table */
     h = tb_hash_func(phys_pc, (TARGET_TB_PCREL ? 0 : tb_pc(tb)),
@@ -465,10 +532,7 @@ TranslationBlock *tb_link_page(TranslationBlock *tb, tb_page_addr_t phys_pc,
 
     /* remove TB from the page(s) if we couldn't insert it */
     if (unlikely(existing_tb)) {
-        tb_page_remove(p, tb);
-        if (p2) {
-            tb_page_remove(p2, tb);
-        }
+        tb_remove(tb);
         tb = existing_tb;
     }
 
@@ -479,6 +543,87 @@ TranslationBlock *tb_link_page(TranslationBlock *tb, tb_page_addr_t phys_pc,
     return tb;
 }
 
+#ifdef CONFIG_USER_ONLY
+/*
+ * Invalidate all TBs which intersect with the target address range.
+ * Called with mmap_lock held for user-mode emulation.
+ * NOTE: this function must not be called while a TB is running.
+ */
+void tb_invalidate_phys_range(tb_page_addr_t start, tb_page_addr_t end)
+{
+    TranslationBlock *tb;
+    PageForEachNext n;
+
+    assert_memory_lock();
+
+    PAGE_FOR_EACH_TB(start, end, unused, tb, n) {
+        tb_phys_invalidate__locked(tb);
+    }
+}
+
+/*
+ * Invalidate all TBs which intersect with the target address page @addr.
+ * Called with mmap_lock held for user-mode emulation
+ * NOTE: this function must not be called while a TB is running.
+ */
+void tb_invalidate_phys_page(tb_page_addr_t addr)
+{
+    tb_page_addr_t start, end;
+
+    start = addr & TARGET_PAGE_MASK;
+    end = start + TARGET_PAGE_SIZE;
+    tb_invalidate_phys_range(start, end);
+}
+
+/*
+ * Called with mmap_lock held.  If pc is not 0 then it indicates the
+ * host PC of the faulting store instruction that caused this invalidate.
+ * Returns true if the caller needs to abort execution of the current
+ * TB (because it was modified by this store and the guest CPU has
+ * precise-SMC semantics).
+ */
+bool tb_invalidate_phys_page_unwind(tb_page_addr_t addr, uintptr_t pc)
+{
+    assert(pc != 0);
+#ifdef TARGET_HAS_PRECISE_SMC
+    assert_memory_lock();
+    {
+        TranslationBlock *current_tb = tcg_tb_lookup(pc);
+        bool current_tb_modified = false;
+        TranslationBlock *tb;
+        PageForEachNext n;
+
+        addr &= TARGET_PAGE_MASK;
+
+        PAGE_FOR_EACH_TB(addr, addr + TARGET_PAGE_SIZE, unused, tb, n) {
+            if (current_tb == tb &&
+                (tb_cflags(current_tb) & CF_COUNT_MASK) != 1) {
+                /*
+                 * If we are modifying the current TB, we must stop its
+                 * execution.  We could be more precise by checking that
+                 * the modification is after the current PC, but it would
+                 * require a specialized function to partially restore
+                 * the CPU state.
+                 */
+                current_tb_modified = true;
+                cpu_restore_state_from_tb(current_cpu, current_tb, pc);
+            }
+            tb_phys_invalidate__locked(tb);
+        }
+
+        if (current_tb_modified) {
+            /* Force execution of one insn next time. */
+            CPUState *cpu = current_cpu;
+            cpu->cflags_next_tb = 1 | CF_NOIRQ | curr_cflags(current_cpu);
+            return true;
+        }
+    }
+#else
+    tb_invalidate_phys_page(addr);
+#endif /* TARGET_HAS_PRECISE_SMC */
+    return false;
+}
+#else
 /*
  * @p must be non-NULL.
  * user-mode: call with mmap_lock held.
@@ -492,22 +637,17 @@ tb_invalidate_phys_page_range__locked(struct page_collection *pages,
 {
     TranslationBlock *tb;
     tb_page_addr_t tb_start, tb_end;
-    int n;
+    PageForEachNext n;
 #ifdef TARGET_HAS_PRECISE_SMC
-    CPUState *cpu = current_cpu;
-    bool current_tb_not_found = retaddr != 0;
     bool current_tb_modified = false;
-    TranslationBlock *current_tb = NULL;
+    TranslationBlock *current_tb = retaddr ? tcg_tb_lookup(retaddr) : NULL;
 #endif /* TARGET_HAS_PRECISE_SMC */
 
-    assert_page_locked(p);
-
     /*
      * We remove all the TBs in the range [start, end[.
* XXX: see if in some cases it could be faster to invalidate all the = code */ - PAGE_FOR_EACH_TB(p, tb, n) { - assert_page_locked(p); + PAGE_FOR_EACH_TB(start, end, p, tb, n) { /* NOTE: this is subtle as a TB may span two physical pages */ if (n =3D=3D 0) { /* NOTE: tb_end may be after the end of the page, but @@ -521,11 +661,6 @@ tb_invalidate_phys_page_range__locked(struct page_coll= ection *pages, } if (!(tb_end <=3D start || tb_start >=3D end)) { #ifdef TARGET_HAS_PRECISE_SMC - if (current_tb_not_found) { - current_tb_not_found =3D false; - /* now we have a real cpu fault */ - current_tb =3D tcg_tb_lookup(retaddr); - } if (current_tb =3D=3D tb && (tb_cflags(current_tb) & CF_COUNT_MASK) !=3D 1) { /* @@ -536,25 +671,25 @@ tb_invalidate_phys_page_range__locked(struct page_col= lection *pages, * restore the CPU state. */ current_tb_modified =3D true; - cpu_restore_state_from_tb(cpu, current_tb, retaddr); + cpu_restore_state_from_tb(current_cpu, current_tb, retaddr= ); } #endif /* TARGET_HAS_PRECISE_SMC */ tb_phys_invalidate__locked(tb); } } -#if !defined(CONFIG_USER_ONLY) + /* if no code remaining, no need to continue to use slow writes */ if (!p->first_tb) { tlb_unprotect_code(start); } -#endif + #ifdef TARGET_HAS_PRECISE_SMC if (current_tb_modified) { page_collection_unlock(pages); /* Force execution of one insn next time. 
*/ - cpu->cflags_next_tb =3D 1 | CF_NOIRQ | curr_cflags(cpu); + current_cpu->cflags_next_tb =3D 1 | CF_NOIRQ | curr_cflags(current= _cpu); mmap_unlock(); - cpu_loop_exit_noexc(cpu); + cpu_loop_exit_noexc(current_cpu); } #endif } @@ -571,8 +706,6 @@ void tb_invalidate_phys_page(tb_page_addr_t addr) tb_page_addr_t start, end; PageDesc *p; =20 - assert_memory_lock(); - p =3D page_find(addr >> TARGET_PAGE_BITS); if (p =3D=3D NULL) { return; @@ -599,8 +732,6 @@ void tb_invalidate_phys_range(tb_page_addr_t start, tb_= page_addr_t end) struct page_collection *pages; tb_page_addr_t next; =20 - assert_memory_lock(); - pages =3D page_collection_lock(start, end); for (next =3D (start & TARGET_PAGE_MASK) + TARGET_PAGE_SIZE; start < end; @@ -611,12 +742,12 @@ void tb_invalidate_phys_range(tb_page_addr_t start, t= b_page_addr_t end) if (pd =3D=3D NULL) { continue; } + assert_page_locked(pd); tb_invalidate_phys_page_range__locked(pages, pd, start, bound, 0); } page_collection_unlock(pages); } =20 -#ifdef CONFIG_SOFTMMU /* * len must be <=3D 8 and start must be a multiple of len. * Called via softmmu_template.h when code areas are written to with @@ -630,8 +761,6 @@ void tb_invalidate_phys_page_fast(struct page_collectio= n *pages, { PageDesc *p; =20 - assert_memory_lock(); - p =3D page_find(start >> TARGET_PAGE_BITS); if (!p) { return; @@ -641,64 +770,4 @@ void tb_invalidate_phys_page_fast(struct page_collecti= on *pages, tb_invalidate_phys_page_range__locked(pages, p, start, start + len, retaddr); } -#else -/* - * Called with mmap_lock held. If pc is not 0 then it indicates the - * host PC of the faulting store instruction that caused this invalidate. - * Returns true if the caller needs to abort execution of the current - * TB (because it was modified by this store and the guest CPU has - * precise-SMC semantics). 
- */ -bool tb_invalidate_phys_page_unwind(tb_page_addr_t addr, uintptr_t pc) -{ - TranslationBlock *tb; - PageDesc *p; - int n; -#ifdef TARGET_HAS_PRECISE_SMC - TranslationBlock *current_tb =3D NULL; - CPUState *cpu =3D current_cpu; - bool current_tb_modified =3D false; -#endif - - assert_memory_lock(); - - addr &=3D TARGET_PAGE_MASK; - p =3D page_find(addr >> TARGET_PAGE_BITS); - if (!p) { - return false; - } - -#ifdef TARGET_HAS_PRECISE_SMC - if (p->first_tb && pc !=3D 0) { - current_tb =3D tcg_tb_lookup(pc); - } -#endif - assert_page_locked(p); - PAGE_FOR_EACH_TB(p, tb, n) { -#ifdef TARGET_HAS_PRECISE_SMC - if (current_tb =3D=3D tb && - (tb_cflags(current_tb) & CF_COUNT_MASK) !=3D 1) { - /* - * If we are modifying the current TB, we must stop its execut= ion. - * We could be more precise by checking that the modification = is - * after the current PC, but it would require a specialized - * function to partially restore the CPU state. - */ - current_tb_modified =3D true; - cpu_restore_state_from_tb(cpu, current_tb, pc); - } -#endif /* TARGET_HAS_PRECISE_SMC */ - tb_phys_invalidate(tb, addr); - } - p->first_tb =3D (uintptr_t)NULL; -#ifdef TARGET_HAS_PRECISE_SMC - if (current_tb_modified) { - /* Force execution of one insn next time. 
         */
-        cpu->cflags_next_tb = 1 | CF_NOIRQ | curr_cflags(cpu);
-        return true;
-    }
-#endif
-
-    return false;
-}
-#endif
+#endif /* CONFIG_USER_ONLY */
diff --git a/accel/tcg/translate-all.c b/accel/tcg/translate-all.c
index ac3ee3740c..b964ea44d7 100644
--- a/accel/tcg/translate-all.c
+++ b/accel/tcg/translate-all.c
@@ -709,7 +709,7 @@ page_collection_lock(tb_page_addr_t start, tb_page_addr_t end)

     for (index = start; index <= end; index++) {
         TranslationBlock *tb;
-        int n;
+        PageForEachNext n;

         pd = page_find(index);
         if (pd == NULL) {
@@ -720,7 +720,7 @@ page_collection_lock(tb_page_addr_t start, tb_page_addr_t end)
             goto retry;
         }
         assert_page_locked(pd);
-        PAGE_FOR_EACH_TB(pd, tb, n) {
+        PAGE_FOR_EACH_TB(unused, unused, pd, tb, n) {
             if (page_trylock_add(set, tb_page_addr0(tb)) ||
                 (tb_page_addr1(tb) != -1 &&
                  page_trylock_add(set, tb_page_addr1(tb)))) {
-- 
2.34.1

From nobody Sun May 19 12:45:48 2024
From: Richard Henderson <richard.henderson@linaro.org>
To: qemu-devel@nongnu.org
Cc: peter.maydell@linaro.org,
    Alex Bennée
Subject: [PULL 04/13] accel/tcg: Use interval tree for TARGET_PAGE_DATA_SIZE
Date: Fri, 16 Dec 2022 10:52:56 -0800
Message-Id: <20221216185305.3429913-5-richard.henderson@linaro.org>
In-Reply-To: <20221216185305.3429913-1-richard.henderson@linaro.org>
References: <20221216185305.3429913-1-richard.henderson@linaro.org>

Continue weaning user-only away from PageDesc.

Use an interval tree to record target data.
Chunk the data, to minimize allocation overhead.
Reviewed-by: Alex Bennée
Signed-off-by: Richard Henderson
---
 accel/tcg/internal.h  |  1 -
 accel/tcg/user-exec.c | 99 ++++++++++++++++++++++++++++++++-----------
 2 files changed, 74 insertions(+), 26 deletions(-)

diff --git a/accel/tcg/internal.h b/accel/tcg/internal.h
index bf1bf62e2a..0f91ee939c 100644
--- a/accel/tcg/internal.h
+++ b/accel/tcg/internal.h
@@ -26,7 +26,6 @@
 typedef struct PageDesc {
 #ifdef CONFIG_USER_ONLY
     unsigned long flags;
-    void *target_data;
 #else
     QemuSpin lock;
     /* list of TBs intersecting this ram page */
diff --git a/accel/tcg/user-exec.c b/accel/tcg/user-exec.c
index fb7d6ee9e9..42a04bdb21 100644
--- a/accel/tcg/user-exec.c
+++ b/accel/tcg/user-exec.c
@@ -210,47 +210,96 @@ tb_page_addr_t get_page_addr_code_hostp(CPUArchState *env, target_ulong addr,
     return addr;
 }

+#ifdef TARGET_PAGE_DATA_SIZE
+/*
+ * Allocate chunks of target data together. For the only current user,
+ * if we allocate one hunk per page, we have overhead of 40/128 or 40%.
+ * Therefore, allocate memory for 64 pages at a time for overhead < 1%.
+ */
+#define TPD_PAGES  64
+#define TBD_MASK   (TARGET_PAGE_MASK * TPD_PAGES)
+
+typedef struct TargetPageDataNode {
+    IntervalTreeNode itree;
+    char data[TPD_PAGES][TARGET_PAGE_DATA_SIZE] __attribute__((aligned));
+} TargetPageDataNode;
+
+static IntervalTreeRoot targetdata_root;
+
 void page_reset_target_data(target_ulong start, target_ulong end)
 {
-#ifdef TARGET_PAGE_DATA_SIZE
-    target_ulong addr, len;
+    IntervalTreeNode *n, *next;
+    target_ulong last;

-    /*
-     * This function should never be called with addresses outside the
-     * guest address space. If this assert fires, it probably indicates
-     * a missing call to h2g_valid.
-     */
-    assert(end - 1 <= GUEST_ADDR_MAX);
-    assert(start < end);
     assert_memory_lock();

     start = start & TARGET_PAGE_MASK;
-    end = TARGET_PAGE_ALIGN(end);
+    last = TARGET_PAGE_ALIGN(end) - 1;

-    for (addr = start, len = end - start;
-         len != 0;
-         len -= TARGET_PAGE_SIZE, addr += TARGET_PAGE_SIZE) {
-        PageDesc *p = page_find_alloc(addr >> TARGET_PAGE_BITS, 1);
+    for (n = interval_tree_iter_first(&targetdata_root, start, last),
+         next = n ? interval_tree_iter_next(n, start, last) : NULL;
+         n != NULL;
+         n = next,
+         next = next ? interval_tree_iter_next(n, start, last) : NULL) {
+        target_ulong n_start, n_last, p_ofs, p_len;
+        TargetPageDataNode *t;

-        g_free(p->target_data);
-        p->target_data = NULL;
+        if (n->start >= start && n->last <= last) {
+            interval_tree_remove(n, &targetdata_root);
+            g_free(n);
+            continue;
+        }
+
+        if (n->start < start) {
+            n_start = start;
+            p_ofs = (start - n->start) >> TARGET_PAGE_BITS;
+        } else {
+            n_start = n->start;
+            p_ofs = 0;
+        }
+        n_last = MIN(last, n->last);
+        p_len = (n_last + 1 - n_start) >> TARGET_PAGE_BITS;
+
+        t = container_of(n, TargetPageDataNode, itree);
+        memset(t->data[p_ofs], 0, p_len * TARGET_PAGE_DATA_SIZE);
     }
-#endif
 }

-#ifdef TARGET_PAGE_DATA_SIZE
 void *page_get_target_data(target_ulong address)
 {
-    PageDesc *p = page_find(address >> TARGET_PAGE_BITS);
-    void *ret = p->target_data;
+    IntervalTreeNode *n;
+    TargetPageDataNode *t;
+    target_ulong page, region;

-    if (!ret) {
-        ret = g_malloc0(TARGET_PAGE_DATA_SIZE);
-        p->target_data = ret;
+    page = address & TARGET_PAGE_MASK;
+    region = address & TBD_MASK;
+
+    n = interval_tree_iter_first(&targetdata_root, page, page);
+    if (!n) {
+        /*
+         * See util/interval-tree.c re lockless lookups: no false positives
+         * but there are false negatives. If we find nothing, retry with
+         * the mmap lock acquired. We also need the lock for the
+         * allocation + insert.
+         */
+        mmap_lock();
+        n = interval_tree_iter_first(&targetdata_root, page, page);
+        if (!n) {
+            t = g_new0(TargetPageDataNode, 1);
+            n = &t->itree;
+            n->start = region;
+            n->last = region | ~TBD_MASK;
+            interval_tree_insert(n, &targetdata_root);
+        }
+        mmap_unlock();
     }
-    return ret;
+
+    t = container_of(n, TargetPageDataNode, itree);
+    return t->data[(page - region) >> TARGET_PAGE_BITS];
 }
-#endif
+#else
+void page_reset_target_data(target_ulong start, target_ulong end) { }
+#endif /* TARGET_PAGE_DATA_SIZE */

 /* The softmmu versions of these helpers are in cputlb.c.  */

-- 
2.34.1

From nobody Sun May 19 12:45:48 2024
From: Richard Henderson <richard.henderson@linaro.org>
To: qemu-devel@nongnu.org
Cc: peter.maydell@linaro.org,
    Philippe Mathieu-Daudé,
    Alex Bennée
Subject: [PULL 05/13] accel/tcg: Move page_{get,set}_flags to user-exec.c
Date: Fri, 16 Dec 2022 10:52:57 -0800
Message-Id: <20221216185305.3429913-6-richard.henderson@linaro.org>
In-Reply-To: <20221216185305.3429913-1-richard.henderson@linaro.org>
References: <20221216185305.3429913-1-richard.henderson@linaro.org>
This page tracking implementation is specific to user-only, since the
system softmmu version is in cputlb.c. Move it out of translate-all.c
to user-exec.c.

Reviewed-by: Philippe Mathieu-Daudé
Reviewed-by: Alex Bennée
Signed-off-by: Richard Henderson
---
 accel/tcg/internal.h      |  17 ++
 accel/tcg/translate-all.c | 350 --------------------------------------
 accel/tcg/user-exec.c     | 346 +++++++++++++++++++++++++++++++++++++
 3 files changed, 363 insertions(+), 350 deletions(-)

diff --git a/accel/tcg/internal.h b/accel/tcg/internal.h
index 0f91ee939c..ddd1fa6bdc 100644
--- a/accel/tcg/internal.h
+++ b/accel/tcg/internal.h
@@ -33,6 +33,23 @@ typedef struct PageDesc {
 #endif
 } PageDesc;

+/*
+ * In system mode we want L1_MAP to be based on ram offsets,
+ * while in user mode we want it to be based on virtual addresses.
+ *
+ * TODO: For user mode, see the caveat re host vs guest virtual
+ * address spaces near GUEST_ADDR_MAX.
+ */
+#if !defined(CONFIG_USER_ONLY)
+#if HOST_LONG_BITS < TARGET_PHYS_ADDR_SPACE_BITS
+# define L1_MAP_ADDR_SPACE_BITS HOST_LONG_BITS
+#else
+# define L1_MAP_ADDR_SPACE_BITS TARGET_PHYS_ADDR_SPACE_BITS
+#endif
+#else
+# define L1_MAP_ADDR_SPACE_BITS MIN(HOST_LONG_BITS, TARGET_ABI_BITS)
+#endif
+
 /* Size of the L2 (and L3, etc) page tables.
  */
 #define V_L2_BITS 10
 #define V_L2_SIZE (1 << V_L2_BITS)
diff --git a/accel/tcg/translate-all.c b/accel/tcg/translate-all.c
index b964ea44d7..0d7596fcb8 100644
--- a/accel/tcg/translate-all.c
+++ b/accel/tcg/translate-all.c
@@ -109,23 +109,6 @@ struct page_collection {
     struct page_entry *max;
 };

-/*
- * In system mode we want L1_MAP to be based on ram offsets,
- * while in user mode we want it to be based on virtual addresses.
- *
- * TODO: For user mode, see the caveat re host vs guest virtual
- * address spaces near GUEST_ADDR_MAX.
- */
-#if !defined(CONFIG_USER_ONLY)
-#if HOST_LONG_BITS < TARGET_PHYS_ADDR_SPACE_BITS
-# define L1_MAP_ADDR_SPACE_BITS HOST_LONG_BITS
-#else
-# define L1_MAP_ADDR_SPACE_BITS TARGET_PHYS_ADDR_SPACE_BITS
-#endif
-#else
-# define L1_MAP_ADDR_SPACE_BITS MIN(HOST_LONG_BITS, TARGET_ABI_BITS)
-#endif
-
 /* Make sure all possible CPU event bits fit in tb->trace_vcpu_dstate */
 QEMU_BUILD_BUG_ON(CPU_TRACE_DSTATE_MAX_EVENTS >
                   sizeof_field(TranslationBlock, trace_vcpu_dstate)
@@ -1235,339 +1218,6 @@ void cpu_interrupt(CPUState *cpu, int mask)
     qatomic_set(&cpu_neg(cpu)->icount_decr.u16.high, -1);
 }

-/*
- * Walks guest process memory "regions" one by one
- * and calls callback function 'fn' for each region.
- */
-struct walk_memory_regions_data {
-    walk_memory_regions_fn fn;
-    void *priv;
-    target_ulong start;
-    int prot;
-};
-
-static int walk_memory_regions_end(struct walk_memory_regions_data *data,
-                                   target_ulong end, int new_prot)
-{
-    if (data->start != -1u) {
-        int rc = data->fn(data->priv, data->start, end, data->prot);
-        if (rc != 0) {
-            return rc;
-        }
-    }
-
-    data->start = (new_prot ? end : -1u);
-    data->prot = new_prot;
-
-    return 0;
-}
-
-static int walk_memory_regions_1(struct walk_memory_regions_data *data,
-                                 target_ulong base, int level, void **lp)
-{
-    target_ulong pa;
-    int i, rc;
-
-    if (*lp == NULL) {
-        return walk_memory_regions_end(data, base, 0);
-    }
-
-    if (level == 0) {
-        PageDesc *pd = *lp;
-
-        for (i = 0; i < V_L2_SIZE; ++i) {
-            int prot = pd[i].flags;
-
-            pa = base | (i << TARGET_PAGE_BITS);
-            if (prot != data->prot) {
-                rc = walk_memory_regions_end(data, pa, prot);
-                if (rc != 0) {
-                    return rc;
-                }
-            }
-        }
-    } else {
-        void **pp = *lp;
-
-        for (i = 0; i < V_L2_SIZE; ++i) {
-            pa = base | ((target_ulong)i <<
-                         (TARGET_PAGE_BITS + V_L2_BITS * level));
-            rc = walk_memory_regions_1(data, pa, level - 1, pp + i);
-            if (rc != 0) {
-                return rc;
-            }
-        }
-    }
-
-    return 0;
-}
-
-int walk_memory_regions(void *priv, walk_memory_regions_fn fn)
-{
-    struct walk_memory_regions_data data;
-    uintptr_t i, l1_sz = v_l1_size;
-
-    data.fn = fn;
-    data.priv = priv;
-    data.start = -1u;
-    data.prot = 0;
-
-    for (i = 0; i < l1_sz; i++) {
-        target_ulong base = i << (v_l1_shift + TARGET_PAGE_BITS);
-        int rc = walk_memory_regions_1(&data, base, v_l2_levels, l1_map + i);
-        if (rc != 0) {
-            return rc;
-        }
-    }
-
-    return walk_memory_regions_end(&data, 0, 0);
-}
-
-static int dump_region(void *priv, target_ulong start,
-                       target_ulong end, unsigned long prot)
-{
-    FILE *f = (FILE *)priv;
-
-    (void) fprintf(f, TARGET_FMT_lx"-"TARGET_FMT_lx
-                   " "TARGET_FMT_lx" %c%c%c\n",
-                   start, end, end - start,
-                   ((prot & PAGE_READ) ? 'r' : '-'),
-                   ((prot & PAGE_WRITE) ? 'w' : '-'),
-                   ((prot & PAGE_EXEC) ? 'x' : '-'));
-
-    return 0;
-}
-
-/* dump memory mappings */
-void page_dump(FILE *f)
-{
-    const int length = sizeof(target_ulong) * 2;
-    (void) fprintf(f, "%-*s %-*s %-*s %s\n",
-                   length, "start", length, "end", length, "size", "prot");
-    walk_memory_regions(f, dump_region);
-}
-
-int page_get_flags(target_ulong address)
-{
-    PageDesc *p;
-
-    p = page_find(address >> TARGET_PAGE_BITS);
-    if (!p) {
-        return 0;
-    }
-    return p->flags;
-}
-
-/*
- * Allow the target to decide if PAGE_TARGET_[12] may be reset.
- * By default, they are not kept.
- */
-#ifndef PAGE_TARGET_STICKY
-#define PAGE_TARGET_STICKY  0
-#endif
-#define PAGE_STICKY  (PAGE_ANON | PAGE_PASSTHROUGH | PAGE_TARGET_STICKY)
-
-/* Modify the flags of a page and invalidate the code if necessary.
-   The flag PAGE_WRITE_ORG is positioned automatically depending
-   on PAGE_WRITE.  The mmap_lock should already be held. */
-void page_set_flags(target_ulong start, target_ulong end, int flags)
-{
-    target_ulong addr, len;
-    bool reset, inval_tb = false;
-
-    /* This function should never be called with addresses outside the
-       guest address space.  If this assert fires, it probably indicates
-       a missing call to h2g_valid. */
-    assert(end - 1 <= GUEST_ADDR_MAX);
-    assert(start < end);
-    /* Only set PAGE_ANON with new mappings. */
-    assert(!(flags & PAGE_ANON) || (flags & PAGE_RESET));
-    assert_memory_lock();
-
-    start = start & TARGET_PAGE_MASK;
-    end = TARGET_PAGE_ALIGN(end);
-
-    if (flags & PAGE_WRITE) {
-        flags |= PAGE_WRITE_ORG;
-    }
-    reset = !(flags & PAGE_VALID) || (flags & PAGE_RESET);
-    if (reset) {
-        page_reset_target_data(start, end);
-    }
-    flags &= ~PAGE_RESET;
-
-    for (addr = start, len = end - start;
-         len != 0;
-         len -= TARGET_PAGE_SIZE, addr += TARGET_PAGE_SIZE) {
-        PageDesc *p = page_find_alloc(addr >> TARGET_PAGE_BITS, true);
-
-        /*
-         * If the page was executable, but is reset, or is no longer
-         * executable, or has become writable, then invalidate any code.
-         */
-        if ((p->flags & PAGE_EXEC)
-            && (reset ||
-                !(flags & PAGE_EXEC) ||
-                (flags & ~p->flags & PAGE_WRITE))) {
-            inval_tb = true;
-        }
-        /* Using mprotect on a page does not change sticky bits. */
-        p->flags = (reset ? 0 : p->flags & PAGE_STICKY) | flags;
-    }
-
-    if (inval_tb) {
-        tb_invalidate_phys_range(start, end);
-    }
-}
-
-int page_check_range(target_ulong start, target_ulong len, int flags)
-{
-    PageDesc *p;
-    target_ulong end;
-    target_ulong addr;
-
-    /* This function should never be called with addresses outside the
-       guest address space.  If this assert fires, it probably indicates
-       a missing call to h2g_valid. */
-    if (TARGET_ABI_BITS > L1_MAP_ADDR_SPACE_BITS) {
-        assert(start < ((target_ulong)1 << L1_MAP_ADDR_SPACE_BITS));
-    }
-
-    if (len == 0) {
-        return 0;
-    }
-    if (start + len - 1 < start) {
-        /* We've wrapped around. */
-        return -1;
-    }
-
-    /* must do before we loose bits in the next step */
-    end = TARGET_PAGE_ALIGN(start + len);
-    start = start & TARGET_PAGE_MASK;
-
-    for (addr = start, len = end - start;
-         len != 0;
-         len -= TARGET_PAGE_SIZE, addr += TARGET_PAGE_SIZE) {
-        p = page_find(addr >> TARGET_PAGE_BITS);
-        if (!p) {
-            return -1;
-        }
-        if (!(p->flags & PAGE_VALID)) {
-            return -1;
-        }
-
-        if ((flags & PAGE_READ) && !(p->flags & PAGE_READ)) {
-            return -1;
-        }
-        if (flags & PAGE_WRITE) {
-            if (!(p->flags & PAGE_WRITE_ORG)) {
-                return -1;
-            }
-            /* unprotect the page if it was put read-only because it
-               contains translated code */
-            if (!(p->flags & PAGE_WRITE)) {
-                if (!page_unprotect(addr, 0)) {
-                    return -1;
-                }
-            }
-        }
-    }
-    return 0;
-}
-
-void page_protect(tb_page_addr_t page_addr)
-{
-    target_ulong addr;
-    PageDesc *p;
-    int prot;
-
-    p = page_find(page_addr >> TARGET_PAGE_BITS);
-    if (p && (p->flags & PAGE_WRITE)) {
-        /*
-         * Force the host page as non writable (writes will have a page fault +
-         * mprotect overhead).
-         */
-        page_addr &= qemu_host_page_mask;
-        prot = 0;
-        for (addr = page_addr; addr < page_addr + qemu_host_page_size;
-             addr += TARGET_PAGE_SIZE) {
-
-            p = page_find(addr >> TARGET_PAGE_BITS);
-            if (!p) {
-                continue;
-            }
-            prot |= p->flags;
-            p->flags &= ~PAGE_WRITE;
-        }
-        mprotect(g2h_untagged(page_addr), qemu_host_page_size,
-                 (prot & PAGE_BITS) & ~PAGE_WRITE);
-    }
-}
-
-/* called from signal handler: invalidate the code and unprotect the
- * page. Return 0 if the fault was not handled, 1 if it was handled,
- * and 2 if it was handled but the caller must cause the TB to be
- * immediately exited. (We can only return 2 if the 'pc' argument is
- * non-zero.)
- */
-int page_unprotect(target_ulong address, uintptr_t pc)
-{
-    unsigned int prot;
-    bool current_tb_invalidated;
-    PageDesc *p;
-    target_ulong host_start, host_end, addr;
-
-    /* Technically this isn't safe inside a signal handler.  However we
-       know this only ever happens in a synchronous SEGV handler, so in
-       practice it seems to be ok.  */
-    mmap_lock();
-
-    p = page_find(address >> TARGET_PAGE_BITS);
-    if (!p) {
-        mmap_unlock();
-        return 0;
-    }
-
-    /* if the page was really writable, then we change its
-       protection back to writable */
-    if (p->flags & PAGE_WRITE_ORG) {
-        current_tb_invalidated = false;
-        if (p->flags & PAGE_WRITE) {
-            /* If the page is actually marked WRITE then assume this is because
-             * this thread raced with another one which got here first and
-             * set the page to PAGE_WRITE and did the TB invalidate for us.
-             */
-#ifdef TARGET_HAS_PRECISE_SMC
-            TranslationBlock *current_tb = tcg_tb_lookup(pc);
-            if (current_tb) {
-                current_tb_invalidated = tb_cflags(current_tb) & CF_INVALID;
-            }
-#endif
-        } else {
-            host_start = address & qemu_host_page_mask;
-            host_end = host_start + qemu_host_page_size;
-
-            prot = 0;
-            for (addr = host_start; addr < host_end; addr += TARGET_PAGE_SIZE) {
-                p = page_find(addr >> TARGET_PAGE_BITS);
-                p->flags |= PAGE_WRITE;
-                prot |= p->flags;
-
-                /* and since the content will be modified, we must invalidate
-                   the corresponding translated code. */
-                current_tb_invalidated |=
-                    tb_invalidate_phys_page_unwind(addr, pc);
-            }
-            mprotect((void *)g2h_untagged(host_start), qemu_host_page_size,
-                     prot & PAGE_BITS);
-        }
-        mmap_unlock();
-        /* If current TB was invalidated return to main loop */
-        return current_tb_invalidated ? 2 : 1;
-    }
-    mmap_unlock();
-    return 0;
-}
 #endif /* CONFIG_USER_ONLY */

 /*
diff --git a/accel/tcg/user-exec.c b/accel/tcg/user-exec.c
index 42a04bdb21..22ef780900 100644
--- a/accel/tcg/user-exec.c
+++ b/accel/tcg/user-exec.c
@@ -135,6 +135,352 @@ bool handle_sigsegv_accerr_write(CPUState *cpu, sigset_t *old_set,
     }
 }

+/*
+ * Walks guest process memory "regions" one by one
+ * and calls callback function 'fn' for each region.
+ */
+struct walk_memory_regions_data {
+    walk_memory_regions_fn fn;
+    void *priv;
+    target_ulong start;
+    int prot;
+};
+
+static int walk_memory_regions_end(struct walk_memory_regions_data *data,
+                                   target_ulong end, int new_prot)
+{
+    if (data->start != -1u) {
+        int rc = data->fn(data->priv, data->start, end, data->prot);
+        if (rc != 0) {
+            return rc;
+        }
+    }
+
+    data->start = (new_prot ? end : -1u);
+    data->prot = new_prot;
+
+    return 0;
+}
+
+static int walk_memory_regions_1(struct walk_memory_regions_data *data,
+                                 target_ulong base, int level, void **lp)
+{
+    target_ulong pa;
+    int i, rc;
+
+    if (*lp == NULL) {
+        return walk_memory_regions_end(data, base, 0);
+    }
+
+    if (level == 0) {
+        PageDesc *pd = *lp;
+
+        for (i = 0; i < V_L2_SIZE; ++i) {
+            int prot = pd[i].flags;
+
+            pa = base | (i << TARGET_PAGE_BITS);
+            if (prot != data->prot) {
+                rc = walk_memory_regions_end(data, pa, prot);
+                if (rc != 0) {
+                    return rc;
+                }
+            }
+        }
+    } else {
+        void **pp = *lp;
+
+        for (i = 0; i < V_L2_SIZE; ++i) {
+            pa = base | ((target_ulong)i <<
+                         (TARGET_PAGE_BITS + V_L2_BITS * level));
+            rc = walk_memory_regions_1(data, pa, level - 1, pp + i);
+            if (rc != 0) {
+                return rc;
+            }
+        }
+    }
+
+    return 0;
+}
+
+int walk_memory_regions(void *priv, walk_memory_regions_fn fn)
+{
+    struct walk_memory_regions_data data;
+    uintptr_t i, l1_sz = v_l1_size;
+
+    data.fn = fn;
+    data.priv = priv;
+    data.start = -1u;
+    data.prot = 0;
+
+    for (i = 0; i < l1_sz; i++) {
+        target_ulong base = i << (v_l1_shift + TARGET_PAGE_BITS);
+        int rc = walk_memory_regions_1(&data, base, v_l2_levels, l1_map + i);
+        if (rc != 0) {
+            return rc;
+        }
+    }
+
+    return walk_memory_regions_end(&data, 0, 0);
+}
+
+static int dump_region(void *priv, target_ulong start,
+                       target_ulong end, unsigned long prot)
+{
+    FILE *f = (FILE *)priv;
+
+    (void) fprintf(f, TARGET_FMT_lx"-"TARGET_FMT_lx
+                   " "TARGET_FMT_lx" %c%c%c\n",
+                   start, end, end - start,
+                   ((prot & PAGE_READ) ? 'r' : '-'),
+                   ((prot & PAGE_WRITE) ? 'w' : '-'),
+                   ((prot & PAGE_EXEC) ? 'x' : '-'));
+
+    return 0;
+}
+
+/* dump memory mappings */
+void page_dump(FILE *f)
+{
+    const int length = sizeof(target_ulong) * 2;
+    (void) fprintf(f, "%-*s %-*s %-*s %s\n",
+                   length, "start", length, "end", length, "size", "prot");
+    walk_memory_regions(f, dump_region);
+}
+
+int page_get_flags(target_ulong address)
+{
+    PageDesc *p;
+
+    p = page_find(address >> TARGET_PAGE_BITS);
+    if (!p) {
+        return 0;
+    }
+    return p->flags;
+}
+
+/*
+ * Allow the target to decide if PAGE_TARGET_[12] may be reset.
+ * By default, they are not kept.
+ */
+#ifndef PAGE_TARGET_STICKY
+#define PAGE_TARGET_STICKY  0
+#endif
+#define PAGE_STICKY  (PAGE_ANON | PAGE_PASSTHROUGH | PAGE_TARGET_STICKY)
+
+/*
+ * Modify the flags of a page and invalidate the code if necessary.
+ * The flag PAGE_WRITE_ORG is positioned automatically depending
+ * on PAGE_WRITE.  The mmap_lock should already be held.
+ */
+void page_set_flags(target_ulong start, target_ulong end, int flags)
+{
+    target_ulong addr, len;
+    bool reset, inval_tb = false;
+
+    /* This function should never be called with addresses outside the
+       guest address space.  If this assert fires, it probably indicates
+       a missing call to h2g_valid. */
+    assert(end - 1 <= GUEST_ADDR_MAX);
+    assert(start < end);
+    /* Only set PAGE_ANON with new mappings. */
+    assert(!(flags & PAGE_ANON) || (flags & PAGE_RESET));
+    assert_memory_lock();
+
+    start = start & TARGET_PAGE_MASK;
+    end = TARGET_PAGE_ALIGN(end);
+
+    if (flags & PAGE_WRITE) {
+        flags |= PAGE_WRITE_ORG;
+    }
+    reset = !(flags & PAGE_VALID) || (flags & PAGE_RESET);
+    if (reset) {
+        page_reset_target_data(start, end);
+    }
+    flags &= ~PAGE_RESET;
+
+    for (addr = start, len = end - start;
+         len != 0;
+         len -= TARGET_PAGE_SIZE, addr += TARGET_PAGE_SIZE) {
+        PageDesc *p = page_find_alloc(addr >> TARGET_PAGE_BITS, true);
+
+        /*
+         * If the page was executable, but is reset, or is no longer
+         * executable, or has become writable, then invalidate any code.
+ */ + if ((p->flags & PAGE_EXEC) + && (reset || + !(flags & PAGE_EXEC) || + (flags & ~p->flags & PAGE_WRITE))) { + inval_tb =3D true; + } + /* Using mprotect on a page does not change sticky bits. */ + p->flags =3D (reset ? 0 : p->flags & PAGE_STICKY) | flags; + } + + if (inval_tb) { + tb_invalidate_phys_range(start, end); + } +} + +int page_check_range(target_ulong start, target_ulong len, int flags) +{ + PageDesc *p; + target_ulong end; + target_ulong addr; + + /* + * This function should never be called with addresses outside the + * guest address space. If this assert fires, it probably indicates + * a missing call to h2g_valid. + */ + if (TARGET_ABI_BITS > L1_MAP_ADDR_SPACE_BITS) { + assert(start < ((target_ulong)1 << L1_MAP_ADDR_SPACE_BITS)); + } + + if (len =3D=3D 0) { + return 0; + } + if (start + len - 1 < start) { + /* We've wrapped around. */ + return -1; + } + + /* must do before we loose bits in the next step */ + end =3D TARGET_PAGE_ALIGN(start + len); + start =3D start & TARGET_PAGE_MASK; + + for (addr =3D start, len =3D end - start; + len !=3D 0; + len -=3D TARGET_PAGE_SIZE, addr +=3D TARGET_PAGE_SIZE) { + p =3D page_find(addr >> TARGET_PAGE_BITS); + if (!p) { + return -1; + } + if (!(p->flags & PAGE_VALID)) { + return -1; + } + + if ((flags & PAGE_READ) && !(p->flags & PAGE_READ)) { + return -1; + } + if (flags & PAGE_WRITE) { + if (!(p->flags & PAGE_WRITE_ORG)) { + return -1; + } + /* unprotect the page if it was put read-only because it + contains translated code */ + if (!(p->flags & PAGE_WRITE)) { + if (!page_unprotect(addr, 0)) { + return -1; + } + } + } + } + return 0; +} + +void page_protect(tb_page_addr_t page_addr) +{ + target_ulong addr; + PageDesc *p; + int prot; + + p =3D page_find(page_addr >> TARGET_PAGE_BITS); + if (p && (p->flags & PAGE_WRITE)) { + /* + * Force the host page as non writable (writes will have a page fa= ult + + * mprotect overhead). 
+ */ + page_addr &=3D qemu_host_page_mask; + prot =3D 0; + for (addr =3D page_addr; addr < page_addr + qemu_host_page_size; + addr +=3D TARGET_PAGE_SIZE) { + + p =3D page_find(addr >> TARGET_PAGE_BITS); + if (!p) { + continue; + } + prot |=3D p->flags; + p->flags &=3D ~PAGE_WRITE; + } + mprotect(g2h_untagged(page_addr), qemu_host_page_size, + (prot & PAGE_BITS) & ~PAGE_WRITE); + } +} + +/* + * Called from signal handler: invalidate the code and unprotect the + * page. Return 0 if the fault was not handled, 1 if it was handled, + * and 2 if it was handled but the caller must cause the TB to be + * immediately exited. (We can only return 2 if the 'pc' argument is + * non-zero.) + */ +int page_unprotect(target_ulong address, uintptr_t pc) +{ + unsigned int prot; + bool current_tb_invalidated; + PageDesc *p; + target_ulong host_start, host_end, addr; + + /* + * Technically this isn't safe inside a signal handler. However we + * know this only ever happens in a synchronous SEGV handler, so in + * practice it seems to be ok. + */ + mmap_lock(); + + p =3D page_find(address >> TARGET_PAGE_BITS); + if (!p) { + mmap_unlock(); + return 0; + } + + /* + * If the page was really writable, then we change its + * protection back to writable. + */ + if (p->flags & PAGE_WRITE_ORG) { + current_tb_invalidated =3D false; + if (p->flags & PAGE_WRITE) { + /* + * If the page is actually marked WRITE then assume this is be= cause + * this thread raced with another one which got here first and + * set the page to PAGE_WRITE and did the TB invalidate for us. 
+ */ +#ifdef TARGET_HAS_PRECISE_SMC + TranslationBlock *current_tb =3D tcg_tb_lookup(pc); + if (current_tb) { + current_tb_invalidated =3D tb_cflags(current_tb) & CF_INVA= LID; + } +#endif + } else { + host_start =3D address & qemu_host_page_mask; + host_end =3D host_start + qemu_host_page_size; + + prot =3D 0; + for (addr =3D host_start; addr < host_end; addr +=3D TARGET_PA= GE_SIZE) { + p =3D page_find(addr >> TARGET_PAGE_BITS); + p->flags |=3D PAGE_WRITE; + prot |=3D p->flags; + + /* + * Since the content will be modified, we must invalidate + * the corresponding translated code. + */ + current_tb_invalidated |=3D + tb_invalidate_phys_page_unwind(addr, pc); + } + mprotect((void *)g2h_untagged(host_start), qemu_host_page_size, + prot & PAGE_BITS); + } + mmap_unlock(); + /* If current TB was invalidated return to main loop */ + return current_tb_invalidated ? 2 : 1; + } + mmap_unlock(); + return 0; +} + static int probe_access_internal(CPUArchState *env, target_ulong addr, int fault_size, MMUAccessType access_type, bool nonfault, uintptr_t ra) --=20 2.34.1 From nobody Sun May 19 12:45:48 2024 Delivered-To: importer@patchew.org Authentication-Results: mx.zohomail.com; dkim=pass; spf=pass (zohomail.com: domain of gnu.org designates 209.51.188.17 as permitted sender) smtp.mailfrom=qemu-devel-bounces+importer=patchew.org@nongnu.org; dmarc=pass(p=none dis=none) header.from=linaro.org ARC-Seal: i=1; a=rsa-sha256; t=1671218163; cv=none; d=zohomail.com; s=zohoarc; b=SEKbFId0s0IAgjBS+R0JuCiJAYwwiA6QH4uJGTDCrA44v8QC8HehxX6QEuLqW3rZpwLMoYwit/VPrmsVCUBLTaIKsEOSHpHHW1fOyWPeFgg8aOTD6PvSXFe1wjo8y92BKBTNg+C6OGm2z0fk3oAAj0LaJCF8EI0lROHrr24LqPk= ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=zohomail.com; s=zohoarc; t=1671218163; h=Content-Type:Content-Transfer-Encoding:Cc:Date:From:In-Reply-To:List-Subscribe:List-Post:List-Id:List-Archive:List-Help:List-Unsubscribe:MIME-Version:Message-ID:References:Sender:Subject:To; 
From: Richard Henderson
To: qemu-devel@nongnu.org
Cc: peter.maydell@linaro.org, Alex Bennée
Subject: [PULL 06/13] accel/tcg: Use interval tree for user-only page tracking
Date: Fri, 16 Dec 2022 10:52:58 -0800
Message-Id: <20221216185305.3429913-7-richard.henderson@linaro.org>
In-Reply-To: <20221216185305.3429913-1-richard.henderson@linaro.org>
References: <20221216185305.3429913-1-richard.henderson@linaro.org>

Finish weaning user-only away from PageDesc.

Using an interval tree to track page permissions means that
we can represent very large regions efficiently.
Resolves: https://gitlab.com/qemu-project/qemu/-/issues/290 Resolves: https://gitlab.com/qemu-project/qemu/-/issues/967 Resolves: https://gitlab.com/qemu-project/qemu/-/issues/1214 Reviewed-by: Alex Benn=C3=A9e Signed-off-by: Richard Henderson --- accel/tcg/internal.h | 4 +- accel/tcg/tb-maint.c | 20 +- accel/tcg/user-exec.c | 615 ++++++++++++++++++++++----------- tests/tcg/multiarch/test-vma.c | 22 ++ 4 files changed, 451 insertions(+), 210 deletions(-) create mode 100644 tests/tcg/multiarch/test-vma.c diff --git a/accel/tcg/internal.h b/accel/tcg/internal.h index ddd1fa6bdc..be19bdf088 100644 --- a/accel/tcg/internal.h +++ b/accel/tcg/internal.h @@ -24,9 +24,7 @@ #endif =20 typedef struct PageDesc { -#ifdef CONFIG_USER_ONLY - unsigned long flags; -#else +#ifndef CONFIG_USER_ONLY QemuSpin lock; /* list of TBs intersecting this ram page */ uintptr_t first_tb; diff --git a/accel/tcg/tb-maint.c b/accel/tcg/tb-maint.c index 8da2c64d87..20e86c813d 100644 --- a/accel/tcg/tb-maint.c +++ b/accel/tcg/tb-maint.c @@ -68,15 +68,23 @@ static void tb_remove_all(void) /* Call with mmap_lock held. 
*/ static void tb_record(TranslationBlock *tb, PageDesc *p1, PageDesc *p2) { - /* translator_loop() must have made all TB pages non-writable */ - assert(!(p1->flags & PAGE_WRITE)); - if (p2) { - assert(!(p2->flags & PAGE_WRITE)); - } + target_ulong addr; + int flags; =20 assert_memory_lock(); - tb->itree.last =3D tb->itree.start + tb->size - 1; + + /* translator_loop() must have made all TB pages non-writable */ + addr =3D tb_page_addr0(tb); + flags =3D page_get_flags(addr); + assert(!(flags & PAGE_WRITE)); + + addr =3D tb_page_addr1(tb); + if (addr !=3D -1) { + flags =3D page_get_flags(addr); + assert(!(flags & PAGE_WRITE)); + } + interval_tree_insert(&tb->itree, &tb_root); } =20 diff --git a/accel/tcg/user-exec.c b/accel/tcg/user-exec.c index 22ef780900..a3cecda405 100644 --- a/accel/tcg/user-exec.c +++ b/accel/tcg/user-exec.c @@ -135,106 +135,61 @@ bool handle_sigsegv_accerr_write(CPUState *cpu, sigs= et_t *old_set, } } =20 -/* - * Walks guest process memory "regions" one by one - * and calls callback function 'fn' for each region. - */ -struct walk_memory_regions_data { - walk_memory_regions_fn fn; - void *priv; - target_ulong start; - int prot; -}; +typedef struct PageFlagsNode { + IntervalTreeNode itree; + int flags; +} PageFlagsNode; =20 -static int walk_memory_regions_end(struct walk_memory_regions_data *data, - target_ulong end, int new_prot) +static IntervalTreeRoot pageflags_root; + +static PageFlagsNode *pageflags_find(target_ulong start, target_long last) { - if (data->start !=3D -1u) { - int rc =3D data->fn(data->priv, data->start, end, data->prot); - if (rc !=3D 0) { - return rc; - } - } + IntervalTreeNode *n; =20 - data->start =3D (new_prot ? end : -1u); - data->prot =3D new_prot; - - return 0; + n =3D interval_tree_iter_first(&pageflags_root, start, last); + return n ? 
container_of(n, PageFlagsNode, itree) : NULL; } =20 -static int walk_memory_regions_1(struct walk_memory_regions_data *data, - target_ulong base, int level, void **lp) +static PageFlagsNode *pageflags_next(PageFlagsNode *p, target_ulong start, + target_long last) { - target_ulong pa; - int i, rc; + IntervalTreeNode *n; =20 - if (*lp =3D=3D NULL) { - return walk_memory_regions_end(data, base, 0); - } - - if (level =3D=3D 0) { - PageDesc *pd =3D *lp; - - for (i =3D 0; i < V_L2_SIZE; ++i) { - int prot =3D pd[i].flags; - - pa =3D base | (i << TARGET_PAGE_BITS); - if (prot !=3D data->prot) { - rc =3D walk_memory_regions_end(data, pa, prot); - if (rc !=3D 0) { - return rc; - } - } - } - } else { - void **pp =3D *lp; - - for (i =3D 0; i < V_L2_SIZE; ++i) { - pa =3D base | ((target_ulong)i << - (TARGET_PAGE_BITS + V_L2_BITS * level)); - rc =3D walk_memory_regions_1(data, pa, level - 1, pp + i); - if (rc !=3D 0) { - return rc; - } - } - } - - return 0; + n =3D interval_tree_iter_next(&p->itree, start, last); + return n ? 
container_of(n, PageFlagsNode, itree) : NULL; } =20 int walk_memory_regions(void *priv, walk_memory_regions_fn fn) { - struct walk_memory_regions_data data; - uintptr_t i, l1_sz =3D v_l1_size; + IntervalTreeNode *n; + int rc =3D 0; =20 - data.fn =3D fn; - data.priv =3D priv; - data.start =3D -1u; - data.prot =3D 0; + mmap_lock(); + for (n =3D interval_tree_iter_first(&pageflags_root, 0, -1); + n !=3D NULL; + n =3D interval_tree_iter_next(n, 0, -1)) { + PageFlagsNode *p =3D container_of(n, PageFlagsNode, itree); =20 - for (i =3D 0; i < l1_sz; i++) { - target_ulong base =3D i << (v_l1_shift + TARGET_PAGE_BITS); - int rc =3D walk_memory_regions_1(&data, base, v_l2_levels, l1_map = + i); + rc =3D fn(priv, n->start, n->last + 1, p->flags); if (rc !=3D 0) { - return rc; + break; } } + mmap_unlock(); =20 - return walk_memory_regions_end(&data, 0, 0); + return rc; } =20 static int dump_region(void *priv, target_ulong start, - target_ulong end, unsigned long prot) + target_ulong end, unsigned long prot) { FILE *f =3D (FILE *)priv; =20 - (void) fprintf(f, TARGET_FMT_lx"-"TARGET_FMT_lx - " "TARGET_FMT_lx" %c%c%c\n", - start, end, end - start, - ((prot & PAGE_READ) ? 'r' : '-'), - ((prot & PAGE_WRITE) ? 'w' : '-'), - ((prot & PAGE_EXEC) ? 'x' : '-')); - + fprintf(f, TARGET_FMT_lx"-"TARGET_FMT_lx" "TARGET_FMT_lx" %c%c%c\n", + start, end, end - start, + ((prot & PAGE_READ) ? 'r' : '-'), + ((prot & PAGE_WRITE) ? 'w' : '-'), + ((prot & PAGE_EXEC) ? 
'x' : '-')); return 0; } =20 @@ -242,20 +197,131 @@ static int dump_region(void *priv, target_ulong star= t, void page_dump(FILE *f) { const int length =3D sizeof(target_ulong) * 2; - (void) fprintf(f, "%-*s %-*s %-*s %s\n", + + fprintf(f, "%-*s %-*s %-*s %s\n", length, "start", length, "end", length, "size", "prot"); walk_memory_regions(f, dump_region); } =20 int page_get_flags(target_ulong address) { - PageDesc *p; + PageFlagsNode *p =3D pageflags_find(address, address); =20 - p =3D page_find(address >> TARGET_PAGE_BITS); - if (!p) { + /* + * See util/interval-tree.c re lockless lookups: no false positives but + * there are false negatives. If we find nothing, retry with the mmap + * lock acquired. + */ + if (p) { + return p->flags; + } + if (have_mmap_lock()) { return 0; } - return p->flags; + + mmap_lock(); + p =3D pageflags_find(address, address); + mmap_unlock(); + return p ? p->flags : 0; +} + +/* A subroutine of page_set_flags: insert a new node for [start,last]. */ +static void pageflags_create(target_ulong start, target_ulong last, int fl= ags) +{ + PageFlagsNode *p =3D g_new(PageFlagsNode, 1); + + p->itree.start =3D start; + p->itree.last =3D last; + p->flags =3D flags; + interval_tree_insert(&p->itree, &pageflags_root); +} + +/* A subroutine of page_set_flags: remove everything in [start,last]. */ +static bool pageflags_unset(target_ulong start, target_ulong last) +{ + bool inval_tb =3D false; + + while (true) { + PageFlagsNode *p =3D pageflags_find(start, last); + target_ulong p_last; + + if (!p) { + break; + } + + if (p->flags & PAGE_EXEC) { + inval_tb =3D true; + } + + interval_tree_remove(&p->itree, &pageflags_root); + p_last =3D p->itree.last; + + if (p->itree.start < start) { + /* Truncate the node from the end, or split out the middle. 
*/ + p->itree.last =3D start - 1; + interval_tree_insert(&p->itree, &pageflags_root); + if (last < p_last) { + pageflags_create(last + 1, p_last, p->flags); + break; + } + } else if (p_last <=3D last) { + /* Range completely covers node -- remove it. */ + g_free(p); + } else { + /* Truncate the node from the start. */ + p->itree.start =3D last + 1; + interval_tree_insert(&p->itree, &pageflags_root); + break; + } + } + + return inval_tb; +} + +/* + * A subroutine of page_set_flags: nothing overlaps [start,last], + * but check adjacent mappings and maybe merge into a single range. + */ +static void pageflags_create_merge(target_ulong start, target_ulong last, + int flags) +{ + PageFlagsNode *next =3D NULL, *prev =3D NULL; + + if (start > 0) { + prev =3D pageflags_find(start - 1, start - 1); + if (prev) { + if (prev->flags =3D=3D flags) { + interval_tree_remove(&prev->itree, &pageflags_root); + } else { + prev =3D NULL; + } + } + } + if (last + 1 !=3D 0) { + next =3D pageflags_find(last + 1, last + 1); + if (next) { + if (next->flags =3D=3D flags) { + interval_tree_remove(&next->itree, &pageflags_root); + } else { + next =3D NULL; + } + } + } + + if (prev) { + if (next) { + prev->itree.last =3D next->itree.last; + g_free(next); + } else { + prev->itree.last =3D last; + } + interval_tree_insert(&prev->itree, &pageflags_root); + } else if (next) { + next->itree.start =3D start; + interval_tree_insert(&next->itree, &pageflags_root); + } else { + pageflags_create(start, last, flags); + } } =20 /* @@ -267,6 +333,146 @@ int page_get_flags(target_ulong address) #endif #define PAGE_STICKY (PAGE_ANON | PAGE_PASSTHROUGH | PAGE_TARGET_STICKY) =20 +/* A subroutine of page_set_flags: add flags to [start,last]. 
*/ +static bool pageflags_set_clear(target_ulong start, target_ulong last, + int set_flags, int clear_flags) +{ + PageFlagsNode *p; + target_ulong p_start, p_last; + int p_flags, merge_flags; + bool inval_tb =3D false; + + restart: + p =3D pageflags_find(start, last); + if (!p) { + if (set_flags) { + pageflags_create_merge(start, last, set_flags); + } + goto done; + } + + p_start =3D p->itree.start; + p_last =3D p->itree.last; + p_flags =3D p->flags; + /* Using mprotect on a page does not change sticky bits. */ + merge_flags =3D (p_flags & ~clear_flags) | set_flags; + + /* + * Need to flush if an overlapping executable region + * removes exec, or adds write. + */ + if ((p_flags & PAGE_EXEC) + && (!(merge_flags & PAGE_EXEC) + || (merge_flags & ~p_flags & PAGE_WRITE))) { + inval_tb =3D true; + } + + /* + * If there is an exact range match, update and return without + * attempting to merge with adjacent regions. + */ + if (start =3D=3D p_start && last =3D=3D p_last) { + if (merge_flags) { + p->flags =3D merge_flags; + } else { + interval_tree_remove(&p->itree, &pageflags_root); + g_free(p); + } + goto done; + } + + /* + * If sticky bits affect the original mapping, then we must be more + * careful about the existing intervals and the separate flags. 
+ */ + if (set_flags !=3D merge_flags) { + if (p_start < start) { + interval_tree_remove(&p->itree, &pageflags_root); + p->itree.last =3D start - 1; + interval_tree_insert(&p->itree, &pageflags_root); + + if (last < p_last) { + if (merge_flags) { + pageflags_create(start, last, merge_flags); + } + pageflags_create(last + 1, p_last, p_flags); + } else { + if (merge_flags) { + pageflags_create(start, p_last, merge_flags); + } + if (p_last < last) { + start =3D p_last + 1; + goto restart; + } + } + } else { + if (start < p_start && set_flags) { + pageflags_create(start, p_start - 1, set_flags); + } + if (last < p_last) { + interval_tree_remove(&p->itree, &pageflags_root); + p->itree.start =3D last + 1; + interval_tree_insert(&p->itree, &pageflags_root); + if (merge_flags) { + pageflags_create(start, last, merge_flags); + } + } else { + if (merge_flags) { + p->flags =3D merge_flags; + } else { + interval_tree_remove(&p->itree, &pageflags_root); + g_free(p); + } + if (p_last < last) { + start =3D p_last + 1; + goto restart; + } + } + } + goto done; + } + + /* If flags are not changing for this range, incorporate it. */ + if (set_flags =3D=3D p_flags) { + if (start < p_start) { + interval_tree_remove(&p->itree, &pageflags_root); + p->itree.start =3D start; + interval_tree_insert(&p->itree, &pageflags_root); + } + if (p_last < last) { + start =3D p_last + 1; + goto restart; + } + goto done; + } + + /* Maybe split out head and/or tail ranges with the original flags. 
*/ + interval_tree_remove(&p->itree, &pageflags_root); + if (p_start < start) { + p->itree.last =3D start - 1; + interval_tree_insert(&p->itree, &pageflags_root); + + if (p_last < last) { + goto restart; + } + if (last < p_last) { + pageflags_create(last + 1, p_last, p_flags); + } + } else if (last < p_last) { + p->itree.start =3D last + 1; + interval_tree_insert(&p->itree, &pageflags_root); + } else { + g_free(p); + goto restart; + } + if (set_flags) { + pageflags_create(start, last, set_flags); + } + + done: + return inval_tb; +} + /* * Modify the flags of a page and invalidate the code if necessary. * The flag PAGE_WRITE_ORG is positioned automatically depending @@ -274,49 +480,41 @@ int page_get_flags(target_ulong address) */ void page_set_flags(target_ulong start, target_ulong end, int flags) { - target_ulong addr, len; - bool reset, inval_tb =3D false; + target_ulong last; + bool reset =3D false; + bool inval_tb =3D false; =20 /* This function should never be called with addresses outside the guest address space. If this assert fires, it probably indicates a missing call to h2g_valid. */ - assert(end - 1 <=3D GUEST_ADDR_MAX); assert(start < end); + assert(end - 1 <=3D GUEST_ADDR_MAX); /* Only set PAGE_ANON with new mappings. */ assert(!(flags & PAGE_ANON) || (flags & PAGE_RESET)); assert_memory_lock(); =20 start =3D start & TARGET_PAGE_MASK; end =3D TARGET_PAGE_ALIGN(end); + last =3D end - 1; =20 - if (flags & PAGE_WRITE) { - flags |=3D PAGE_WRITE_ORG; - } - reset =3D !(flags & PAGE_VALID) || (flags & PAGE_RESET); - if (reset) { - page_reset_target_data(start, end); - } - flags &=3D ~PAGE_RESET; - - for (addr =3D start, len =3D end - start; - len !=3D 0; - len -=3D TARGET_PAGE_SIZE, addr +=3D TARGET_PAGE_SIZE) { - PageDesc *p =3D page_find_alloc(addr >> TARGET_PAGE_BITS, true); - - /* - * If the page was executable, but is reset, or is no longer - * executable, or has become writable, then invalidate any code. 
- */ - if ((p->flags & PAGE_EXEC) - && (reset || - !(flags & PAGE_EXEC) || - (flags & ~p->flags & PAGE_WRITE))) { - inval_tb =3D true; + if (!(flags & PAGE_VALID)) { + flags =3D 0; + } else { + reset =3D flags & PAGE_RESET; + flags &=3D ~PAGE_RESET; + if (flags & PAGE_WRITE) { + flags |=3D PAGE_WRITE_ORG; } - /* Using mprotect on a page does not change sticky bits. */ - p->flags =3D (reset ? 0 : p->flags & PAGE_STICKY) | flags; } =20 + if (!flags || reset) { + page_reset_target_data(start, end); + inval_tb |=3D pageflags_unset(start, last); + } + if (flags) { + inval_tb |=3D pageflags_set_clear(start, last, flags, + ~(reset ? 0 : PAGE_STICKY)); + } if (inval_tb) { tb_invalidate_phys_range(start, end); } @@ -324,87 +522,89 @@ void page_set_flags(target_ulong start, target_ulong = end, int flags) =20 int page_check_range(target_ulong start, target_ulong len, int flags) { - PageDesc *p; - target_ulong end; - target_ulong addr; - - /* - * This function should never be called with addresses outside the - * guest address space. If this assert fires, it probably indicates - * a missing call to h2g_valid. - */ - if (TARGET_ABI_BITS > L1_MAP_ADDR_SPACE_BITS) { - assert(start < ((target_ulong)1 << L1_MAP_ADDR_SPACE_BITS)); - } + target_ulong last; =20 if (len =3D=3D 0) { - return 0; - } - if (start + len - 1 < start) { - /* We've wrapped around. 
*/ - return -1; + return 0; /* trivial length */ } =20 - /* must do before we loose bits in the next step */ - end =3D TARGET_PAGE_ALIGN(start + len); - start =3D start & TARGET_PAGE_MASK; + last =3D start + len - 1; + if (last < start) { + return -1; /* wrap around */ + } + + while (true) { + PageFlagsNode *p =3D pageflags_find(start, last); + int missing; =20 - for (addr =3D start, len =3D end - start; - len !=3D 0; - len -=3D TARGET_PAGE_SIZE, addr +=3D TARGET_PAGE_SIZE) { - p =3D page_find(addr >> TARGET_PAGE_BITS); if (!p) { - return -1; + return -1; /* entire region invalid */ } - if (!(p->flags & PAGE_VALID)) { - return -1; + if (start < p->itree.start) { + return -1; /* initial bytes invalid */ } =20 - if ((flags & PAGE_READ) && !(p->flags & PAGE_READ)) { - return -1; + missing =3D flags & ~p->flags; + if (missing & PAGE_READ) { + return -1; /* page not readable */ } - if (flags & PAGE_WRITE) { + if (missing & PAGE_WRITE) { if (!(p->flags & PAGE_WRITE_ORG)) { + return -1; /* page not writable */ + } + /* Asking about writable, but has been protected: undo. */ + if (!page_unprotect(start, 0)) { return -1; } - /* unprotect the page if it was put read-only because it - contains translated code */ - if (!(p->flags & PAGE_WRITE)) { - if (!page_unprotect(addr, 0)) { - return -1; - } + /* TODO: page_unprotect should take a range, not a single page= . */ + if (last - start < TARGET_PAGE_SIZE) { + return 0; /* ok */ } + start +=3D TARGET_PAGE_SIZE; + continue; } + + if (last <=3D p->itree.last) { + return 0; /* ok */ + } + start =3D p->itree.last + 1; } - return 0; } =20 -void page_protect(tb_page_addr_t page_addr) +void page_protect(tb_page_addr_t address) { - target_ulong addr; - PageDesc *p; + PageFlagsNode *p; + target_ulong start, last; int prot; =20 - p =3D page_find(page_addr >> TARGET_PAGE_BITS); - if (p && (p->flags & PAGE_WRITE)) { - /* - * Force the host page as non writable (writes will have a page fa= ult + - * mprotect overhead). 
- */ - page_addr &=3D qemu_host_page_mask; - prot =3D 0; - for (addr =3D page_addr; addr < page_addr + qemu_host_page_size; - addr +=3D TARGET_PAGE_SIZE) { + assert_memory_lock(); =20 - p =3D page_find(addr >> TARGET_PAGE_BITS); - if (!p) { - continue; - } + if (qemu_host_page_size <=3D TARGET_PAGE_SIZE) { + start =3D address & TARGET_PAGE_MASK; + last =3D start + TARGET_PAGE_SIZE - 1; + } else { + start =3D address & qemu_host_page_mask; + last =3D start + qemu_host_page_size - 1; + } + + p =3D pageflags_find(start, last); + if (!p) { + return; + } + prot =3D p->flags; + + if (unlikely(p->itree.last < last)) { + /* More than one protection region covers the one host page. */ + assert(TARGET_PAGE_SIZE < qemu_host_page_size); + while ((p =3D pageflags_next(p, start, last)) !=3D NULL) { prot |=3D p->flags; - p->flags &=3D ~PAGE_WRITE; } - mprotect(g2h_untagged(page_addr), qemu_host_page_size, - (prot & PAGE_BITS) & ~PAGE_WRITE); + } + + if (prot & PAGE_WRITE) { + pageflags_set_clear(start, last, 0, PAGE_WRITE); + mprotect(g2h_untagged(start), qemu_host_page_size, + prot & (PAGE_READ | PAGE_EXEC) ? PROT_READ : PROT_NONE); } } =20 @@ -417,10 +617,8 @@ void page_protect(tb_page_addr_t page_addr) */ int page_unprotect(target_ulong address, uintptr_t pc) { - unsigned int prot; + PageFlagsNode *p; bool current_tb_invalidated; - PageDesc *p; - target_ulong host_start, host_end, addr; =20 /* * Technically this isn't safe inside a signal handler. However we @@ -429,40 +627,54 @@ int page_unprotect(target_ulong address, uintptr_t pc) */ mmap_lock(); =20 - p =3D page_find(address >> TARGET_PAGE_BITS); - if (!p) { + p =3D pageflags_find(address, address); + + /* If this address was not really writable, nothing to do. */ + if (!p || !(p->flags & PAGE_WRITE_ORG)) { mmap_unlock(); return 0; } =20 - /* - * If the page was really writable, then we change its - * protection back to writable. 
-     */
-    if (p->flags & PAGE_WRITE_ORG) {
-        current_tb_invalidated = false;
-        if (p->flags & PAGE_WRITE) {
-            /*
-             * If the page is actually marked WRITE then assume this is because
-             * this thread raced with another one which got here first and
-             * set the page to PAGE_WRITE and did the TB invalidate for us.
-             */
+    current_tb_invalidated = false;
+    if (p->flags & PAGE_WRITE) {
+        /*
+         * If the page is actually marked WRITE then assume this is because
+         * this thread raced with another one which got here first and
+         * set the page to PAGE_WRITE and did the TB invalidate for us.
+         */
 #ifdef TARGET_HAS_PRECISE_SMC
-            TranslationBlock *current_tb = tcg_tb_lookup(pc);
-            if (current_tb) {
-                current_tb_invalidated = tb_cflags(current_tb) & CF_INVALID;
-            }
+        TranslationBlock *current_tb = tcg_tb_lookup(pc);
+        if (current_tb) {
+            current_tb_invalidated = tb_cflags(current_tb) & CF_INVALID;
+        }
 #endif
+    } else {
+        target_ulong start, len, i;
+        int prot;
+
+        if (qemu_host_page_size <= TARGET_PAGE_SIZE) {
+            start = address & TARGET_PAGE_MASK;
+            len = TARGET_PAGE_SIZE;
+            prot = p->flags | PAGE_WRITE;
+            pageflags_set_clear(start, start + len - 1, PAGE_WRITE, 0);
+            current_tb_invalidated = tb_invalidate_phys_page_unwind(start, pc);
         } else {
-            host_start = address & qemu_host_page_mask;
-            host_end = host_start + qemu_host_page_size;
-
+            start = address & qemu_host_page_mask;
+            len = qemu_host_page_size;
             prot = 0;
-            for (addr = host_start; addr < host_end; addr += TARGET_PAGE_SIZE) {
-                p = page_find(addr >> TARGET_PAGE_BITS);
-                p->flags |= PAGE_WRITE;
-                prot |= p->flags;
 
+            for (i = 0; i < len; i += TARGET_PAGE_SIZE) {
+                target_ulong addr = start + i;
+
+                p = pageflags_find(addr, addr);
+                if (p) {
+                    prot |= p->flags;
+                    if (p->flags & PAGE_WRITE_ORG) {
+                        prot |= PAGE_WRITE;
+                        pageflags_set_clear(addr, addr + TARGET_PAGE_SIZE - 1,
+                                            PAGE_WRITE, 0);
+                    }
+                }
                 /*
                  * Since the content will be modified, we must invalidate
                  * the
                 * corresponding translated code.
@@ -470,15 +682,16 @@ int page_unprotect(target_ulong address, uintptr_t pc)
                 current_tb_invalidated |=
                    tb_invalidate_phys_page_unwind(addr, pc);
             }
-            mprotect((void *)g2h_untagged(host_start), qemu_host_page_size,
-                     prot & PAGE_BITS);
         }
-        mmap_unlock();
-        /* If current TB was invalidated return to main loop */
-        return current_tb_invalidated ? 2 : 1;
+        if (prot & PAGE_EXEC) {
+            prot = (prot & ~PAGE_EXEC) | PAGE_READ;
+        }
+        mprotect((void *)g2h_untagged(start), len, prot & PAGE_BITS);
     }
     mmap_unlock();
-    return 0;
+
+    /* If current TB was invalidated return to main loop */
+    return current_tb_invalidated ? 2 : 1;
 }
 
 static int probe_access_internal(CPUArchState *env, target_ulong addr,
diff --git a/tests/tcg/multiarch/test-vma.c b/tests/tcg/multiarch/test-vma.c
new file mode 100644
index 0000000000..2893d60334
--- /dev/null
+++ b/tests/tcg/multiarch/test-vma.c
@@ -0,0 +1,22 @@
+/*
+ * Test very large vma allocations.
+ * The qemu out-of-memory condition was within the mmap syscall itself.
+ * If the syscall actually returns with MAP_FAILED, the test succeeded.
+ */
+#include <sys/mman.h>
+
+int main()
+{
+    int n = sizeof(size_t) == 4 ?
+        32 : 45;
+
+    for (int i = 28; i < n; i++) {
+        size_t l = (size_t)1 << i;
+        void *p = mmap(0, l, PROT_NONE,
+                       MAP_PRIVATE | MAP_ANONYMOUS | MAP_NORESERVE, -1, 0);
+        if (p == MAP_FAILED) {
+            break;
+        }
+        munmap(p, l);
+    }
+    return 0;
+}
-- 
2.34.1

From nobody Sun May 19 12:45:48 2024
From: Richard Henderson
To: qemu-devel@nongnu.org
Cc: peter.maydell@linaro.org
Subject: [PULL 07/13] accel/tcg: Move PageDesc tree into tb-maint.c for system
Date: Fri, 16 Dec 2022 10:52:59 -0800
Message-Id: <20221216185305.3429913-8-richard.henderson@linaro.org>
In-Reply-To: <20221216185305.3429913-1-richard.henderson@linaro.org>
References: <20221216185305.3429913-1-richard.henderson@linaro.org>
Now that PageDesc is not used for user-only, and for system
it is only used for tb maintenance, move the implementation
into tb-maint.c appropriately ifdefed.

We have not yet eliminated all references to PageDesc for
user-only, so retain a typedef to the structure without definition.

Signed-off-by: Richard Henderson
---
 accel/tcg/internal.h      |  49 +++------------
 accel/tcg/tb-maint.c      | 120 ++++++++++++++++++++++++++++++++++++--
 accel/tcg/translate-all.c |  95 ------------------------------
 3 files changed, 124 insertions(+), 140 deletions(-)

diff --git a/accel/tcg/internal.h b/accel/tcg/internal.h
index be19bdf088..14b89c4ee8 100644
--- a/accel/tcg/internal.h
+++ b/accel/tcg/internal.h
@@ -23,51 +23,13 @@
 #define assert_memory_lock() tcg_debug_assert(have_mmap_lock())
 #endif
 
-typedef struct PageDesc {
+typedef struct PageDesc PageDesc;
 #ifndef CONFIG_USER_ONLY
+struct PageDesc {
     QemuSpin lock;
     /* list of TBs intersecting this ram page */
     uintptr_t first_tb;
-#endif
-} PageDesc;
-
-/*
- * In system mode we want L1_MAP to be based on ram offsets,
- * while in user mode we want it to be based on virtual addresses.
- *
- * TODO: For user mode, see the caveat re host vs guest virtual
- * address spaces near GUEST_ADDR_MAX.
- */
-#if !defined(CONFIG_USER_ONLY)
-#if HOST_LONG_BITS < TARGET_PHYS_ADDR_SPACE_BITS
-# define L1_MAP_ADDR_SPACE_BITS HOST_LONG_BITS
-#else
-# define L1_MAP_ADDR_SPACE_BITS TARGET_PHYS_ADDR_SPACE_BITS
-#endif
-#else
-# define L1_MAP_ADDR_SPACE_BITS MIN(HOST_LONG_BITS, TARGET_ABI_BITS)
-#endif
-
-/* Size of the L2 (and L3, etc) page tables. */
-#define V_L2_BITS 10
-#define V_L2_SIZE (1 << V_L2_BITS)
-
-/*
- * L1 Mapping properties
- */
-extern int v_l1_size;
-extern int v_l1_shift;
-extern int v_l2_levels;
-
-/*
- * The bottom level has pointers to PageDesc, and is indexed by
- * anything from 4 to (V_L2_BITS + 3) bits, depending on target page size.
- */
-#define V_L1_MIN_BITS 4
-#define V_L1_MAX_BITS (V_L2_BITS + 3)
-#define V_L1_MAX_SIZE (1 << V_L1_MAX_BITS)
-
-extern void *l1_map[V_L1_MAX_SIZE];
+};
 
 PageDesc *page_find_alloc(tb_page_addr_t index, bool alloc);
 
@@ -76,6 +38,11 @@ static inline PageDesc *page_find(tb_page_addr_t index)
     return page_find_alloc(index, false);
 }
 
+void page_table_config_init(void);
+#else
+static inline void page_table_config_init(void) { }
+#endif
+
 /* list iterators for lists of tagged pointers in TranslationBlock */
 #define TB_FOR_EACH_TAGGED(head, tb, n, field)                          \
     for (n = (head) & 1, tb = (TranslationBlock *)((head) & ~1);        \
diff --git a/accel/tcg/tb-maint.c b/accel/tcg/tb-maint.c
index 20e86c813d..d32e5f80c8 100644
--- a/accel/tcg/tb-maint.c
+++ b/accel/tcg/tb-maint.c
@@ -127,6 +127,111 @@ static PageForEachNext foreach_tb_next(PageForEachNext tb,
 }
 
 #else
+/*
+ * In system mode we want L1_MAP to be based on ram offsets.
+ */
+#if HOST_LONG_BITS < TARGET_PHYS_ADDR_SPACE_BITS
+# define L1_MAP_ADDR_SPACE_BITS HOST_LONG_BITS
+#else
+# define L1_MAP_ADDR_SPACE_BITS TARGET_PHYS_ADDR_SPACE_BITS
+#endif
+
+/* Size of the L2 (and L3, etc) page tables. */
+#define V_L2_BITS 10
+#define V_L2_SIZE (1 << V_L2_BITS)
+
+/*
+ * L1 Mapping properties
+ */
+static int v_l1_size;
+static int v_l1_shift;
+static int v_l2_levels;
+
+/*
+ * The bottom level has pointers to PageDesc, and is indexed by
+ * anything from 4 to (V_L2_BITS + 3) bits, depending on target page size.
+ */
+#define V_L1_MIN_BITS 4
+#define V_L1_MAX_BITS (V_L2_BITS + 3)
+#define V_L1_MAX_SIZE (1 << V_L1_MAX_BITS)
+
+static void *l1_map[V_L1_MAX_SIZE];
+
+void page_table_config_init(void)
+{
+    uint32_t v_l1_bits;
+
+    assert(TARGET_PAGE_BITS);
+    /* The bits remaining after N lower levels of page tables.
+     */
+    v_l1_bits = (L1_MAP_ADDR_SPACE_BITS - TARGET_PAGE_BITS) % V_L2_BITS;
+    if (v_l1_bits < V_L1_MIN_BITS) {
+        v_l1_bits += V_L2_BITS;
+    }
+
+    v_l1_size = 1 << v_l1_bits;
+    v_l1_shift = L1_MAP_ADDR_SPACE_BITS - TARGET_PAGE_BITS - v_l1_bits;
+    v_l2_levels = v_l1_shift / V_L2_BITS - 1;
+
+    assert(v_l1_bits <= V_L1_MAX_BITS);
+    assert(v_l1_shift % V_L2_BITS == 0);
+    assert(v_l2_levels >= 0);
+}
+
+PageDesc *page_find_alloc(tb_page_addr_t index, bool alloc)
+{
+    PageDesc *pd;
+    void **lp;
+    int i;
+
+    /* Level 1.  Always allocated.  */
+    lp = l1_map + ((index >> v_l1_shift) & (v_l1_size - 1));
+
+    /* Level 2..N-1.  */
+    for (i = v_l2_levels; i > 0; i--) {
+        void **p = qatomic_rcu_read(lp);
+
+        if (p == NULL) {
+            void *existing;
+
+            if (!alloc) {
+                return NULL;
+            }
+            p = g_new0(void *, V_L2_SIZE);
+            existing = qatomic_cmpxchg(lp, NULL, p);
+            if (unlikely(existing)) {
+                g_free(p);
+                p = existing;
+            }
+        }
+
+        lp = p + ((index >> (i * V_L2_BITS)) & (V_L2_SIZE - 1));
+    }
+
+    pd = qatomic_rcu_read(lp);
+    if (pd == NULL) {
+        void *existing;
+
+        if (!alloc) {
+            return NULL;
+        }
+
+        pd = g_new0(PageDesc, V_L2_SIZE);
+        for (int i = 0; i < V_L2_SIZE; i++) {
+            qemu_spin_init(&pd[i].lock);
+        }
+
+        existing = qatomic_cmpxchg(lp, NULL, pd);
+        if (unlikely(existing)) {
+            for (int i = 0; i < V_L2_SIZE; i++) {
+                qemu_spin_destroy(&pd[i].lock);
+            }
+            g_free(pd);
+            pd = existing;
+        }
+    }
+
+    return pd + (index & (V_L2_SIZE - 1));
+}
 
 /* Set to NULL all the 'first_tb' fields in all PageDescs.
  */
 static void tb_remove_all_1(int level, void **lp)
@@ -420,6 +525,17 @@ static void tb_phys_invalidate__locked(TranslationBlock *tb)
     qemu_thread_jit_execute();
 }
 
+#ifdef CONFIG_USER_ONLY
+static inline void page_lock_pair(PageDesc **ret_p1, tb_page_addr_t phys1,
+                                  PageDesc **ret_p2, tb_page_addr_t phys2,
+                                  bool alloc)
+{
+    *ret_p1 = NULL;
+    *ret_p2 = NULL;
+}
+static inline void page_lock_tb(const TranslationBlock *tb) { }
+static inline void page_unlock_tb(const TranslationBlock *tb) { }
+#else
 static void page_lock_pair(PageDesc **ret_p1, tb_page_addr_t phys1,
                            PageDesc **ret_p2, tb_page_addr_t phys2, bool alloc)
 {
@@ -460,10 +576,6 @@ static void page_lock_pair(PageDesc **ret_p1, tb_page_addr_t phys1,
     }
 }
 
-#ifdef CONFIG_USER_ONLY
-static inline void page_lock_tb(const TranslationBlock *tb) { }
-static inline void page_unlock_tb(const TranslationBlock *tb) { }
-#else
 /* lock the page(s) of a TB in the correct acquisition order */
 static void page_lock_tb(const TranslationBlock *tb)
 {
diff --git a/accel/tcg/translate-all.c b/accel/tcg/translate-all.c
index 0d7596fcb8..90787bc04f 100644
--- a/accel/tcg/translate-all.c
+++ b/accel/tcg/translate-all.c
@@ -114,37 +114,8 @@ QEMU_BUILD_BUG_ON(CPU_TRACE_DSTATE_MAX_EVENTS >
                   sizeof_field(TranslationBlock, trace_vcpu_dstate) * BITS_PER_BYTE);
 
-/*
- * L1 Mapping properties
- */
-int v_l1_size;
-int v_l1_shift;
-int v_l2_levels;
-
-void *l1_map[V_L1_MAX_SIZE];
-
 TBContext tb_ctx;
 
-static void page_table_config_init(void)
-{
-    uint32_t v_l1_bits;
-
-    assert(TARGET_PAGE_BITS);
-    /* The bits remaining after N lower levels of page tables.
-     */
-    v_l1_bits = (L1_MAP_ADDR_SPACE_BITS - TARGET_PAGE_BITS) % V_L2_BITS;
-    if (v_l1_bits < V_L1_MIN_BITS) {
-        v_l1_bits += V_L2_BITS;
-    }
-
-    v_l1_size = 1 << v_l1_bits;
-    v_l1_shift = L1_MAP_ADDR_SPACE_BITS - TARGET_PAGE_BITS - v_l1_bits;
-    v_l2_levels = v_l1_shift / V_L2_BITS - 1;
-
-    assert(v_l1_bits <= V_L1_MAX_BITS);
-    assert(v_l1_shift % V_L2_BITS == 0);
-    assert(v_l2_levels >= 0);
-}
-
 /* Encode VAL as a signed leb128 sequence at P.
    Return P incremented past the encoded value.  */
 static uint8_t *encode_sleb128(uint8_t *p, target_long val)
@@ -404,72 +375,6 @@ void page_init(void)
 #endif
 }
 
-PageDesc *page_find_alloc(tb_page_addr_t index, bool alloc)
-{
-    PageDesc *pd;
-    void **lp;
-    int i;
-
-    /* Level 1.  Always allocated.  */
-    lp = l1_map + ((index >> v_l1_shift) & (v_l1_size - 1));
-
-    /* Level 2..N-1.  */
-    for (i = v_l2_levels; i > 0; i--) {
-        void **p = qatomic_rcu_read(lp);
-
-        if (p == NULL) {
-            void *existing;
-
-            if (!alloc) {
-                return NULL;
-            }
-            p = g_new0(void *, V_L2_SIZE);
-            existing = qatomic_cmpxchg(lp, NULL, p);
-            if (unlikely(existing)) {
-                g_free(p);
-                p = existing;
-            }
-        }
-
-        lp = p + ((index >> (i * V_L2_BITS)) & (V_L2_SIZE - 1));
-    }
-
-    pd = qatomic_rcu_read(lp);
-    if (pd == NULL) {
-        void *existing;
-
-        if (!alloc) {
-            return NULL;
-        }
-        pd = g_new0(PageDesc, V_L2_SIZE);
-#ifndef CONFIG_USER_ONLY
-        {
-            int i;
-
-            for (i = 0; i < V_L2_SIZE; i++) {
-                qemu_spin_init(&pd[i].lock);
-            }
-        }
-#endif
-        existing = qatomic_cmpxchg(lp, NULL, pd);
-        if (unlikely(existing)) {
-#ifndef CONFIG_USER_ONLY
-            {
-                int i;
-
-                for (i = 0; i < V_L2_SIZE; i++) {
-                    qemu_spin_destroy(&pd[i].lock);
-                }
-            }
-#endif
-            g_free(pd);
-            pd = existing;
-        }
-    }
-
-    return pd + (index & (V_L2_SIZE - 1));
-}
-
 /* In user-mode page locks aren't used; mmap_lock is enough */
 #ifdef CONFIG_USER_ONLY
 struct page_collection *
-- 
2.34.1

From nobody Sun May 19 12:45:48 2024
From: Richard Henderson
To: qemu-devel@nongnu.org
Cc: peter.maydell@linaro.org, Alex Bennée
Subject: [PULL 08/13] accel/tcg: Move remainder of page locking to tb-maint.c
Date: Fri, 16 Dec 2022 10:53:00 -0800
Message-Id: <20221216185305.3429913-9-richard.henderson@linaro.org>
In-Reply-To: <20221216185305.3429913-1-richard.henderson@linaro.org>
References: <20221216185305.3429913-1-richard.henderson@linaro.org>

The only thing that still touches PageDesc in translate-all.c
are some locking routines related to tb-maint.c which have not
yet been moved.  Do so now.
Move some code up in tb-maint.c as well, to untangle the maze
of ifdefs, and allow a sensible final ordering.

Move some declarations from exec/translate-all.h to internal.h,
as they are only used within accel/tcg/.

Reviewed-by: Alex Bennée
Signed-off-by: Richard Henderson
---
 accel/tcg/internal.h         |  68 ++---
 include/exec/translate-all.h |   6 -
 accel/tcg/tb-maint.c         | 473 +++++++++++++++++++++++++++++------
 accel/tcg/translate-all.c    | 301 ----------------------
 4 files changed, 411 insertions(+), 437 deletions(-)

diff --git a/accel/tcg/internal.h b/accel/tcg/internal.h
index 14b89c4ee8..e1429a53ac 100644
--- a/accel/tcg/internal.h
+++ b/accel/tcg/internal.h
@@ -23,62 +23,28 @@
 #define assert_memory_lock() tcg_debug_assert(have_mmap_lock())
 #endif
 
-typedef struct PageDesc PageDesc;
-#ifndef CONFIG_USER_ONLY
-struct PageDesc {
-    QemuSpin lock;
-    /* list of TBs intersecting this ram page */
-    uintptr_t first_tb;
-};
-
-PageDesc *page_find_alloc(tb_page_addr_t index, bool alloc);
-
-static inline PageDesc *page_find(tb_page_addr_t index)
-{
-    return page_find_alloc(index, false);
-}
-
-void page_table_config_init(void);
-#else
-static inline void page_table_config_init(void) { }
-#endif
-
-/* list iterators for lists of tagged pointers in TranslationBlock */
-#define TB_FOR_EACH_TAGGED(head, tb, n, field)                          \
-    for (n = (head) & 1, tb = (TranslationBlock *)((head) & ~1);        \
-         tb; tb = (TranslationBlock *)tb->field[n], n = (uintptr_t)tb & 1, \
-         tb = (TranslationBlock *)((uintptr_t)tb & ~1))
-
-#define TB_FOR_EACH_JMP(head_tb, tb, n) \
-    TB_FOR_EACH_TAGGED((head_tb)->jmp_list_head, tb, n, jmp_list_next)
-
-/* In user-mode page locks aren't used; mmap_lock is enough */
-#ifdef CONFIG_USER_ONLY
-#define assert_page_locked(pd) tcg_debug_assert(have_mmap_lock())
-static inline void page_lock(PageDesc *pd) { }
-static inline void page_unlock(PageDesc *pd) { }
-#else
-#ifdef CONFIG_DEBUG_TCG
-void do_assert_page_locked(const PageDesc *pd, const char *file, int
-                           line);
-#define assert_page_locked(pd) do_assert_page_locked(pd, __FILE__, __LINE__)
-#else
-#define assert_page_locked(pd)
-#endif
-void page_lock(PageDesc *pd);
-void page_unlock(PageDesc *pd);
-
-/* TODO: For now, still shared with translate-all.c for system mode. */
-typedef int PageForEachNext;
-#define PAGE_FOR_EACH_TB(start, end, pagedesc, tb, n) \
-    TB_FOR_EACH_TAGGED((pagedesc)->first_tb, tb, n, page_next)
-
-#endif
-#if !defined(CONFIG_USER_ONLY) && defined(CONFIG_DEBUG_TCG)
+#if defined(CONFIG_SOFTMMU) && defined(CONFIG_DEBUG_TCG)
 void assert_no_pages_locked(void);
 #else
 static inline void assert_no_pages_locked(void) { }
 #endif
 
+#ifdef CONFIG_USER_ONLY
+static inline void page_table_config_init(void) { }
+#else
+void page_table_config_init(void);
+#endif
+
+#ifdef CONFIG_SOFTMMU
+struct page_collection;
+void tb_invalidate_phys_page_fast(struct page_collection *pages,
+                                  tb_page_addr_t start, int len,
+                                  uintptr_t retaddr);
+struct page_collection *page_collection_lock(tb_page_addr_t start,
+                                             tb_page_addr_t end);
+void page_collection_unlock(struct page_collection *set);
+#endif /* CONFIG_SOFTMMU */
+
 TranslationBlock *tb_gen_code(CPUState *cpu, target_ulong pc,
                               target_ulong cs_base, uint32_t flags,
                               int cflags);
diff --git a/include/exec/translate-all.h b/include/exec/translate-all.h
index 3e9cb91565..88602ae8d8 100644
--- a/include/exec/translate-all.h
+++ b/include/exec/translate-all.h
@@ -23,12 +23,6 @@
 
 
 /* translate-all.c */
-struct page_collection *page_collection_lock(tb_page_addr_t start,
-                                             tb_page_addr_t end);
-void page_collection_unlock(struct page_collection *set);
-void tb_invalidate_phys_page_fast(struct page_collection *pages,
-                                  tb_page_addr_t start, int len,
-                                  uintptr_t retaddr);
 void tb_invalidate_phys_page(tb_page_addr_t addr);
 void tb_check_watchpoint(CPUState *cpu, uintptr_t retaddr);
 
diff --git a/accel/tcg/tb-maint.c b/accel/tcg/tb-maint.c
index d32e5f80c8..1676d359f2 100644
--- a/accel/tcg/tb-maint.c
+++ b/accel/tcg/tb-maint.c
@@ -30,6 +30,15 @@
 #include "internal.h"
 
 
+/* List iterators for lists of tagged pointers in TranslationBlock. */
+#define TB_FOR_EACH_TAGGED(head, tb, n, field)                          \
+    for (n = (head) & 1, tb = (TranslationBlock *)((head) & ~1);        \
+         tb; tb = (TranslationBlock *)tb->field[n], n = (uintptr_t)tb & 1, \
+         tb = (TranslationBlock *)((uintptr_t)tb & ~1))
+
+#define TB_FOR_EACH_JMP(head_tb, tb, n) \
+    TB_FOR_EACH_TAGGED((head_tb)->jmp_list_head, tb, n, jmp_list_next)
+
 static bool tb_cmp(const void *ap, const void *bp)
 {
     const TranslationBlock *a = ap;
@@ -51,7 +60,27 @@ void tb_htable_init(void)
     qht_init(&tb_ctx.htable, tb_cmp, CODE_GEN_HTABLE_SIZE, mode);
 }
 
+typedef struct PageDesc PageDesc;
+
 #ifdef CONFIG_USER_ONLY
+
+/*
+ * In user-mode page locks aren't used; mmap_lock is enough.
+ */
+#define assert_page_locked(pd) tcg_debug_assert(have_mmap_lock())
+
+static inline void page_lock_pair(PageDesc **ret_p1, tb_page_addr_t phys1,
+                                  PageDesc **ret_p2, tb_page_addr_t phys2,
+                                  bool alloc)
+{
+    *ret_p1 = NULL;
+    *ret_p2 = NULL;
+}
+
+static inline void page_unlock(PageDesc *pd) { }
+static inline void page_lock_tb(const TranslationBlock *tb) { }
+static inline void page_unlock_tb(const TranslationBlock *tb) { }
+
 /*
  * For user-only, since we are protecting all of memory with a single lock,
  * and because the two pages of a TranslationBlock are always contiguous,
@@ -157,6 +186,12 @@ static int v_l2_levels;
 
 static void *l1_map[V_L1_MAX_SIZE];
 
+struct PageDesc {
+    QemuSpin lock;
+    /* list of TBs intersecting this ram page */
+    uintptr_t first_tb;
+};
+
 void page_table_config_init(void)
 {
     uint32_t v_l1_bits;
@@ -177,7 +212,7 @@ void page_table_config_init(void)
     assert(v_l2_levels >= 0);
 }
 
-PageDesc *page_find_alloc(tb_page_addr_t index, bool alloc)
+static PageDesc *page_find_alloc(tb_page_addr_t index, bool alloc)
 {
     PageDesc *pd;
     void **lp;
@@ -233,6 +268,303 @@ PageDesc *page_find_alloc(tb_page_addr_t index, bool alloc)
     return pd + (index & (V_L2_SIZE - 1));
 }
 
+static inline PageDesc *page_find(tb_page_addr_t index)
+{
+    return page_find_alloc(index, false);
+}
+
+/**
+ * struct page_entry - page descriptor entry
+ * @pd:     pointer to the &struct PageDesc of the page this entry represents
+ * @index:  page index of the page
+ * @locked: whether the page is locked
+ *
+ * This struct helps us keep track of the locked state of a page, without
+ * bloating &struct PageDesc.
+ *
+ * A page lock protects accesses to all fields of &struct PageDesc.
+ *
+ * See also: &struct page_collection.
+ */
+struct page_entry {
+    PageDesc *pd;
+    tb_page_addr_t index;
+    bool locked;
+};
+
+/**
+ * struct page_collection - tracks a set of pages (i.e. &struct page_entry's)
+ * @tree:   Binary search tree (BST) of the pages, with key == page index
+ * @max:    Pointer to the page in @tree with the highest page index
+ *
+ * To avoid deadlock we lock pages in ascending order of page index.
+ * When operating on a set of pages, we need to keep track of them so that
+ * we can lock them in order and also unlock them later. For this we collect
+ * pages (i.e. &struct page_entry's) in a binary search @tree. Given that the
+ * @tree implementation we use does not provide an O(1) operation to obtain the
+ * highest-ranked element, we use @max to keep track of the inserted page
+ * with the highest index. This is valuable because if a page is not in
+ * the tree and its index is higher than @max's, then we can lock it
+ * without breaking the locking order rule.
+ *
+ * Note on naming: 'struct page_set' would be shorter, but we already have a few
+ * page_set_*() helpers, so page_collection is used instead to avoid confusion.
+ *
+ * See also: page_collection_lock().
+ */
+struct page_collection {
+    GTree *tree;
+    struct page_entry *max;
+};
+
+typedef int PageForEachNext;
+#define PAGE_FOR_EACH_TB(start, end, pagedesc, tb, n) \
+    TB_FOR_EACH_TAGGED((pagedesc)->first_tb, tb, n, page_next)
+
+#ifdef CONFIG_DEBUG_TCG
+
+static __thread GHashTable *ht_pages_locked_debug;
+
+static void ht_pages_locked_debug_init(void)
+{
+    if (ht_pages_locked_debug) {
+        return;
+    }
+    ht_pages_locked_debug = g_hash_table_new(NULL, NULL);
+}
+
+static bool page_is_locked(const PageDesc *pd)
+{
+    PageDesc *found;
+
+    ht_pages_locked_debug_init();
+    found = g_hash_table_lookup(ht_pages_locked_debug, pd);
+    return !!found;
+}
+
+static void page_lock__debug(PageDesc *pd)
+{
+    ht_pages_locked_debug_init();
+    g_assert(!page_is_locked(pd));
+    g_hash_table_insert(ht_pages_locked_debug, pd, pd);
+}
+
+static void page_unlock__debug(const PageDesc *pd)
+{
+    bool removed;
+
+    ht_pages_locked_debug_init();
+    g_assert(page_is_locked(pd));
+    removed = g_hash_table_remove(ht_pages_locked_debug, pd);
+    g_assert(removed);
+}
+
+static void do_assert_page_locked(const PageDesc *pd,
+                                  const char *file, int line)
+{
+    if (unlikely(!page_is_locked(pd))) {
+        error_report("assert_page_lock: PageDesc %p not locked @ %s:%d",
+                     pd, file, line);
+        abort();
+    }
+}
+#define assert_page_locked(pd) do_assert_page_locked(pd, __FILE__, __LINE__)
+
+void assert_no_pages_locked(void)
+{
+    ht_pages_locked_debug_init();
+    g_assert(g_hash_table_size(ht_pages_locked_debug) == 0);
+}
+
+#else /* !CONFIG_DEBUG_TCG */
+
+static inline void page_lock__debug(const PageDesc *pd) { }
+static inline void page_unlock__debug(const PageDesc *pd) { }
+static inline void assert_page_locked(const PageDesc *pd) { }
+
+#endif /* CONFIG_DEBUG_TCG */
+
+static void page_lock(PageDesc *pd)
+{
+    page_lock__debug(pd);
+    qemu_spin_lock(&pd->lock);
+}
+
+static void page_unlock(PageDesc *pd)
+{
+    qemu_spin_unlock(&pd->lock);
+    page_unlock__debug(pd);
+}
+
+static inline struct page_entry *
+page_entry_new(PageDesc *pd, tb_page_addr_t index)
+{
+    struct page_entry *pe = g_malloc(sizeof(*pe));
+
+    pe->index = index;
+    pe->pd = pd;
+    pe->locked = false;
+    return pe;
+}
+
+static void page_entry_destroy(gpointer p)
+{
+    struct page_entry *pe = p;
+
+    g_assert(pe->locked);
+    page_unlock(pe->pd);
+    g_free(pe);
+}
+
+/* returns false on success */
+static bool page_entry_trylock(struct page_entry *pe)
+{
+    bool busy;
+
+    busy = qemu_spin_trylock(&pe->pd->lock);
+    if (!busy) {
+        g_assert(!pe->locked);
+        pe->locked = true;
+        page_lock__debug(pe->pd);
+    }
+    return busy;
+}
+
+static void do_page_entry_lock(struct page_entry *pe)
+{
+    page_lock(pe->pd);
+    g_assert(!pe->locked);
+    pe->locked = true;
+}
+
+static gboolean page_entry_lock(gpointer key, gpointer value, gpointer data)
+{
+    struct page_entry *pe = value;
+
+    do_page_entry_lock(pe);
+    return FALSE;
+}
+
+static gboolean page_entry_unlock(gpointer key, gpointer value, gpointer data)
+{
+    struct page_entry *pe = value;
+
+    if (pe->locked) {
+        pe->locked = false;
+        page_unlock(pe->pd);
+    }
+    return FALSE;
+}
+
+/*
+ * Trylock a page, and if successful, add the page to a collection.
+ * Returns true ("busy") if the page could not be locked; false otherwise.
+ */
+static bool page_trylock_add(struct page_collection *set, tb_page_addr_t addr)
+{
+    tb_page_addr_t index = addr >> TARGET_PAGE_BITS;
+    struct page_entry *pe;
+    PageDesc *pd;
+
+    pe = g_tree_lookup(set->tree, &index);
+    if (pe) {
+        return false;
+    }
+
+    pd = page_find(index);
+    if (pd == NULL) {
+        return false;
+    }
+
+    pe = page_entry_new(pd, index);
+    g_tree_insert(set->tree, &pe->index, pe);
+
+    /*
+     * If this is either (1) the first insertion or (2) a page whose index
+     * is higher than any other so far, just lock the page and move on.
+     */
+    if (set->max == NULL || pe->index > set->max->index) {
+        set->max = pe;
+        do_page_entry_lock(pe);
+        return false;
+    }
+    /*
+     * Try to acquire out-of-order lock; if busy, return busy so that we acquire
+     * locks in order.
+     */
+    return page_entry_trylock(pe);
+}
+
+static gint tb_page_addr_cmp(gconstpointer ap, gconstpointer bp, gpointer udata)
+{
+    tb_page_addr_t a = *(const tb_page_addr_t *)ap;
+    tb_page_addr_t b = *(const tb_page_addr_t *)bp;
+
+    if (a == b) {
+        return 0;
+    } else if (a < b) {
+        return -1;
+    }
+    return 1;
+}
+
+/*
+ * Lock a range of pages ([@start,@end[) as well as the pages of all
+ * intersecting TBs.
+ * Locking order: acquire locks in ascending order of page index.
+ */
+struct page_collection *
+page_collection_lock(tb_page_addr_t start, tb_page_addr_t end)
+{
+    struct page_collection *set = g_malloc(sizeof(*set));
+    tb_page_addr_t index;
+    PageDesc *pd;
+
+    start >>= TARGET_PAGE_BITS;
+    end >>= TARGET_PAGE_BITS;
+    g_assert(start <= end);
+
+    set->tree = g_tree_new_full(tb_page_addr_cmp, NULL, NULL,
+                                page_entry_destroy);
+    set->max = NULL;
+    assert_no_pages_locked();
+
+ retry:
+    g_tree_foreach(set->tree, page_entry_lock, NULL);
+
+    for (index = start; index <= end; index++) {
+        TranslationBlock *tb;
+        PageForEachNext n;
+
+        pd = page_find(index);
+        if (pd == NULL) {
+            continue;
+        }
+        if (page_trylock_add(set, index << TARGET_PAGE_BITS)) {
+            g_tree_foreach(set->tree, page_entry_unlock, NULL);
+            goto retry;
+        }
+        assert_page_locked(pd);
+        PAGE_FOR_EACH_TB(unused, unused, pd, tb, n) {
+            if (page_trylock_add(set, tb_page_addr0(tb)) ||
+                (tb_page_addr1(tb) != -1 &&
+                 page_trylock_add(set, tb_page_addr1(tb)))) {
+                /* drop all locks, and reacquire in order */
+                g_tree_foreach(set->tree, page_entry_unlock, NULL);
+                goto retry;
+            }
+        }
+    }
+    return set;
+}
+
+void page_collection_unlock(struct page_collection *set)
+{
+    /* entries are unlocked and freed via page_entry_destroy */
+    g_tree_destroy(set->tree);
+    g_free(set);
+}
+
 /* Set to NULL all the 'first_tb' fields in all PageDescs. */
 static void tb_remove_all_1(int level, void **lp)
 {
@@ -329,6 +661,66 @@ static void tb_remove(TranslationBlock *tb)
         tb_page_remove(pd, tb);
     }
 }
+
+static void page_lock_pair(PageDesc **ret_p1, tb_page_addr_t phys1,
+                           PageDesc **ret_p2, tb_page_addr_t phys2, bool alloc)
+{
+    PageDesc *p1, *p2;
+    tb_page_addr_t page1;
+    tb_page_addr_t page2;
+
+    assert_memory_lock();
+    g_assert(phys1 != -1);
+
+    page1 = phys1 >> TARGET_PAGE_BITS;
+    page2 = phys2 >> TARGET_PAGE_BITS;
+
+    p1 = page_find_alloc(page1, alloc);
+    if (ret_p1) {
+        *ret_p1 = p1;
+    }
+    if (likely(phys2 == -1)) {
+        page_lock(p1);
+        return;
+    } else if (page1 == page2) {
+        page_lock(p1);
+        if (ret_p2) {
+            *ret_p2 = p1;
+        }
+        return;
+    }
+    p2 = page_find_alloc(page2, alloc);
+    if (ret_p2) {
+        *ret_p2 = p2;
+    }
+    if (page1 < page2) {
+        page_lock(p1);
+        page_lock(p2);
+    } else {
+        page_lock(p2);
+        page_lock(p1);
+    }
+}
+
+/* lock the page(s) of a TB in the correct acquisition order */
+static void page_lock_tb(const TranslationBlock *tb)
+{
+    page_lock_pair(NULL, tb_page_addr0(tb), NULL, tb_page_addr1(tb), false);
+}
+
+static void page_unlock_tb(const TranslationBlock *tb)
+{
+    PageDesc *p1 = page_find(tb_page_addr0(tb) >> TARGET_PAGE_BITS);
+
+    page_unlock(p1);
+    if (unlikely(tb_page_addr1(tb) != -1)) {
+        PageDesc *p2 = page_find(tb_page_addr1(tb) >> TARGET_PAGE_BITS);
+
+        if (p2 != p1) {
+            page_unlock(p2);
+        }
+    }
+}
 #endif /* CONFIG_USER_ONLY */
 
 /* flush all the translation blocks */
@@ -525,78 +917,6 @@ static void tb_phys_invalidate__locked(TranslationBlock *tb)
     qemu_thread_jit_execute();
 }
 
-#ifdef CONFIG_USER_ONLY
-static inline void page_lock_pair(PageDesc **ret_p1, tb_page_addr_t phys1,
-                                  PageDesc **ret_p2, tb_page_addr_t phys2,
-                                  bool alloc)
-{
-    *ret_p1 = NULL;
-    *ret_p2 = NULL;
-}
-static inline void page_lock_tb(const TranslationBlock *tb) { }
-static inline void page_unlock_tb(const TranslationBlock *tb) { }
-#else
-static void page_lock_pair(PageDesc **ret_p1, tb_page_addr_t phys1,
-                           PageDesc **ret_p2, tb_page_addr_t phys2, bool alloc)
-{
-    PageDesc *p1, *p2;
-    tb_page_addr_t page1;
-    tb_page_addr_t page2;
-
-    assert_memory_lock();
-    g_assert(phys1 != -1);
-
-    page1 = phys1 >> TARGET_PAGE_BITS;
-    page2 = phys2 >> TARGET_PAGE_BITS;
-
-    p1 = page_find_alloc(page1, alloc);
-    if (ret_p1) {
-        *ret_p1 = p1;
-    }
-    if (likely(phys2 == -1)) {
-        page_lock(p1);
-        return;
-    } else if (page1 == page2) {
-        page_lock(p1);
-        if (ret_p2) {
-            *ret_p2 = p1;
-        }
-        return;
-    }
-    p2 = page_find_alloc(page2, alloc);
-    if (ret_p2) {
-        *ret_p2 = p2;
-    }
-    if (page1 < page2) {
-        page_lock(p1);
-        page_lock(p2);
-    } else {
-        page_lock(p2);
-        page_lock(p1);
-    }
-}
-
-/* lock the page(s) of a TB in the correct acquisition order */
-static void page_lock_tb(const TranslationBlock *tb)
-{
-    page_lock_pair(NULL, tb_page_addr0(tb), NULL, tb_page_addr1(tb), false);
-}
-
-static void page_unlock_tb(const TranslationBlock *tb)
-{
-    PageDesc *p1 = page_find(tb_page_addr0(tb) >> TARGET_PAGE_BITS);
-
-    page_unlock(p1);
-    if (unlikely(tb_page_addr1(tb) != -1)) {
-        PageDesc *p2 = page_find(tb_page_addr1(tb) >> TARGET_PAGE_BITS);
-
-        if (p2 != p1) {
-            page_unlock(p2);
-        }
-    }
-}
-#endif
-
 /*
  * Invalidate one TB.
  * Called with mmap_lock held in user-mode.
@@ -746,8 +1066,7 @@ bool tb_invalidate_phys_page_unwind(tb_page_addr_t addr, uintptr_t pc)
 #else
 /*
  * @p must be non-NULL.
- * user-mode: call with mmap_lock held.
- * !user-mode: call with all @pages locked.
+ * Call with all @pages locked.
  */
 static void
 tb_invalidate_phys_page_range__locked(struct page_collection *pages,
@@ -817,8 +1136,6 @@ tb_invalidate_phys_page_range__locked(struct page_collection *pages,
 /*
  * Invalidate all TBs which intersect with the target physical
  * address page @addr.
- *
- * Called with mmap_lock held for user-mode emulation
  */
 void tb_invalidate_phys_page(tb_page_addr_t addr)
 {
@@ -844,8 +1161,6 @@ void tb_invalidate_phys_page(tb_page_addr_t addr)
  * 'is_cpu_write_access' should be true if called from a real cpu write
  * access: the virtual CPU will exit the current TB if code is modified inside
  * this TB.
- *
- * Called with mmap_lock held for user-mode emulation.
  */
 void tb_invalidate_phys_range(tb_page_addr_t start, tb_page_addr_t end)
 {
diff --git a/accel/tcg/translate-all.c b/accel/tcg/translate-all.c
index 90787bc04f..ed6656fb14 100644
--- a/accel/tcg/translate-all.c
+++ b/accel/tcg/translate-all.c
@@ -63,52 +63,6 @@
 #include "tb-context.h"
 #include "internal.h"
 
-/* make various TB consistency checks */
-
-/**
- * struct page_entry - page descriptor entry
- * @pd:     pointer to the &struct PageDesc of the page this entry represents
- * @index:  page index of the page
- * @locked: whether the page is locked
- *
- * This struct helps us keep track of the locked state of a page, without
- * bloating &struct PageDesc.
- *
- * A page lock protects accesses to all fields of &struct PageDesc.
- *
- * See also: &struct page_collection.
- */
-struct page_entry {
-    PageDesc *pd;
-    tb_page_addr_t index;
-    bool locked;
-};
-
-/**
- * struct page_collection - tracks a set of pages (i.e. &struct page_entry's)
- * @tree:   Binary search tree (BST) of the pages, with key == page index
- * @max:    Pointer to the page in @tree with the highest page index
- *
- * To avoid deadlock we lock pages in ascending order of page index.
- * When operating on a set of pages, we need to keep track of them so that
- * we can lock them in order and also unlock them later. For this we collect
- * pages (i.e. &struct page_entry's) in a binary search @tree.
Given that the
- * @tree implementation we use does not provide an O(1) operation to obtain the
- * highest-ranked element, we use @max to keep track of the inserted page
- * with the highest index. This is valuable because if a page is not in
- * the tree and its index is higher than @max's, then we can lock it
- * without breaking the locking order rule.
- *
- * Note on naming: 'struct page_set' would be shorter, but we already have a few
- * page_set_*() helpers, so page_collection is used instead to avoid confusion.
- *
- * See also: page_collection_lock().
- */
-struct page_collection {
-    GTree *tree;
-    struct page_entry *max;
-};
-
 /* Make sure all possible CPU event bits fit in tb->trace_vcpu_dstate */
 QEMU_BUILD_BUG_ON(CPU_TRACE_DSTATE_MAX_EVENTS >
                   sizeof_field(TranslationBlock, trace_vcpu_dstate)
@@ -375,261 +329,6 @@ void page_init(void)
 #endif
 }
 
-/* In user-mode page locks aren't used; mmap_lock is enough */
-#ifdef CONFIG_USER_ONLY
-struct page_collection *
-page_collection_lock(tb_page_addr_t start, tb_page_addr_t end)
-{
-    return NULL;
-}
-
-void page_collection_unlock(struct page_collection *set)
-{ }
-#else /* !CONFIG_USER_ONLY */
-
-#ifdef CONFIG_DEBUG_TCG
-
-static __thread GHashTable *ht_pages_locked_debug;
-
-static void ht_pages_locked_debug_init(void)
-{
-    if (ht_pages_locked_debug) {
-        return;
-    }
-    ht_pages_locked_debug = g_hash_table_new(NULL, NULL);
-}
-
-static bool page_is_locked(const PageDesc *pd)
-{
-    PageDesc *found;
-
-    ht_pages_locked_debug_init();
-    found = g_hash_table_lookup(ht_pages_locked_debug, pd);
-    return !!found;
-}
-
-static void page_lock__debug(PageDesc *pd)
-{
-    ht_pages_locked_debug_init();
-    g_assert(!page_is_locked(pd));
-    g_hash_table_insert(ht_pages_locked_debug, pd, pd);
-}
-
-static void page_unlock__debug(const PageDesc *pd)
-{
-    bool removed;
-
-    ht_pages_locked_debug_init();
-    g_assert(page_is_locked(pd));
-    removed = g_hash_table_remove(ht_pages_locked_debug, pd);
-    g_assert(removed);
-}
-
-void do_assert_page_locked(const PageDesc *pd, const char *file, int line)
-{
-    if (unlikely(!page_is_locked(pd))) {
-        error_report("assert_page_lock: PageDesc %p not locked @ %s:%d",
-                     pd, file, line);
-        abort();
-    }
-}
-
-void assert_no_pages_locked(void)
-{
-    ht_pages_locked_debug_init();
-    g_assert(g_hash_table_size(ht_pages_locked_debug) == 0);
-}
-
-#else /* !CONFIG_DEBUG_TCG */
-
-static inline void page_lock__debug(const PageDesc *pd) { }
-static inline void page_unlock__debug(const PageDesc *pd) { }
-
-#endif /* CONFIG_DEBUG_TCG */
-
-void page_lock(PageDesc *pd)
-{
-    page_lock__debug(pd);
-    qemu_spin_lock(&pd->lock);
-}
-
-void page_unlock(PageDesc *pd)
-{
-    qemu_spin_unlock(&pd->lock);
-    page_unlock__debug(pd);
-}
-
-static inline struct page_entry *
-page_entry_new(PageDesc *pd, tb_page_addr_t index)
-{
-    struct page_entry *pe = g_malloc(sizeof(*pe));
-
-    pe->index = index;
-    pe->pd = pd;
-    pe->locked = false;
-    return pe;
-}
-
-static void page_entry_destroy(gpointer p)
-{
-    struct page_entry *pe = p;
-
-    g_assert(pe->locked);
-    page_unlock(pe->pd);
-    g_free(pe);
-}
-
-/* returns false on success */
-static bool page_entry_trylock(struct page_entry *pe)
-{
-    bool busy;
-
-    busy = qemu_spin_trylock(&pe->pd->lock);
-    if (!busy) {
-        g_assert(!pe->locked);
-        pe->locked = true;
-        page_lock__debug(pe->pd);
-    }
-    return busy;
-}
-
-static void do_page_entry_lock(struct page_entry *pe)
-{
-    page_lock(pe->pd);
-    g_assert(!pe->locked);
-    pe->locked = true;
-}
-
-static gboolean page_entry_lock(gpointer key, gpointer value, gpointer data)
-{
-    struct page_entry *pe = value;
-
-    do_page_entry_lock(pe);
-    return FALSE;
-}
-
-static gboolean page_entry_unlock(gpointer key, gpointer value, gpointer data)
-{
-    struct page_entry *pe = value;
-
-    if (pe->locked) {
-        pe->locked = false;
-        page_unlock(pe->pd);
-    }
-    return FALSE;
-}
-
-/*
- * Trylock a page, and if successful, add the page to a collection.
- * Returns true ("busy") if the page could not be locked; false otherwise.
- */
-static bool page_trylock_add(struct page_collection *set, tb_page_addr_t addr)
-{
-    tb_page_addr_t index = addr >> TARGET_PAGE_BITS;
-    struct page_entry *pe;
-    PageDesc *pd;
-
-    pe = g_tree_lookup(set->tree, &index);
-    if (pe) {
-        return false;
-    }
-
-    pd = page_find(index);
-    if (pd == NULL) {
-        return false;
-    }
-
-    pe = page_entry_new(pd, index);
-    g_tree_insert(set->tree, &pe->index, pe);
-
-    /*
-     * If this is either (1) the first insertion or (2) a page whose index
-     * is higher than any other so far, just lock the page and move on.
-     */
-    if (set->max == NULL || pe->index > set->max->index) {
-        set->max = pe;
-        do_page_entry_lock(pe);
-        return false;
-    }
-    /*
-     * Try to acquire out-of-order lock; if busy, return busy so that we acquire
-     * locks in order.
-     */
-    return page_entry_trylock(pe);
-}
-
-static gint tb_page_addr_cmp(gconstpointer ap, gconstpointer bp, gpointer udata)
-{
-    tb_page_addr_t a = *(const tb_page_addr_t *)ap;
-    tb_page_addr_t b = *(const tb_page_addr_t *)bp;
-
-    if (a == b) {
-        return 0;
-    } else if (a < b) {
-        return -1;
-    }
-    return 1;
-}
-
-/*
- * Lock a range of pages ([@start,@end[) as well as the pages of all
- * intersecting TBs.
- * Locking order: acquire locks in ascending order of page index.
- */
-struct page_collection *
-page_collection_lock(tb_page_addr_t start, tb_page_addr_t end)
-{
-    struct page_collection *set = g_malloc(sizeof(*set));
-    tb_page_addr_t index;
-    PageDesc *pd;
-
-    start >>= TARGET_PAGE_BITS;
-    end >>= TARGET_PAGE_BITS;
-    g_assert(start <= end);
-
-    set->tree = g_tree_new_full(tb_page_addr_cmp, NULL, NULL,
-                                page_entry_destroy);
-    set->max = NULL;
-    assert_no_pages_locked();
-
- retry:
-    g_tree_foreach(set->tree, page_entry_lock, NULL);
-
-    for (index = start; index <= end; index++) {
-        TranslationBlock *tb;
-        PageForEachNext n;
-
-        pd = page_find(index);
-        if (pd == NULL) {
-            continue;
-        }
-        if (page_trylock_add(set, index << TARGET_PAGE_BITS)) {
-            g_tree_foreach(set->tree, page_entry_unlock, NULL);
-            goto retry;
-        }
-        assert_page_locked(pd);
-        PAGE_FOR_EACH_TB(unused, unused, pd, tb, n) {
-            if (page_trylock_add(set, tb_page_addr0(tb)) ||
-                (tb_page_addr1(tb) != -1 &&
-                 page_trylock_add(set, tb_page_addr1(tb)))) {
-                /* drop all locks, and reacquire in order */
-                g_tree_foreach(set->tree, page_entry_unlock, NULL);
-                goto retry;
-            }
-        }
-    }
-    return set;
-}
-
-void page_collection_unlock(struct page_collection *set)
-{
-    /* entries are unlocked and freed via page_entry_destroy */
-    g_tree_destroy(set->tree);
-    g_free(set);
-}
-
-#endif /* !CONFIG_USER_ONLY */
-
 /*
  * Isolate the portion of code gen which can setjmp/longjmp.
  * Return the size of the generated code, or negative on error.
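The deadlock-avoidance rule in the patch above is: always acquire page locks in ascending order of page index, and use @max so that a page whose index exceeds every index locked so far can be locked unconditionally; anything else is only trylocked, and a failed trylock drops all locks and retries. The sketch below models just those two pure decisions outside of QEMU; the function names (`page_index_cmp_demo`, `can_lock_in_order`) are illustrative stand-ins, not QEMU API:

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

typedef uint64_t page_index_t;

/* Total order on page indexes, mirroring tb_page_addr_cmp(). */
static int page_index_cmp_demo(page_index_t a, page_index_t b)
{
    if (a == b) {
        return 0;
    }
    return (a < b) ? -1 : 1;
}

/*
 * Mirror of the @max shortcut in page_trylock_add(): if the collection is
 * empty, or the new page's index is strictly greater than the highest index
 * locked so far, locking it cannot invert the ascending-order rule, so a
 * blocking lock is safe.  Otherwise the caller may only trylock, and on
 * failure must release everything and reacquire in order.
 */
static bool can_lock_in_order(bool have_max, page_index_t max_index,
                              page_index_t new_index)
{
    return !have_max || new_index > max_index;
}
```

Under this model, two threads that each need pages {5, 9} both end up acquiring 5 before 9, so neither can hold 9 while waiting on 5, which is exactly why the ordering rule prevents deadlock.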
--
2.34.1

From: Richard Henderson
To: qemu-devel@nongnu.org
Cc: peter.maydell@linaro.org, Philippe Mathieu-Daudé, Alex Bennée
Subject: [PULL 09/13] accel/tcg: Restrict cpu_io_recompile() to system emulation
Date: Fri, 16 Dec 2022 10:53:01 -0800
Message-Id: <20221216185305.3429913-10-richard.henderson@linaro.org>

From: Philippe Mathieu-Daudé

Missed in commit 6526919224 ("accel/tcg:
Restrict cpu_io_recompile() from other accelerators").

Signed-off-by: Philippe Mathieu-Daudé
Reviewed-by: Alex Bennée
Message-Id: <20221209093649.43738-2-philmd@linaro.org>
Signed-off-by: Richard Henderson
---
 accel/tcg/internal.h | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/accel/tcg/internal.h b/accel/tcg/internal.h
index e1429a53ac..35419f3fe1 100644
--- a/accel/tcg/internal.h
+++ b/accel/tcg/internal.h
@@ -43,12 +43,12 @@ void tb_invalidate_phys_page_fast(struct page_collection *pages,
 struct page_collection *page_collection_lock(tb_page_addr_t start,
                                              tb_page_addr_t end);
 void page_collection_unlock(struct page_collection *set);
+G_NORETURN void cpu_io_recompile(CPUState *cpu, uintptr_t retaddr);
 #endif /* CONFIG_SOFTMMU */
 
 TranslationBlock *tb_gen_code(CPUState *cpu, target_ulong pc,
                               target_ulong cs_base, uint32_t flags,
                               int cflags);
-G_NORETURN void cpu_io_recompile(CPUState *cpu, uintptr_t retaddr);
 void page_init(void);
 void tb_htable_init(void);
 void tb_reset_jump(TranslationBlock *tb, int n);
--
2.34.1
From: Richard Henderson
To: qemu-devel@nongnu.org
Cc: peter.maydell@linaro.org, Philippe Mathieu-Daudé, Alex Bennée
Subject: [PULL 10/13] accel/tcg: Remove trace events from trace-root.h
Date: Fri, 16 Dec 2022 10:53:02 -0800
Message-Id: <20221216185305.3429913-11-richard.henderson@linaro.org>

From: Philippe Mathieu-Daudé

Commit d9bb58e510 ("tcg: move tcg related files into accel/tcg/
subdirectory") introduced accel/tcg/trace-events, so we don't
need to use the root trace-events anymore.
Signed-off-by: Philippe Mathieu-Daudé
Reviewed-by: Alex Bennée
Message-Id: <20221209093649.43738-3-philmd@linaro.org>
Signed-off-by: Richard Henderson
---
 accel/tcg/cputlb.c     | 2 +-
 accel/tcg/trace-events | 4 ++++
 trace-events           | 4 ----
 3 files changed, 5 insertions(+), 5 deletions(-)

diff --git a/accel/tcg/cputlb.c b/accel/tcg/cputlb.c
index 6f1c00682b..ac459478f4 100644
--- a/accel/tcg/cputlb.c
+++ b/accel/tcg/cputlb.c
@@ -33,7 +33,7 @@
 #include "qemu/atomic.h"
 #include "qemu/atomic128.h"
 #include "exec/translate-all.h"
-#include "trace/trace-root.h"
+#include "trace.h"
 #include "tb-hash.h"
 #include "internal.h"
 #ifdef CONFIG_PLUGIN
diff --git a/accel/tcg/trace-events b/accel/tcg/trace-events
index 59eab96f26..4e9b450520 100644
--- a/accel/tcg/trace-events
+++ b/accel/tcg/trace-events
@@ -6,5 +6,9 @@ exec_tb(void *tb, uintptr_t pc) "tb:%p pc=0x%"PRIxPTR
 exec_tb_nocache(void *tb, uintptr_t pc) "tb:%p pc=0x%"PRIxPTR
 exec_tb_exit(void *last_tb, unsigned int flags) "tb:%p flags=0x%x"
 
+# cputlb.c
+memory_notdirty_write_access(uint64_t vaddr, uint64_t ram_addr, unsigned size) "0x%" PRIx64 " ram_addr 0x%" PRIx64 " size %u"
+memory_notdirty_set_dirty(uint64_t vaddr) "0x%" PRIx64
+
 # translate-all.c
 translate_block(void *tb, uintptr_t pc, const void *tb_code) "tb:%p, pc:0x%"PRIxPTR", tb_code:%p"
diff --git a/trace-events b/trace-events
index 035f3d570d..b6b84b175e 100644
--- a/trace-events
+++ b/trace-events
@@ -42,10 +42,6 @@ find_ram_offset(uint64_t size, uint64_t offset) "size: 0x%" PRIx64 " @ 0x%" PRIx
 find_ram_offset_loop(uint64_t size, uint64_t candidate, uint64_t offset, uint64_t next, uint64_t mingap) "trying size: 0x%" PRIx64 " @ 0x%" PRIx64 ", offset: 0x%" PRIx64" next: 0x%" PRIx64 " mingap: 0x%" PRIx64
 ram_block_discard_range(const char *rbname, void *hva, size_t length, bool need_madvise, bool need_fallocate, int ret) "%s@%p + 0x%zx: madvise: %d fallocate: %d ret: %d"
 
-# accel/tcg/cputlb.c
-memory_notdirty_write_access(uint64_t vaddr, uint64_t ram_addr, unsigned s= ize) "0x%" PRIx64 " ram_addr 0x%" PRIx64 " size %u" -memory_notdirty_set_dirty(uint64_t vaddr) "0x%" PRIx64 - # job.c job_state_transition(void *job, int ret, const char *legal, const char *s= 0, const char *s1) "job %p (ret: %d) attempting %s transition (%s-->%s)" job_apply_verb(void *job, const char *state, const char *verb, const char = *legal) "job %p in state %s; applying verb %s (%s)" --=20 2.34.1 From nobody Sun May 19 12:45:48 2024 Delivered-To: importer@patchew.org Authentication-Results: mx.zohomail.com; dkim=fail; spf=pass (zohomail.com: domain of gnu.org designates 209.51.188.17 as permitted sender) smtp.mailfrom=qemu-devel-bounces+importer=patchew.org@nongnu.org; dmarc=fail(p=none dis=none) header.from=linaro.org Return-Path: Received: from lists.gnu.org (lists.gnu.org [209.51.188.17]) by mx.zohomail.com with SMTPS id 1671218080712192.81178787541694; Fri, 16 Dec 2022 11:14:40 -0800 (PST) Received: from localhost ([::1] helo=lists1p.gnu.org) by lists.gnu.org with esmtp (Exim 4.90_1) (envelope-from ) id 1p6Fq3-0000cu-Po; Fri, 16 Dec 2022 13:53:43 -0500 Received: from eggs.gnu.org ([2001:470:142:3::10]) by lists.gnu.org with esmtps (TLS1.2:ECDHE_RSA_AES_256_GCM_SHA384:256) (Exim 4.90_1) (envelope-from ) id 1p6Fq0-0000bC-Mx for qemu-devel@nongnu.org; Fri, 16 Dec 2022 13:53:40 -0500 Received: from mail-pj1-x102f.google.com ([2607:f8b0:4864:20::102f]) by eggs.gnu.org with esmtps (TLS1.2:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.90_1) (envelope-from ) id 1p6Fpy-0003Hg-GW for qemu-devel@nongnu.org; Fri, 16 Dec 2022 13:53:39 -0500 Received: by mail-pj1-x102f.google.com with SMTP id js9so3389789pjb.2 for ; Fri, 16 Dec 2022 10:53:38 -0800 (PST) Received: from stoup.. 
From: Richard Henderson
To: qemu-devel@nongnu.org
Cc: peter.maydell@linaro.org, Philippe Mathieu-Daudé, Alex Bennée
Subject: [PULL 11/13] accel/tcg: Rename tb_invalidate_phys_page_fast{,__locked}()
Date: Fri, 16 Dec 2022 10:53:03 -0800
Message-Id: <20221216185305.3429913-12-richard.henderson@linaro.org>
In-Reply-To: <20221216185305.3429913-1-richard.henderson@linaro.org>
References: <20221216185305.3429913-1-richard.henderson@linaro.org>

From: Philippe Mathieu-Daudé

Emphasize this function is called with pages locked.

Signed-off-by: Philippe Mathieu-Daudé
Reviewed-by: Alex Bennée
Message-Id: <20221209093649.43738-4-philmd@linaro.org>
[rth: Use "__locked" suffix, to match other instances.]
Signed-off-by: Richard Henderson
---
 accel/tcg/internal.h | 6 +++---
 accel/tcg/cputlb.c   | 2 +-
 accel/tcg/tb-maint.c | 6 +++---
 3 files changed, 7 insertions(+), 7 deletions(-)

diff --git a/accel/tcg/internal.h b/accel/tcg/internal.h
index 35419f3fe1..d10ab69ed0 100644
--- a/accel/tcg/internal.h
+++ b/accel/tcg/internal.h
@@ -37,9 +37,9 @@ void page_table_config_init(void);
 
 #ifdef CONFIG_SOFTMMU
 struct page_collection;
-void tb_invalidate_phys_page_fast(struct page_collection *pages,
-                                  tb_page_addr_t start, int len,
-                                  uintptr_t retaddr);
+void tb_invalidate_phys_page_fast__locked(struct page_collection *pages,
+                                          tb_page_addr_t start, int len,
+                                          uintptr_t retaddr);
 struct page_collection *page_collection_lock(tb_page_addr_t start,
                                              tb_page_addr_t end);
 void page_collection_unlock(struct page_collection *set);
diff --git a/accel/tcg/cputlb.c b/accel/tcg/cputlb.c
index ac459478f4..f7963d3af8 100644
--- a/accel/tcg/cputlb.c
+++ b/accel/tcg/cputlb.c
@@ -1510,7 +1510,7 @@ static void notdirty_write(CPUState *cpu, vaddr mem_vaddr, unsigned size,
     if (!cpu_physical_memory_get_dirty_flag(ram_addr, DIRTY_MEMORY_CODE)) {
         struct page_collection *pages
             = page_collection_lock(ram_addr, ram_addr + size);
-        tb_invalidate_phys_page_fast(pages, ram_addr, size, retaddr);
+        tb_invalidate_phys_page_fast__locked(pages, ram_addr, size, retaddr);
         page_collection_unlock(pages);
     }
 
diff --git a/accel/tcg/tb-maint.c b/accel/tcg/tb-maint.c
index 1676d359f2..8edfd910c4 100644
--- a/accel/tcg/tb-maint.c
+++ b/accel/tcg/tb-maint.c
@@ -1190,9 +1190,9 @@ void tb_invalidate_phys_range(tb_page_addr_t start, tb_page_addr_t end)
  *
  * Call with all @pages in the range [@start, @start + len[ locked.
  */
-void tb_invalidate_phys_page_fast(struct page_collection *pages,
-                                  tb_page_addr_t start, int len,
-                                  uintptr_t retaddr)
+void tb_invalidate_phys_page_fast__locked(struct page_collection *pages,
+                                          tb_page_addr_t start, int len,
+                                          uintptr_t retaddr)
 {
     PageDesc *p;
 
-- 
2.34.1

From nobody Sun May 19 12:45:48 2024
From: Richard Henderson
To: qemu-devel@nongnu.org
Cc: peter.maydell@linaro.org, Philippe Mathieu-Daudé, Alex Bennée
Subject: [PULL 12/13] accel/tcg: Factor tb_invalidate_phys_range_fast() out
Date: Fri, 16 Dec 2022 10:53:04 -0800
Message-Id: <20221216185305.3429913-13-richard.henderson@linaro.org>
In-Reply-To: <20221216185305.3429913-1-richard.henderson@linaro.org>
References: <20221216185305.3429913-1-richard.henderson@linaro.org>
From: Philippe Mathieu-Daudé

Signed-off-by: Philippe Mathieu-Daudé
Reviewed-by: Alex Bennée
Message-Id: <20221209093649.43738-5-philmd@linaro.org>
Signed-off-by: Richard Henderson
---
 accel/tcg/internal.h |  3 +++
 accel/tcg/cputlb.c   |  5 +----
 accel/tcg/tb-maint.c | 21 +++++++++++++++++----
 3 files changed, 21 insertions(+), 8 deletions(-)

diff --git a/accel/tcg/internal.h b/accel/tcg/internal.h
index d10ab69ed0..8f8c44d06b 100644
--- a/accel/tcg/internal.h
+++ b/accel/tcg/internal.h
@@ -42,6 +42,9 @@ void tb_invalidate_phys_page_fast__locked(struct page_collection *pages,
                                           uintptr_t retaddr);
 struct page_collection *page_collection_lock(tb_page_addr_t start,
                                              tb_page_addr_t end);
+void tb_invalidate_phys_range_fast(ram_addr_t ram_addr,
+                                   unsigned size,
+                                   uintptr_t retaddr);
 void page_collection_unlock(struct page_collection *set);
 G_NORETURN void cpu_io_recompile(CPUState *cpu, uintptr_t retaddr);
 #endif /* CONFIG_SOFTMMU */
diff --git a/accel/tcg/cputlb.c b/accel/tcg/cputlb.c
index f7963d3af8..03674d598f 100644
--- a/accel/tcg/cputlb.c
+++ b/accel/tcg/cputlb.c
@@ -1508,10 +1508,7 @@ static void notdirty_write(CPUState *cpu, vaddr mem_vaddr, unsigned size,
     trace_memory_notdirty_write_access(mem_vaddr, ram_addr, size);
 
     if (!cpu_physical_memory_get_dirty_flag(ram_addr, DIRTY_MEMORY_CODE)) {
-        struct page_collection *pages
-            = page_collection_lock(ram_addr, ram_addr + size);
-        tb_invalidate_phys_page_fast__locked(pages, ram_addr, size, retaddr);
-        page_collection_unlock(pages);
+        tb_invalidate_phys_range_fast(ram_addr, size, retaddr);
     }
 
diff --git a/accel/tcg/tb-maint.c b/accel/tcg/tb-maint.c
index 8edfd910c4..d557013f00 100644
--- a/accel/tcg/tb-maint.c
+++ b/accel/tcg/tb-maint.c
@@ -1184,10 +1184,6 @@ void tb_invalidate_phys_range(tb_page_addr_t start, tb_page_addr_t end)
 }
 
 /*
- * len must be <= 8 and start must be a multiple of len.
- * Called via softmmu_template.h when code areas are written to with
- * iothread mutex not held.
- *
  * Call with all @pages in the range [@start, @start + len[ locked.
  */
 void tb_invalidate_phys_page_fast__locked(struct page_collection *pages,
@@ -1205,4 +1201,21 @@ void tb_invalidate_phys_page_fast__locked(struct page_collection *pages,
     tb_invalidate_phys_page_range__locked(pages, p, start, start + len,
                                           retaddr);
 }
+
+/*
+ * len must be <= 8 and start must be a multiple of len.
+ * Called via softmmu_template.h when code areas are written to with
+ * iothread mutex not held.
+ */
+void tb_invalidate_phys_range_fast(ram_addr_t ram_addr,
+                                   unsigned size,
+                                   uintptr_t retaddr)
+{
+    struct page_collection *pages;
+
+    pages = page_collection_lock(ram_addr, ram_addr + size);
+    tb_invalidate_phys_page_fast__locked(pages, ram_addr, size, retaddr);
+    page_collection_unlock(pages);
+}
+
 #endif /* CONFIG_USER_ONLY */
-- 
2.34.1

From nobody Sun May 19 12:45:48 2024
From: Richard Henderson
To: qemu-devel@nongnu.org
Cc: peter.maydell@linaro.org, Philippe Mathieu-Daudé, Alex Bennée
Subject: [PULL 13/13] accel/tcg: Restrict page_collection structure to system TB maintenance
Date: Fri, 16 Dec 2022 10:53:05 -0800
Message-Id: <20221216185305.3429913-14-richard.henderson@linaro.org>
In-Reply-To: <20221216185305.3429913-1-richard.henderson@linaro.org>
References: <20221216185305.3429913-1-richard.henderson@linaro.org>

From: Philippe Mathieu-Daudé

Only the system emulation part of TB maintenance uses the
page_collection structure. Restrict its declaration (and the functions
requiring it) to tb-maint.c.

Convert the 'len' argument of tb_invalidate_phys_page_fast__locked()
from signed to unsigned.
Signed-off-by: Philippe Mathieu-Daudé
Reviewed-by: Alex Bennée
Message-Id: <20221209093649.43738-6-philmd@linaro.org>
Signed-off-by: Richard Henderson
---
 accel/tcg/internal.h |  7 -------
 accel/tcg/tb-maint.c | 15 +++++++--------
 2 files changed, 7 insertions(+), 15 deletions(-)

diff --git a/accel/tcg/internal.h b/accel/tcg/internal.h
index 8f8c44d06b..6edff16fb0 100644
--- a/accel/tcg/internal.h
+++ b/accel/tcg/internal.h
@@ -36,16 +36,9 @@ void page_table_config_init(void);
 #endif
 
 #ifdef CONFIG_SOFTMMU
-struct page_collection;
-void tb_invalidate_phys_page_fast__locked(struct page_collection *pages,
-                                          tb_page_addr_t start, int len,
-                                          uintptr_t retaddr);
-struct page_collection *page_collection_lock(tb_page_addr_t start,
-                                             tb_page_addr_t end);
 void tb_invalidate_phys_range_fast(ram_addr_t ram_addr,
                                    unsigned size,
                                    uintptr_t retaddr);
-void page_collection_unlock(struct page_collection *set);
 G_NORETURN void cpu_io_recompile(CPUState *cpu, uintptr_t retaddr);
 #endif /* CONFIG_SOFTMMU */
 
diff --git a/accel/tcg/tb-maint.c b/accel/tcg/tb-maint.c
index d557013f00..1b8e860647 100644
--- a/accel/tcg/tb-maint.c
+++ b/accel/tcg/tb-maint.c
@@ -513,8 +513,8 @@ static gint tb_page_addr_cmp(gconstpointer ap, gconstpointer bp, gpointer udata)
  * intersecting TBs.
  * Locking order: acquire locks in ascending order of page index.
  */
-struct page_collection *
-page_collection_lock(tb_page_addr_t start, tb_page_addr_t end)
+static struct page_collection *page_collection_lock(tb_page_addr_t start,
+                                                    tb_page_addr_t end)
 {
     struct page_collection *set = g_malloc(sizeof(*set));
     tb_page_addr_t index;
@@ -558,7 +558,7 @@ page_collection_lock(tb_page_addr_t start, tb_page_addr_t end)
     return set;
 }
 
-void page_collection_unlock(struct page_collection *set)
+static void page_collection_unlock(struct page_collection *set)
 {
     /* entries are unlocked and freed via page_entry_destroy */
     g_tree_destroy(set->tree);
@@ -1186,9 +1186,9 @@ void tb_invalidate_phys_range(tb_page_addr_t start, tb_page_addr_t end)
 /*
  * Call with all @pages in the range [@start, @start + len[ locked.
  */
-void tb_invalidate_phys_page_fast__locked(struct page_collection *pages,
-                                          tb_page_addr_t start, int len,
-                                          uintptr_t retaddr)
+static void tb_invalidate_phys_page_fast__locked(struct page_collection *pages,
+                                                 tb_page_addr_t start,
+                                                 unsigned len, uintptr_t ra)
 {
     PageDesc *p;
 
@@ -1198,8 +1198,7 @@ void tb_invalidate_phys_page_fast__locked(struct page_collection *pages,
     }
 
     assert_page_locked(p);
-    tb_invalidate_phys_page_range__locked(pages, p, start, start + len,
-                                          retaddr);
+    tb_invalidate_phys_page_range__locked(pages, p, start, start + len, ra);
 }
 
 /*
-- 
2.34.1