From nobody Tue Oct 28 02:12:20 2025
From: "Emilio G. Cota"
To: qemu-devel@nongnu.org
Cc: Richard Henderson, Alex Bennée, Paolo Bonzini
Date: Thu, 5 Apr 2018 22:12:52 -0400
Message-Id: <1522980788-1252-2-git-send-email-cota@braap.org>
In-Reply-To: <1522980788-1252-1-git-send-email-cota@braap.org>
References: <1522980788-1252-1-git-send-email-cota@braap.org>
Subject: [Qemu-devel] [PATCH v2 01/17] qht: require a default comparison function

qht_lookup now uses the default cmp function. qht_lookup_custom is defined
to retain the old behaviour, that is, a cmp function is explicitly provided.

qht_insert will gain use of the default cmp in the next patch.

Note that we move qht_lookup_custom's @func to be the last argument,
which makes the new qht_lookup as simple as possible.
Instead of this (i.e. keeping @func 2nd):

0000000000010750 <qht_lookup>:
   10750:       89 d1                   mov    %edx,%ecx
   10752:       48 89 f2                mov    %rsi,%rdx
   10755:       48 8b 77 08             mov    0x8(%rdi),%rsi
   10759:       e9 22 ff ff ff          jmpq   10680 <qht_lookup_custom>
   1075e:       66 90                   xchg   %ax,%ax

We get:

0000000000010740 <qht_lookup>:
   10740:       48 8b 4f 08             mov    0x8(%rdi),%rcx
   10744:       e9 37 ff ff ff          jmpq   10680 <qht_lookup_custom>
   10749:       0f 1f 80 00 00 00 00    nopl   0x0(%rax)

Reviewed-by: Richard Henderson
Reviewed-by: Alex Bennée
Signed-off-by: Emilio G. Cota
---
 include/qemu/qht.h        | 25 ++++++++++++++++++++-----
 accel/tcg/cpu-exec.c      |  4 ++--
 accel/tcg/translate-all.c | 16 +++++++++++++++-
 tests/qht-bench.c         | 14 +++++++-------
 tests/test-qht.c          | 15 ++++++++++-----
 util/qht.c                | 14 +++++++++++---
 6 files changed, 65 insertions(+), 23 deletions(-)

diff --git a/include/qemu/qht.h b/include/qemu/qht.h
index 531aa95..5f03a0f 100644
--- a/include/qemu/qht.h
+++ b/include/qemu/qht.h
@@ -11,8 +11,11 @@
 #include "qemu/thread.h"
 #include "qemu/qdist.h"
 
+typedef bool (*qht_cmp_func_t)(const void *a, const void *b);
+
 struct qht {
     struct qht_map *map;
+    qht_cmp_func_t cmp;
     QemuMutex lock; /* serializes setters of ht->map */
     unsigned int mode;
 };
@@ -47,10 +50,12 @@ typedef void (*qht_iter_func_t)(struct qht *ht, void *p, uint32_t h, void *up);
 /**
  * qht_init - Initialize a QHT
  * @ht: QHT to be initialized
+ * @cmp: default comparison function. Cannot be NULL.
  * @n_elems: number of entries the hash table should be optimized for.
  * @mode: bitmask with OR'ed QHT_MODE_*
  */
-void qht_init(struct qht *ht, size_t n_elems, unsigned int mode);
+void qht_init(struct qht *ht, qht_cmp_func_t cmp, size_t n_elems,
+              unsigned int mode);
 
 /**
  * qht_destroy - destroy a previously initialized QHT
@@ -78,11 +83,11 @@ void qht_destroy(struct qht *ht);
 bool qht_insert(struct qht *ht, void *p, uint32_t hash);
 
 /**
- * qht_lookup - Look up a pointer in a QHT
+ * qht_lookup_custom - Look up a pointer using a custom comparison function.
  * @ht: QHT to be looked up
- * @func: function to compare existing pointers against @userp
  * @userp: pointer to pass to @func
  * @hash: hash of the pointer to be looked up
+ * @func: function to compare existing pointers against @userp
  *
  * Needs to be called under an RCU read-critical section.
 *
@@ -94,8 +99,18 @@ bool qht_insert(struct qht *ht, void *p, uint32_t hash);
  * Returns the corresponding pointer when a match is found.
  * Returns NULL otherwise.
  */
-void *qht_lookup(struct qht *ht, qht_lookup_func_t func, const void *userp,
-                 uint32_t hash);
+void *qht_lookup_custom(struct qht *ht, const void *userp, uint32_t hash,
+                        qht_lookup_func_t func);
+
+/**
+ * qht_lookup - Look up a pointer in a QHT
+ * @ht: QHT to be looked up
+ * @userp: pointer to pass to the comparison function
+ * @hash: hash of the pointer to be looked up
+ *
+ * Calls qht_lookup_custom() using @ht's default comparison function.
+ */
+void *qht_lookup(struct qht *ht, const void *userp, uint32_t hash);
 
 /**
  * qht_remove - remove a pointer from the hash table
diff --git a/accel/tcg/cpu-exec.c b/accel/tcg/cpu-exec.c
index 9cc6972..dabf292 100644
--- a/accel/tcg/cpu-exec.c
+++ b/accel/tcg/cpu-exec.c
@@ -293,7 +293,7 @@ struct tb_desc {
     uint32_t trace_vcpu_dstate;
 };
 
-static bool tb_cmp(const void *p, const void *d)
+static bool tb_lookup_cmp(const void *p, const void *d)
 {
     const TranslationBlock *tb = p;
     const struct tb_desc *desc = d;
@@ -338,7 +338,7 @@ TranslationBlock *tb_htable_lookup(CPUState *cpu, target_ulong pc,
     phys_pc = get_page_addr_code(desc.env, pc);
     desc.phys_page1 = phys_pc & TARGET_PAGE_MASK;
     h = tb_hash_func(phys_pc, pc, flags, cf_mask, *cpu->trace_dstate);
-    return qht_lookup(&tb_ctx.htable, tb_cmp, &desc, h);
+    return qht_lookup_custom(&tb_ctx.htable, &desc, h, tb_lookup_cmp);
 }
 
 void tb_set_jmp_target(TranslationBlock *tb, int n, uintptr_t addr)
diff --git a/accel/tcg/translate-all.c b/accel/tcg/translate-all.c
index d419060..7af5f36 100644
--- a/accel/tcg/translate-all.c
+++ b/accel/tcg/translate-all.c
@@ -785,11 +785,25 @@ static inline void code_gen_alloc(size_t tb_size)
     qemu_mutex_init(&tb_ctx.tb_lock);
 }
 
+static bool tb_cmp(const void *ap, const void *bp)
+{
+    const TranslationBlock *a = ap;
+    const TranslationBlock *b = bp;
+
+    return a->pc == b->pc &&
+        a->cs_base == b->cs_base &&
+        a->flags == b->flags &&
+        (tb_cflags(a) & CF_HASH_MASK) == (tb_cflags(b) & CF_HASH_MASK) &&
+        a->trace_vcpu_dstate == b->trace_vcpu_dstate &&
+        a->page_addr[0] == b->page_addr[0] &&
+        a->page_addr[1] == b->page_addr[1];
+}
+
 static void tb_htable_init(void)
 {
     unsigned int mode = QHT_MODE_AUTO_RESIZE;
 
-    qht_init(&tb_ctx.htable, CODE_GEN_HTABLE_SIZE, mode);
+    qht_init(&tb_ctx.htable, tb_cmp, CODE_GEN_HTABLE_SIZE, mode);
 }
 
 /* Must be called before using the QEMU cpus. 'tb_size' is the size
diff --git a/tests/qht-bench.c b/tests/qht-bench.c
index 4cabdfd..c94ac25 100644
--- a/tests/qht-bench.c
+++ b/tests/qht-bench.c
@@ -93,10 +93,10 @@ static void usage_complete(int argc, char *argv[])
     exit(-1);
 }
 
-static bool is_equal(const void *obj, const void *userp)
+static bool is_equal(const void *ap, const void *bp)
 {
-    const long *a = obj;
-    const long *b = userp;
+    const long *a = ap;
+    const long *b = bp;
 
     return *a == *b;
 }
@@ -150,7 +150,7 @@ static void do_rw(struct thread_info *info)
 
         p = &keys[info->r & (lookup_range - 1)];
         hash = h(*p);
-        read = qht_lookup(&ht, is_equal, p, hash);
+        read = qht_lookup(&ht, p, hash);
         if (read) {
             stats->rd++;
         } else {
@@ -162,7 +162,7 @@ static void do_rw(struct thread_info *info)
     if (info->write_op) {
         bool written = false;
 
-        if (qht_lookup(&ht, is_equal, p, hash) == NULL) {
+        if (qht_lookup(&ht, p, hash) == NULL) {
             written = qht_insert(&ht, p, hash);
         }
         if (written) {
@@ -173,7 +173,7 @@ static void do_rw(struct thread_info *info)
     } else {
         bool removed = false;
 
-        if (qht_lookup(&ht, is_equal, p, hash)) {
+        if (qht_lookup(&ht, p, hash)) {
             removed = qht_remove(&ht, p, hash);
         }
         if (removed) {
@@ -308,7 +308,7 @@ static void htable_init(void)
     }
 
     /* initialize the hash table */
-    qht_init(&ht, qht_n_elems, qht_mode);
+    qht_init(&ht, is_equal, qht_n_elems, qht_mode);
     assert(init_size <= init_range);
 
     pr_params();
diff --git a/tests/test-qht.c b/tests/test-qht.c
index 9b7423a..b069881 100644
--- a/tests/test-qht.c
+++ b/tests/test-qht.c
@@ -13,10 +13,10 @@
 static struct qht ht;
 static int32_t arr[N * 2];
 
-static bool is_equal(const void *obj, const void *userp)
+static bool is_equal(const void *ap, const void *bp)
 {
-    const int32_t *a = obj;
-    const int32_t *b = userp;
+    const int32_t *a = ap;
+    const int32_t *b = bp;
 
     return *a == *b;
 }
@@ -60,7 +60,12 @@ static void check(int a, int b, bool expected)
 
         val = i;
         hash = i;
-        p = qht_lookup(&ht, is_equal, &val, hash);
+        /* test both lookup variants; results should be the same */
+        if (i % 2) {
+            p = qht_lookup(&ht, &val, hash);
+        } else {
+            p = qht_lookup_custom(&ht, &val, hash, is_equal);
+        }
         g_assert_true(!!p == expected);
     }
     rcu_read_unlock();
@@ -102,7 +107,7 @@ static void qht_do_test(unsigned int mode, size_t init_entries)
     /* under KVM we might fetch stats from an uninitialized qht */
     check_n(0);
 
-    qht_init(&ht, 0, mode);
+    qht_init(&ht, is_equal, 0, mode);
 
     check_n(0);
     insert(0, N);
diff --git a/util/qht.c b/util/qht.c
index ff4d2e6..8610ce3 100644
--- a/util/qht.c
+++ b/util/qht.c
@@ -351,11 +351,14 @@ static struct qht_map *qht_map_create(size_t n_buckets)
     return map;
 }
 
-void qht_init(struct qht *ht, size_t n_elems, unsigned int mode)
+void qht_init(struct qht *ht, qht_cmp_func_t cmp, size_t n_elems,
+              unsigned int mode)
 {
     struct qht_map *map;
     size_t n_buckets = qht_elems_to_buckets(n_elems);
 
+    g_assert(cmp);
+    ht->cmp = cmp;
     ht->mode = mode;
     qemu_mutex_init(&ht->lock);
     map = qht_map_create(n_buckets);
@@ -479,8 +482,8 @@ void *qht_lookup__slowpath(struct qht_bucket *b, qht_lookup_func_t func,
     return ret;
 }
 
-void *qht_lookup(struct qht *ht, qht_lookup_func_t func, const void *userp,
-                 uint32_t hash)
+void *qht_lookup_custom(struct qht *ht, const void *userp, uint32_t hash,
+                        qht_lookup_func_t func)
 {
     struct qht_bucket *b;
     struct qht_map *map;
@@ -502,6 +505,11 @@ void *qht_lookup(struct qht *ht, qht_lookup_func_t func,
     return qht_lookup__slowpath(b, func, userp, hash);
 }
 
+void *qht_lookup(struct qht *ht, const void *userp, uint32_t hash)
+{
+    return qht_lookup_custom(ht, userp, hash, ht->cmp);
+}
+
 /* call with head->lock held */
 static bool qht_insert__locked(struct qht *ht, struct qht_map *map,
                                struct qht_bucket *head, void *p, uint32_t hash,
-- 
2.7.4
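To see the resulting API in one place, here is a minimal usage sketch mirroring tests/test-qht.c after this patch (the names int32_cmp, val and example are mine; this assumes a build inside the QEMU tree, where glib and RCU are available):

```c
#include "qemu/osdep.h"
#include "qemu/rcu.h"
#include "qemu/qht.h"

static bool int32_cmp(const void *a, const void *b)
{
    return *(const int32_t *)a == *(const int32_t *)b;
}

static struct qht ht;
static int32_t val = 42;

static void example(void)
{
    uint32_t hash = (uint32_t)val;  /* toy hash; real callers use a hash function */
    void *p, *q;

    /* the default comparison function is fixed at init time and cannot be NULL */
    qht_init(&ht, int32_cmp, 0, QHT_MODE_AUTO_RESIZE);
    qht_insert(&ht, &val, hash);    /* still the 3-argument form at this point */

    rcu_read_lock();
    p = qht_lookup(&ht, &val, hash);                    /* uses ht->cmp */
    q = qht_lookup_custom(&ht, &val, hash, int32_cmp);  /* explicit cmp */
    rcu_read_unlock();

    g_assert(p == &val && q == p);
}
```

Note how the common case (qht_lookup) no longer threads a function pointer through every call site; only callers with a non-default key type, such as tb_htable_lookup above, pay for the extra argument.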
From nobody Tue Oct 28 02:12:20 2025
From: "Emilio G. Cota"
To: qemu-devel@nongnu.org
Cc: Richard Henderson, Alex Bennée, Paolo Bonzini
Date: Thu, 5 Apr 2018 22:12:53 -0400
Message-Id: <1522980788-1252-3-git-send-email-cota@braap.org>
In-Reply-To: <1522980788-1252-1-git-send-email-cota@braap.org>
References: <1522980788-1252-1-git-send-email-cota@braap.org>
Subject: [Qemu-devel] [PATCH v2 02/17] qht: return existing entry when qht_insert fails

The meaning of "existing" is now changed to "matches in hash and
ht->cmp result". This is saner than just checking the pointer value.

Suggested-by: Richard Henderson
Reviewed-by: Richard Henderson
Signed-off-by: Emilio G. Cota
---
 include/qemu/qht.h        |  7 +++++--
 accel/tcg/translate-all.c |  2 +-
 tests/qht-bench.c         |  4 ++--
 tests/test-qht.c          |  8 +++++++-
 util/qht.c                | 27 +++++++++++++++++----------
 5 files changed, 32 insertions(+), 16 deletions(-)

diff --git a/include/qemu/qht.h b/include/qemu/qht.h
index 5f03a0f..1fb9116 100644
--- a/include/qemu/qht.h
+++ b/include/qemu/qht.h
@@ -70,6 +70,7 @@ void qht_destroy(struct qht *ht);
  * @ht: QHT to insert to
  * @p: pointer to be inserted
  * @hash: hash corresponding to @p
+ * @existing: address where the pointer to an existing entry can be copied to
  *
  * Attempting to insert a NULL @p is a bug.
  * Inserting the same pointer @p with different @hash values is a bug.
@@ -78,9 +79,11 @@ void qht_destroy(struct qht *ht);
  * inserted into the hash table.
  *
  * Returns true on success.
- * Returns false if the @p-@hash pair already exists in the hash table.
+ * Returns false if there is an existing entry in the table that is equivalent
+ * (i.e. ht->cmp matches and the hash is the same) to @p-@h. If @existing
+ * is !NULL, a pointer to this existing entry is copied to it.
  */
-bool qht_insert(struct qht *ht, void *p, uint32_t hash);
+bool qht_insert(struct qht *ht, void *p, uint32_t hash, void **existing);
 
 /**
  * qht_lookup_custom - Look up a pointer using a custom comparison function.
diff --git a/accel/tcg/translate-all.c b/accel/tcg/translate-all.c
index 7af5f36..d16e094 100644
--- a/accel/tcg/translate-all.c
+++ b/accel/tcg/translate-all.c
@@ -1245,7 +1245,7 @@ static void tb_link_page(TranslationBlock *tb, tb_page_addr_t phys_pc,
     /* add in the hash table */
     h = tb_hash_func(phys_pc, tb->pc, tb->flags, tb->cflags & CF_HASH_MASK,
                      tb->trace_vcpu_dstate);
-    qht_insert(&tb_ctx.htable, tb, h);
+    qht_insert(&tb_ctx.htable, tb, h, NULL);
 
 #ifdef CONFIG_USER_ONLY
     if (DEBUG_TB_CHECK_GATE) {
diff --git a/tests/qht-bench.c b/tests/qht-bench.c
index c94ac25..f492b3a 100644
--- a/tests/qht-bench.c
+++ b/tests/qht-bench.c
@@ -163,7 +163,7 @@ static void do_rw(struct thread_info *info)
         bool written = false;
 
         if (qht_lookup(&ht, p, hash) == NULL) {
-            written = qht_insert(&ht, p, hash);
+            written = qht_insert(&ht, p, hash, NULL);
         }
         if (written) {
             stats->in++;
@@ -322,7 +322,7 @@ static void htable_init(void)
             r = xorshift64star(r);
             p = &keys[r & (init_range - 1)];
             hash = h(*p);
-            if (qht_insert(&ht, p, hash)) {
+            if (qht_insert(&ht, p, hash, NULL)) {
                 break;
             }
             retries++;
diff --git a/tests/test-qht.c b/tests/test-qht.c
index b069881..dda6a06 100644
--- a/tests/test-qht.c
+++ b/tests/test-qht.c
@@ -27,11 +27,17 @@ static void insert(int a, int b)
 
     for (i = a; i < b; i++) {
         uint32_t hash;
+        void *existing;
+        bool inserted;
 
         arr[i] = i;
         hash = i;
 
-        qht_insert(&ht, &arr[i], hash);
+        inserted = qht_insert(&ht, &arr[i], hash, NULL);
+        g_assert_true(inserted);
+        inserted = qht_insert(&ht, &arr[i], hash, &existing);
+        g_assert_false(inserted);
+        g_assert_true(existing == &arr[i]);
     }
 }
 
diff --git a/util/qht.c b/util/qht.c
index 8610ce3..9d030e7 100644
--- a/util/qht.c
+++ b/util/qht.c
@@ -511,9 +511,9 @@ void *qht_lookup(struct qht *ht, const void *userp, uint32_t hash)
 }
 
 /* call with head->lock held */
-static bool qht_insert__locked(struct qht *ht, struct qht_map *map,
-                               struct qht_bucket *head, void *p, uint32_t hash,
-                               bool *needs_resize)
+static void *qht_insert__locked(struct qht *ht, struct qht_map *map,
+                                struct qht_bucket *head, void *p, uint32_t hash,
+                                bool *needs_resize)
 {
     struct qht_bucket *b = head;
     struct qht_bucket *prev = NULL;
@@ -523,8 +523,9 @@ static bool qht_insert__locked(struct qht *ht, struct qht_map *map,
     do {
         for (i = 0; i < QHT_BUCKET_ENTRIES; i++) {
             if (b->pointers[i]) {
-                if (unlikely(b->pointers[i] == p)) {
-                    return false;
+                if (unlikely(b->hashes[i] == hash &&
+                             ht->cmp(b->pointers[i], p))) {
+                    return b->pointers[i];
                 }
             } else {
                 goto found;
@@ -553,7 +554,7 @@ static bool qht_insert__locked(struct qht *ht, struct qht_map *map,
     atomic_set(&b->hashes[i], hash);
     atomic_set(&b->pointers[i], p);
     seqlock_write_end(&head->sequence);
-    return true;
+    return NULL;
 }
 
 static __attribute__((noinline)) void qht_grow_maybe(struct qht *ht)
@@ -577,25 +578,31 @@ static __attribute__((noinline)) void qht_grow_maybe(struct qht *ht)
     qemu_mutex_unlock(&ht->lock);
 }
 
-bool qht_insert(struct qht *ht, void *p, uint32_t hash)
+bool qht_insert(struct qht *ht, void *p, uint32_t hash, void **existing)
 {
     struct qht_bucket *b;
     struct qht_map *map;
     bool needs_resize = false;
-    bool ret;
+    void *prev;
 
     /* NULL pointers are not supported */
     qht_debug_assert(p);
 
    b = qht_bucket_lock__no_stale(ht, hash, &map);
    prev = qht_insert__locked(ht, map, b, p, hash, &needs_resize);
    qht_bucket_debug__locked(b);
    qemu_spin_unlock(&b->lock);
 
     if (unlikely(needs_resize) && ht->mode & QHT_MODE_AUTO_RESIZE) {
         qht_grow_maybe(ht);
     }
-    return ret;
+    if (likely(prev == NULL)) {
+        return true;
+    }
+    if (existing) {
+        *existing = prev;
+    }
+    return false;
 }
 
 static inline bool qht_entry_is_last(struct qht_bucket *b, int pos)
-- 
2.7.4
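The @existing out-parameter enables an atomic "lookup or insert" without a second traversal: the losing inserter learns the winner's pointer directly. A sketch of that pattern under the new signature (qht_get_or_insert is a hypothetical helper name, not part of the series):

```c
/* Insert @p, or return the equivalent entry that is already in the table.
 * "Equivalent" means same hash and a ht->cmp match, per the new semantics. */
static void *qht_get_or_insert(struct qht *ht, void *p, uint32_t hash)
{
    void *existing;

    if (qht_insert(ht, p, hash, &existing)) {
        return p;        /* we won the race: @p is now in the table */
    }
    return existing;     /* someone else won: reuse their entry */
}
```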
From nobody Tue Oct 28 02:12:20 2025
From: "Emilio G. Cota"
To: qemu-devel@nongnu.org
Cc: Richard Henderson, Alex Bennée, Paolo Bonzini
Date: Thu, 5 Apr 2018 22:12:54 -0400
Message-Id: <1522980788-1252-4-git-send-email-cota@braap.org>
In-Reply-To: <1522980788-1252-1-git-send-email-cota@braap.org>
References: <1522980788-1252-1-git-send-email-cota@braap.org>
Subject: [Qemu-devel] [PATCH v2 03/17] tcg: track TBs with per-region BST's

This paves the way for enabling scalable parallel generation of TCG code.

Instead of tracking TBs with a single binary search tree (BST), use a
BST for each TCG region, protecting it with a lock. This is as scalable
as it gets, since each TCG thread operates on a separate region.

The core of this change is the introduction of struct tcg_region_tree, which
contains a pointer to a GTree and an associated lock to serialize
accesses to it. We then allocate an array of tcg_region_tree's, adding
the appropriate padding to avoid false sharing based on
qemu_dcache_linesize.

Given a tc_ptr, we first find the corresponding region_tree. This
is done by special-casing the first and last regions first, since they
might be of size != region.size; otherwise we just divide the offset
by region.stride. I was worried about this division (several dozen
cycles of latency), but profiling shows that this is not a fast path.
Note that region.stride is not required to be a power of two; it
is only required to be a multiple of the host's page size.

Note that with this design we can also provide consistent snapshots
about all region trees at once; for instance, tcg_tb_foreach
acquires/releases all region_tree locks before/after iterating over them.
For this reason we now drop tb_lock in dump_exec_info().

As an alternative I considered implementing a concurrent BST, but this
can be tricky to get right, offers no consistent snapshots of the BST,
and performance and scalability-wise I don't think it could ever beat
having separate GTrees, given that our workload is insert-mostly (all
concurrent BST designs I've seen focus, understandably, on making
lookups fast, which comes at the expense of convoluted, non-wait-free
insertions/removals).

Reviewed-by: Richard Henderson
Reviewed-by: Alex Bennée
Signed-off-by: Emilio G. Cota
---
 include/exec/exec-all.h   |   1 -
 include/exec/tb-context.h |   1 -
 tcg/tcg.h                 |   6 ++
 accel/tcg/cpu-exec.c      |   2 +-
 accel/tcg/translate-all.c | 101 ++++--------------------
 tcg/tcg.c                 | 191 ++++++++++++++++++++++++++++++++++++++++++++++
 6 files changed, 213 insertions(+), 89 deletions(-)

diff --git a/include/exec/exec-all.h b/include/exec/exec-all.h
index e5afd2e..17e08b3 100644
--- a/include/exec/exec-all.h
+++ b/include/exec/exec-all.h
@@ -401,7 +401,6 @@ static inline uint32_t curr_cflags(void)
            | (use_icount ? CF_USE_ICOUNT : 0);
 }
 
-void tb_remove(TranslationBlock *tb);
 void tb_flush(CPUState *cpu);
 void tb_phys_invalidate(TranslationBlock *tb, tb_page_addr_t page_addr);
 TranslationBlock *tb_htable_lookup(CPUState *cpu, target_ulong pc,
diff --git a/include/exec/tb-context.h b/include/exec/tb-context.h
index 1d41202..d8472c8 100644
--- a/include/exec/tb-context.h
+++ b/include/exec/tb-context.h
@@ -31,7 +31,6 @@ typedef struct TBContext TBContext;
 
 struct TBContext {
 
-    GTree *tb_tree;
     struct qht htable;
     /* any access to the tbs or the page table must use this lock */
     QemuMutex tb_lock;
diff --git a/tcg/tcg.h b/tcg/tcg.h
index 9e2d909..8bf29cc 100644
--- a/tcg/tcg.h
+++ b/tcg/tcg.h
@@ -850,6 +850,12 @@ void tcg_region_reset_all(void);
 size_t tcg_code_size(void);
 size_t tcg_code_capacity(void);
 
+void tcg_tb_insert(TranslationBlock *tb);
+void tcg_tb_remove(TranslationBlock *tb);
+TranslationBlock *tcg_tb_lookup(uintptr_t tc_ptr);
+void tcg_tb_foreach(GTraverseFunc func, gpointer user_data);
+size_t tcg_nb_tbs(void);
+
 /* user-mode: Called with tb_lock held. */
 static inline void *tcg_malloc(int size)
 {
diff --git a/accel/tcg/cpu-exec.c b/accel/tcg/cpu-exec.c
index dabf292..778801a 100644
--- a/accel/tcg/cpu-exec.c
+++ b/accel/tcg/cpu-exec.c
@@ -222,7 +222,7 @@ static void cpu_exec_nocache(CPUState *cpu, int max_cycles,
 
     tb_lock();
     tb_phys_invalidate(tb, -1);
-    tb_remove(tb);
+    tcg_tb_remove(tb);
     tb_unlock();
 }
 #endif
diff --git a/accel/tcg/translate-all.c b/accel/tcg/translate-all.c
index d16e094..449d4de 100644
--- a/accel/tcg/translate-all.c
+++ b/accel/tcg/translate-all.c
@@ -205,8 +205,6 @@ void tb_lock_reset(void)
     }
 }
 
-static TranslationBlock *tb_find_pc(uintptr_t tc_ptr);
-
 void cpu_gen_init(void)
 {
     tcg_context_init(&tcg_init_ctx);
@@ -375,13 +373,13 @@ bool cpu_restore_state(CPUState *cpu, uintptr_t host_pc)
 
     if (check_offset < tcg_init_ctx.code_gen_buffer_size) {
         tb_lock();
-        tb = tb_find_pc(host_pc);
+        tb = tcg_tb_lookup(host_pc);
         if (tb) {
             cpu_restore_state_from_tb(cpu, tb, host_pc);
             if (tb->cflags & CF_NOCACHE) {
                 /* one-shot translation, invalidate it immediately */
                 tb_phys_invalidate(tb, -1);
-                tb_remove(tb);
+                tcg_tb_remove(tb);
             }
             r = true;
         }
@@ -731,48 +729,6 @@ static inline void *alloc_code_gen_buffer(void)
 }
 #endif /* USE_STATIC_CODE_GEN_BUFFER, WIN32, POSIX */
 
-/* compare a pointer @ptr and a tb_tc @s */
-static int ptr_cmp_tb_tc(const void *ptr, const struct tb_tc *s)
-{
-    if (ptr >= s->ptr + s->size) {
-        return 1;
-    } else if (ptr < s->ptr) {
-        return -1;
-    }
-    return 0;
-}
-
-static gint tb_tc_cmp(gconstpointer ap, gconstpointer bp)
-{
-    const struct tb_tc *a = ap;
-    const struct tb_tc *b = bp;
-
-    /*
-     * When both sizes are set, we know this isn't a lookup.
-     * This is the most likely case: every TB must be inserted; lookups
-     * are a lot less frequent.
-     */
-    if (likely(a->size && b->size)) {
-        if (a->ptr > b->ptr) {
-            return 1;
-        } else if (a->ptr < b->ptr) {
-            return -1;
-        }
-        /* a->ptr == b->ptr should happen only on deletions */
-        g_assert(a->size == b->size);
-        return 0;
-    }
-    /*
-     * All lookups have either .size field set to 0.
-     * From the glib sources we see that @ap is always the lookup key. However
-     * the docs provide no guarantee, so we just mark this case as likely.
-     */
-    if (likely(a->size == 0)) {
-        return ptr_cmp_tb_tc(a->ptr, b);
-    }
-    return ptr_cmp_tb_tc(b->ptr, a);
-}
-
 static inline void code_gen_alloc(size_t tb_size)
 {
     tcg_ctx->code_gen_buffer_size = size_code_gen_buffer(tb_size);
@@ -781,7 +737,6 @@ static inline void code_gen_alloc(size_t tb_size)
         fprintf(stderr, "Could not allocate dynamic translator buffer\n");
         exit(1);
     }
-    tb_ctx.tb_tree = g_tree_new(tb_tc_cmp);
     qemu_mutex_init(&tb_ctx.tb_lock);
 }
 
@@ -842,14 +797,6 @@ static TranslationBlock *tb_alloc(target_ulong pc)
     return tb;
 }
 
-/* Called with tb_lock held. */
-void tb_remove(TranslationBlock *tb)
-{
-    assert_tb_locked();
-
-    g_tree_remove(tb_ctx.tb_tree, &tb->tc);
-}
-
 static inline void invalidate_page_bitmap(PageDesc *p)
 {
 #ifdef CONFIG_SOFTMMU
@@ -914,10 +861,10 @@ static void do_tb_flush(CPUState *cpu, run_on_cpu_data tb_flush_count)
     }
 
     if (DEBUG_TB_FLUSH_GATE) {
-        size_t nb_tbs = g_tree_nnodes(tb_ctx.tb_tree);
+        size_t nb_tbs = tcg_nb_tbs();
         size_t host_size = 0;
 
-        g_tree_foreach(tb_ctx.tb_tree, tb_host_size_iter, &host_size);
+        tcg_tb_foreach(tb_host_size_iter, &host_size);
         printf("qemu: flush code_size=%zu nb_tbs=%zu avg_tb_size=%zu\n",
                tcg_code_size(), nb_tbs, nb_tbs > 0 ? host_size / nb_tbs : 0);
     }
@@ -926,10 +873,6 @@ static void do_tb_flush(CPUState *cpu, run_on_cpu_data tb_flush_count)
         cpu_tb_jmp_cache_clear(cpu);
     }
 
-    /* Increment the refcount first so that destroy acts as a reset */
-    g_tree_ref(tb_ctx.tb_tree);
-    g_tree_destroy(tb_ctx.tb_tree);
-
     qht_reset_size(&tb_ctx.htable, CODE_GEN_HTABLE_SIZE);
     page_flush_tb();
 
@@ -1409,7 +1352,7 @@ TranslationBlock *tb_gen_code(CPUState *cpu,
      * through the physical hash table and physical page list.
      */
     tb_link_page(tb, phys_pc, phys_page2);
-    g_tree_insert(tb_ctx.tb_tree, &tb->tc, tb);
+    tcg_tb_insert(tb);
     return tb;
 }
 
@@ -1513,7 +1456,7 @@ void tb_invalidate_phys_page_range(tb_page_addr_t start, tb_page_addr_t end,
                 current_tb = NULL;
                 if (cpu->mem_io_pc) {
                     /* now we have a real cpu fault */
-                    current_tb = tb_find_pc(cpu->mem_io_pc);
+                    current_tb = tcg_tb_lookup(cpu->mem_io_pc);
                 }
             }
             if (current_tb == tb &&
@@ -1629,7 +1572,7 @@ static bool tb_invalidate_phys_page(tb_page_addr_t addr, uintptr_t pc)
     tb = p->first_tb;
 #ifdef TARGET_HAS_PRECISE_SMC
     if (tb && pc != 0) {
-        current_tb = tb_find_pc(pc);
+        current_tb = tcg_tb_lookup(pc);
     }
     if (cpu != NULL) {
         env = cpu->env_ptr;
@@ -1672,18 +1615,6 @@ static bool tb_invalidate_phys_page(tb_page_addr_t addr, uintptr_t pc)
 }
 #endif
 
-/*
- * Find the TB 'tb' such that
- * tb->tc.ptr <= tc_ptr < tb->tc.ptr + tb->tc.size
- * Return NULL if not found.
- */
-static TranslationBlock *tb_find_pc(uintptr_t tc_ptr)
-{
-    struct tb_tc s = { .ptr = (void *)tc_ptr };
-
-    return g_tree_lookup(tb_ctx.tb_tree, &s);
-}
-
 #if !defined(CONFIG_USER_ONLY)
 void tb_invalidate_phys_addr(AddressSpace *as, hwaddr addr)
 {
@@ -1711,7 +1642,7 @@ void tb_check_watchpoint(CPUState *cpu)
 {
     TranslationBlock *tb;
 
-    tb = tb_find_pc(cpu->mem_io_pc);
+    tb = tcg_tb_lookup(cpu->mem_io_pc);
     if (tb) {
         /* We can use retranslation to find the PC.  */
         cpu_restore_state_from_tb(cpu, tb, cpu->mem_io_pc);
@@ -1745,7 +1676,7 @@ void cpu_io_recompile(CPUState *cpu, uintptr_t retaddr)
     uint32_t n;
 
     tb_lock();
-    tb = tb_find_pc(retaddr);
+    tb = tcg_tb_lookup(retaddr);
     if (!tb) {
         cpu_abort(cpu, "cpu_io_recompile: could not find TB for pc=%p",
                   (void *)retaddr);
@@ -1784,7 +1715,7 @@ void cpu_io_recompile(CPUState *cpu, uintptr_t retaddr)
          * cpu_exec_nocache() */
         tb_phys_invalidate(tb->orig_tb, -1);
     }
-    tb_remove(tb);
+    tcg_tb_remove(tb);
 }
 
 /* TODO: If env->pc != tb->pc (i.e. the faulting instruction was not
@@ -1855,6 +1786,7 @@ static void print_qht_statistics(FILE *f, fprintf_function cpu_fprintf,
 }
 
 struct tb_tree_stats {
+    size_t nb_tbs;
     size_t host_size;
     size_t target_size;
     size_t max_target_size;
@@ -1868,6 +1800,7 @@ static gboolean tb_tree_stats_iter(gpointer key, gpointer value, gpointer data)
     const TranslationBlock *tb = value;
     struct tb_tree_stats *tst = data;
 
+    tst->nb_tbs++;
     tst->host_size += tb->tc.size;
     tst->target_size += tb->size;
     if (tb->size > tst->max_target_size) {
@@ -1891,10 +1824,8 @@ void dump_exec_info(FILE *f, fprintf_function cpu_fprintf)
     struct qht_stats hst;
     size_t nb_tbs;
 
-    tb_lock();
-
-    nb_tbs = g_tree_nnodes(tb_ctx.tb_tree);
-    g_tree_foreach(tb_ctx.tb_tree, tb_tree_stats_iter, &tst);
+    tcg_tb_foreach(tb_tree_stats_iter, &tst);
+    nb_tbs = tst.nb_tbs;
     /* XXX: avoid using doubles ? */
     cpu_fprintf(f, "Translation buffer state:\n");
     /*
@@ -1929,8 +1860,6 @@ void dump_exec_info(FILE *f, fprintf_function cpu_fprintf)
     cpu_fprintf(f, "TB invalidate count %d\n", tb_ctx.tb_phys_invalidate_count);
     cpu_fprintf(f, "TLB flush count %zu\n", tlb_flush_count());
     tcg_dump_info(f, cpu_fprintf);
-
-    tb_unlock();
 }
 
 void dump_opcount_info(FILE *f, fprintf_function cpu_fprintf)
@@ -2198,7 +2127,7 @@ int page_unprotect(target_ulong address, uintptr_t pc)
      * set the page to PAGE_WRITE and did the TB invalidate for us.
      */
 #ifdef TARGET_HAS_PRECISE_SMC
-    TranslationBlock *current_tb = tb_find_pc(pc);
+    TranslationBlock *current_tb = tcg_tb_lookup(pc);
     if (current_tb) {
         current_tb_invalidated = tb_cflags(current_tb) & CF_INVALID;
     }
diff --git a/tcg/tcg.c b/tcg/tcg.c
index bb24526..b471708 100644
--- a/tcg/tcg.c
+++ b/tcg/tcg.c
@@ -135,6 +135,12 @@ static TCGContext **tcg_ctxs;
 static unsigned int n_tcg_ctxs;
 TCGv_env cpu_env = 0;
 
+struct tcg_region_tree {
+    QemuMutex lock;
+    GTree *tree;
+    /* padding to avoid false sharing is computed at run-time */
+};
+
 /*
  * We divide code_gen_buffer into equally-sized "regions" that TCG threads
  * dynamically allocate from as demand dictates. Given appropriate region
@@ -158,6 +164,13 @@ struct tcg_region_state {
 };
 
 static struct tcg_region_state region;
+/*
+ * This is an array of struct tcg_region_tree's, with padding.
+ * We use void * to simplify the computation of region_trees[i]; each
+ * struct is found every tree_size bytes.
+ */
+static void *region_trees;
+static size_t tree_size;
 static TCGRegSet tcg_target_available_regs[TCG_TYPE_COUNT];
 static TCGRegSet tcg_target_call_clobber_regs;
 
@@ -295,6 +308,180 @@ TCGLabel *gen_new_label(void)
 
 #include "tcg-target.inc.c"
 
+/* compare a pointer @ptr and a tb_tc @s */
+static int ptr_cmp_tb_tc(const void *ptr, const struct tb_tc *s)
+{
+    if (ptr >= s->ptr + s->size) {
+        return 1;
+    } else if (ptr < s->ptr) {
+        return -1;
+    }
+    return 0;
+}
+
+static gint tb_tc_cmp(gconstpointer ap, gconstpointer bp)
+{
+    const struct tb_tc *a = ap;
+    const struct tb_tc *b = bp;
+
+    /*
+     * When both sizes are set, we know this isn't a lookup.
+     * This is the most likely case: every TB must be inserted; lookups
+     * are a lot less frequent.
+     */
+    if (likely(a->size && b->size)) {
+        if (a->ptr > b->ptr) {
+            return 1;
+        } else if (a->ptr < b->ptr) {
+            return -1;
+        }
+        /* a->ptr == b->ptr should happen only on deletions */
+        g_assert(a->size == b->size);
+        return 0;
+    }
+    /*
+     * All lookups have either .size field set to 0.
+     * From the glib sources we see that @ap is always the lookup key. However
+     * the docs provide no guarantee, so we just mark this case as likely.
+     */
+    if (likely(a->size == 0)) {
+        return ptr_cmp_tb_tc(a->ptr, b);
+    }
+    return ptr_cmp_tb_tc(b->ptr, a);
+}
+
+static void tcg_region_trees_init(void)
+{
+    size_t i;
+
+    tree_size = ROUND_UP(sizeof(struct tcg_region_tree), qemu_dcache_linesize);
+    region_trees = qemu_memalign(qemu_dcache_linesize, region.n * tree_size);
+    for (i = 0; i < region.n; i++) {
+        struct tcg_region_tree *rt = region_trees + i * tree_size;
+
+        qemu_mutex_init(&rt->lock);
+        rt->tree = g_tree_new(tb_tc_cmp);
+    }
+}
+
+static struct tcg_region_tree *tc_ptr_to_region_tree(void *p)
+{
+    size_t region_idx;
+
+    if (p < region.start_aligned) {
+        region_idx = 0;
+    } else {
+        ptrdiff_t offset = p - region.start_aligned;
+
+        if (offset > region.stride * (region.n - 1)) {
+            region_idx = region.n - 1;
+        } else {
+            region_idx = offset / region.stride;
+        }
+    }
+    return region_trees + region_idx * tree_size;
+}
+
+void tcg_tb_insert(TranslationBlock *tb)
+{
+    struct tcg_region_tree *rt = tc_ptr_to_region_tree(tb->tc.ptr);
+
+    qemu_mutex_lock(&rt->lock);
+    g_tree_insert(rt->tree, &tb->tc, tb);
+    qemu_mutex_unlock(&rt->lock);
+}
+
+void tcg_tb_remove(TranslationBlock *tb)
+{
+    struct tcg_region_tree *rt = tc_ptr_to_region_tree(tb->tc.ptr);
+
+    qemu_mutex_lock(&rt->lock);
+    g_tree_remove(rt->tree, &tb->tc);
+    qemu_mutex_unlock(&rt->lock);
+}
+
+/*
+ * Find the TB 'tb' such that
+ * tb->tc.ptr <= tc_ptr < tb->tc.ptr + tb->tc.size
+ * Return NULL if not found.
+ */
+TranslationBlock *tcg_tb_lookup(uintptr_t tc_ptr)
+{
+    struct tcg_region_tree *rt = tc_ptr_to_region_tree((void *)tc_ptr);
+    TranslationBlock *tb;
+    struct tb_tc s = { .ptr = (void *)tc_ptr };
+
+    qemu_mutex_lock(&rt->lock);
+    tb = g_tree_lookup(rt->tree, &s);
+    qemu_mutex_unlock(&rt->lock);
+    return tb;
+}
+
+static void tcg_region_tree_lock_all(void)
+{
+    size_t i;
+
+    for (i = 0; i < region.n; i++) {
+        struct tcg_region_tree *rt = region_trees + i * tree_size;
+
+        qemu_mutex_lock(&rt->lock);
+    }
+}
+
+static void tcg_region_tree_unlock_all(void)
+{
+    size_t i;
+
+    for (i = 0; i < region.n; i++) {
+        struct tcg_region_tree *rt = region_trees + i * tree_size;
+
+        qemu_mutex_unlock(&rt->lock);
+    }
+}
+
+void tcg_tb_foreach(GTraverseFunc func, gpointer user_data)
+{
+    size_t i;
+
+    tcg_region_tree_lock_all();
+    for (i = 0; i < region.n; i++) {
+        struct tcg_region_tree *rt = region_trees + i * tree_size;
+
+        g_tree_foreach(rt->tree, func, user_data);
+    }
+    tcg_region_tree_unlock_all();
+}
+
+size_t tcg_nb_tbs(void)
+{
+    size_t nb_tbs = 0;
+    size_t i;
+
+    tcg_region_tree_lock_all();
+    for (i = 0; i < region.n; i++) {
+        struct tcg_region_tree *rt = region_trees + i * tree_size;
+
+        nb_tbs += g_tree_nnodes(rt->tree);
+    }
+    tcg_region_tree_unlock_all();
+    return nb_tbs;
+}
+
+static void tcg_region_tree_reset_all(void)
+{
+    size_t i;
+
+    tcg_region_tree_lock_all();
+    for (i = 0; i < region.n; i++) {
+        struct tcg_region_tree *rt = region_trees + i * tree_size;
+
+        /* Increment the refcount first so that destroy acts as a reset */
+        g_tree_ref(rt->tree);
+        g_tree_destroy(rt->tree);
+    }
+    tcg_region_tree_unlock_all();
+}
+
 static void tcg_region_bounds(size_t curr_region, void **pstart, void **pend)
 {
     void *start, *end;
@@ -380,6 +567,8 @@ void tcg_region_reset_all(void)
         g_assert(!err);
     }
     qemu_mutex_unlock(&region.lock);
+
+    tcg_region_tree_reset_all();
 }
 
 #ifdef CONFIG_USER_ONLY
@@ -496,6 +685,8 @@ void tcg_region_init(void)
         g_assert(!rc);
     }
 
+    tcg_region_trees_init();
+
     /* In user-mode we support only one ctx, so do the initial allocation now */
 #ifdef CONFIG_USER_ONLY
     {
-- 
2.7.4
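The two ideas at the heart of this patch, cache-line padding of the per-region slots and the tc_ptr-to-region index computation, can be seen in isolation in the following self-contained toy model (all names and the stand-in field types are mine; the real code uses QemuMutex, GTree and qemu_memalign):

```c
#include <stddef.h>
#include <stdint.h>
#include <stdlib.h>

#define ROUND_UP(n, d) ((((n) + (d) - 1) / (d)) * (d))

/* One tree+lock per region. Each slot is padded out to a whole d-cache
 * line so that two threads hammering adjacent locks never false-share. */
struct region_tree {
    void *lock;              /* stands in for QemuMutex */
    void *tree;              /* stands in for GTree *   */
};

static char *region_trees;   /* flat buffer: one padded slot per region  */
static size_t tree_size;     /* sizeof(struct region_tree), rounded up   */
static char *start_aligned;  /* base of the first full-stride region     */
static size_t stride;        /* page-multiple, not necessarily a pow2    */
static size_t n_regions;

static void region_trees_init(size_t dcache_linesize)
{
    tree_size = ROUND_UP(sizeof(struct region_tree), dcache_linesize);
    region_trees = aligned_alloc(dcache_linesize, n_regions * tree_size);
}

static struct region_tree *ptr_to_region_tree(const void *p)
{
    size_t idx;

    if ((const char *)p < start_aligned) {
        idx = 0;                         /* first region may be oversized */
    } else {
        size_t off = (size_t)((const char *)p - start_aligned);

        if (off > stride * (n_regions - 1)) {
            idx = n_regions - 1;         /* last region may be undersized */
        } else {
            idx = off / stride;          /* one integer division; not hot */
        }
    }
    return (struct region_tree *)(region_trees + idx * tree_size);
}
```

Because tree_size rather than sizeof(struct region_tree) is the array stride, the slot address must be computed with byte arithmetic, which is why the real code keeps region_trees as a void * and indexes it as `region_trees + i * tree_size`.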
From nobody Tue Oct 28 02:12:20 2025
From: "Emilio G. Cota"
To: qemu-devel@nongnu.org
Cc: Richard Henderson, Alex Bennée, Paolo Bonzini
Date: Thu, 5 Apr 2018 22:12:55 -0400
Message-Id: <1522980788-1252-5-git-send-email-cota@braap.org>
In-Reply-To: <1522980788-1252-1-git-send-email-cota@braap.org>
References: <1522980788-1252-1-git-send-email-cota@braap.org>
Subject: [Qemu-devel] [PATCH v2 04/17] tcg: move tb_ctx.tb_phys_invalidate_count to tcg_ctx

Thereby making it per-TCGContext. Once we remove tb_lock, this will
avoid an atomic increment every time a TB is invalidated.

Reviewed-by: Richard Henderson
Reviewed-by: Alex Bennée
Signed-off-by: Emilio G. Cota
---
 include/exec/tb-context.h |  1 -
 tcg/tcg.h                 |  3 +++
 accel/tcg/translate-all.c |  5 +++--
 tcg/tcg.c                 | 14 ++++++++++++++
 4 files changed, 20 insertions(+), 3 deletions(-)

diff --git a/include/exec/tb-context.h b/include/exec/tb-context.h
index d8472c8..8c9b49c 100644
--- a/include/exec/tb-context.h
+++ b/include/exec/tb-context.h
@@ -37,7 +37,6 @@ struct TBContext {
 
     /* statistics */
     unsigned tb_flush_count;
-    int tb_phys_invalidate_count;
 };
 
 extern TBContext tb_ctx;
diff --git a/tcg/tcg.h b/tcg/tcg.h
index 8bf29cc..9dd9448 100644
--- a/tcg/tcg.h
+++ b/tcg/tcg.h
@@ -694,6 +694,8 @@ struct TCGContext {
     /* Threshold to flush the translated code buffer.  */
     void *code_gen_highwater;
 
+    size_t tb_phys_invalidate_count;
+
     /* Track which vCPU triggers events */
     CPUState *cpu;                      /* *_trans */
 
@@ -852,6 +854,7 @@ size_t tcg_code_capacity(void);
 
 void tcg_tb_insert(TranslationBlock *tb);
 void tcg_tb_remove(TranslationBlock *tb);
+size_t tcg_tb_phys_invalidate_count(void);
 TranslationBlock *tcg_tb_lookup(uintptr_t tc_ptr);
 void tcg_tb_foreach(GTraverseFunc func, gpointer user_data);
 size_t tcg_nb_tbs(void);
diff --git a/accel/tcg/translate-all.c b/accel/tcg/translate-all.c
index 449d4de..feac8ce 100644
--- a/accel/tcg/translate-all.c
+++ b/accel/tcg/translate-all.c
@@ -1072,7 +1072,8 @@ void tb_phys_invalidate(TranslationBlock *tb, tb_page_addr_t page_addr)
     /* suppress any remaining jumps to this TB */
     tb_jmp_unlink(tb);
 
-    tb_ctx.tb_phys_invalidate_count++;
+    atomic_set(&tcg_ctx->tb_phys_invalidate_count,
+               tcg_ctx->tb_phys_invalidate_count + 1);
 }
 
 #ifdef CONFIG_SOFTMMU
@@ -1857,7 +1858,7 @@ void dump_exec_info(FILE *f, fprintf_function cpu_fprintf)
     cpu_fprintf(f, "\nStatistics:\n");
     cpu_fprintf(f, "TB flush count      %u\n",
                 atomic_read(&tb_ctx.tb_flush_count));
-    cpu_fprintf(f, "TB invalidate count %d\n", tb_ctx.tb_phys_invalidate_count);
+    cpu_fprintf(f, "TB invalidate count %zu\n", tcg_tb_phys_invalidate_count());
     cpu_fprintf(f, "TLB flush count     %zu\n", tlb_flush_count());
     tcg_dump_info(f, cpu_fprintf);
 }
diff --git a/tcg/tcg.c b/tcg/tcg.c
index b471708..a7b596e 100644
--- a/tcg/tcg.c
+++ b/tcg/tcg.c
@@ -791,6 +791,20 @@ size_t tcg_code_capacity(void)
     return capacity;
 }
 
+size_t tcg_tb_phys_invalidate_count(void)
+{
+    unsigned int n_ctxs = atomic_read(&n_tcg_ctxs);
+    unsigned int i;
+    size_t total = 0;
+
+    for (i = 0; i < n_ctxs; i++) {
+        const TCGContext *s = atomic_read(&tcg_ctxs[i]);
+
+        total += atomic_read(&s->tb_phys_invalidate_count);
+    }
+    return total;
+}
+
 /* pool based memory allocation */
 void *tcg_malloc_internal(TCGContext *s, int size)
 {
-- 
2.7.4
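The pattern here, one counter per context written only by its owning thread and summed lock-free by readers, is worth seeing in plain C11. A minimal sketch under my own names (the real code uses QEMU's atomic_read/atomic_set, which on the relevant hosts compile to the same relaxed loads and stores):

```c
#include <stdatomic.h>
#include <stddef.h>

#define MAX_CTXS 64

static _Atomic size_t invalidate_count[MAX_CTXS];
static _Atomic unsigned int n_ctxs;

/* Called only by the thread owning context 'self': a plain read-modify-
 * write is race-free because there is a single writer; the relaxed store
 * merely publishes the new value to concurrent readers. */
static void count_one_invalidate(unsigned int self)
{
    size_t n = atomic_load_explicit(&invalidate_count[self],
                                    memory_order_relaxed);
    atomic_store_explicit(&invalidate_count[self], n + 1,
                          memory_order_relaxed);
}

/* Any thread can compute an (instantaneously approximate) total without
 * a lock and without contending on a shared cacheline. */
static size_t total_invalidates(void)
{
    unsigned int n = atomic_load_explicit(&n_ctxs, memory_order_acquire);
    size_t total = 0;

    for (unsigned int i = 0; i < n; i++) {
        total += atomic_load_explicit(&invalidate_count[i],
                                      memory_order_relaxed);
    }
    return total;
}
```

This trades a contended atomic increment on the hot path for a slightly slower, slightly fuzzy read on the cold statistics path, which is exactly the right trade for a counter that is only ever displayed by dump_exec_info().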
From nobody Tue Oct 28 02:12:20 2025
From: "Emilio G. Cota"
To: qemu-devel@nongnu.org
Cc: Richard Henderson, Alex Bennée, Paolo Bonzini
Date: Thu, 5 Apr 2018 22:12:56 -0400
Message-Id: <1522980788-1252-6-git-send-email-cota@braap.org>
In-Reply-To: <1522980788-1252-1-git-send-email-cota@braap.org>
References: <1522980788-1252-1-git-send-email-cota@braap.org>
Subject: [Qemu-devel] [PATCH v2 05/17] translate-all: iterate over TBs in a page with PAGE_FOR_EACH_TB

This commit does several things, but to avoid churn I merged them all
into the same commit. To wit:

- Use uintptr_t instead of TranslationBlock * for the list of TBs in a
  page. Just like we did in (c37e6d7e "tcg: Use uintptr_t type for
  jmp_list_{next|first} fields of TB"), the rationale is the same: these
  are tagged pointers, not pointers. So use a more appropriate type.

- Only check the least significant bit of the tagged pointers. Masking
  with 3/~3 is unnecessary and confusing.

- Introduce the TB_FOR_EACH_TAGGED macro, and use it to define
  PAGE_FOR_EACH_TB, which improves readability. Note that
  TB_FOR_EACH_TAGGED will gain another user in a subsequent patch.

- Update tb_page_remove to use PAGE_FOR_EACH_TB. In case there is a bug
  and we attempt to remove a TB that is not in the list, instead of
  segfaulting (since the list is NULL-terminated) we will reach
  g_assert_not_reached().

Reviewed-by: Richard Henderson
Signed-off-by: Emilio G. Cota
---
 include/exec/exec-all.h   |  2 +-
 accel/tcg/translate-all.c | 62 ++++++++++++++++++++++--------------------------
 2 files changed, 30 insertions(+), 34 deletions(-)

diff --git a/include/exec/exec-all.h b/include/exec/exec-all.h
index 17e08b3..5f7e65a 100644
--- a/include/exec/exec-all.h
+++ b/include/exec/exec-all.h
@@ -356,7 +356,7 @@ struct TranslationBlock {
     struct TranslationBlock *orig_tb;
     /* first and second physical page containing code. The lower bit
       of the pointer tells the index in page_next[] */
-    struct TranslationBlock *page_next[2];
+    uintptr_t page_next[2];
     tb_page_addr_t page_addr[2];
 
     /* The following data are used to directly call another TB from
diff --git a/accel/tcg/translate-all.c b/accel/tcg/translate-all.c
index feac8ce..ecf898c 100644
--- a/accel/tcg/translate-all.c
+++ b/accel/tcg/translate-all.c
@@ -103,7 +103,7 @@
 
 typedef struct PageDesc {
     /* list of TBs intersecting this ram page */
-    TranslationBlock *first_tb;
+    uintptr_t first_tb;
 #ifdef CONFIG_SOFTMMU
     /* in order to optimize self modifying code, we count the number
        of lookups we do to a given page to use a bitmap */
@@ -114,6 +114,15 @@ typedef struct PageDesc {
 #endif
 } PageDesc;
 
+/* list iterators for lists of tagged pointers in TranslationBlock */
+#define TB_FOR_EACH_TAGGED(head, tb, n, field)                          \
+    for (n = (head) & 1, tb = (TranslationBlock *)((head) & ~1);        \
+         tb; tb = (TranslationBlock *)tb->field[n], n = (uintptr_t)tb & 1, \
+             tb = (TranslationBlock *)((uintptr_t)tb & ~1))
+
+#define PAGE_FOR_EACH_TB(pagedesc, tb, n)                       \
+    TB_FOR_EACH_TAGGED((pagedesc)->first_tb, tb, n, page_next)
+
 /* In system mode we want L1_MAP to be based on ram offsets,
    while in user mode we want it to be based on virtual addresses. */
 #if !defined(CONFIG_USER_ONLY)
@@ -818,7 +827,7 @@ static void page_flush_tb_1(int level, void **lp)
         PageDesc *pd = *lp;
 
         for (i = 0; i < V_L2_SIZE; ++i) {
-            pd[i].first_tb = NULL;
+            pd[i].first_tb = (uintptr_t)NULL;
             invalidate_page_bitmap(pd + i);
         }
     } else {
@@ -946,21 +955,21 @@ static void tb_page_check(void)
 
 #endif /* CONFIG_USER_ONLY */
 
-static inline void tb_page_remove(TranslationBlock **ptb, TranslationBlock *tb)
+static inline void tb_page_remove(PageDesc *pd, TranslationBlock *tb)
 {
     TranslationBlock *tb1;
+    uintptr_t *pprev;
     unsigned int n1;
 
-    for (;;) {
-        tb1 = *ptb;
-        n1 = (uintptr_t)tb1 & 3;
-        tb1 = (TranslationBlock *)((uintptr_t)tb1 & ~3);
+    pprev = &pd->first_tb;
+    PAGE_FOR_EACH_TB(pd, tb1, n1) {
         if (tb1 == tb) {
-            *ptb = tb1->page_next[n1];
-            break;
+            *pprev = tb1->page_next[n1];
+            return;
         }
-        ptb = &tb1->page_next[n1];
+        pprev = &tb1->page_next[n1];
     }
+    g_assert_not_reached();
 }
 
 /* remove the TB from a list of TBs jumping to the n-th jump target of the TB */
@@ -1048,12 +1057,12 @@ void tb_phys_invalidate(TranslationBlock *tb, tb_page_addr_t page_addr)
     /* remove the TB from the page list */
     if (tb->page_addr[0] != page_addr) {
         p = page_find(tb->page_addr[0] >> TARGET_PAGE_BITS);
-        tb_page_remove(&p->first_tb, tb);
+        tb_page_remove(p, tb);
         invalidate_page_bitmap(p);
     }
     if (tb->page_addr[1] != -1 && tb->page_addr[1] != page_addr) {
         p = page_find(tb->page_addr[1] >> TARGET_PAGE_BITS);
-        tb_page_remove(&p->first_tb, tb);
+        tb_page_remove(p, tb);
         invalidate_page_bitmap(p);
     }
 
@@ -1084,10 +1093,7 @@ static void build_page_bitmap(PageDesc *p)
 
     p->code_bitmap = bitmap_new(TARGET_PAGE_SIZE);
 
-    tb = p->first_tb;
-    while (tb != NULL) {
-        n = (uintptr_t)tb & 3;
-        tb = (TranslationBlock *)((uintptr_t)tb & ~3);
(TranslationBlock *)((uintptr_t)tb & ~3); + PAGE_FOR_EACH_TB(p, tb, n) { /* NOTE: this is subtle as a TB may span two physical pages */ if (n =3D=3D 0) { /* NOTE: tb_end may be after the end of the page, but @@ -1102,7 +1108,6 @@ static void build_page_bitmap(PageDesc *p) tb_end =3D ((tb->pc + tb->size) & ~TARGET_PAGE_MASK); } bitmap_set(p->code_bitmap, tb_start, tb_end - tb_start); - tb =3D tb->page_next[n]; } } #endif @@ -1125,9 +1130,9 @@ static inline void tb_alloc_page(TranslationBlock *tb, p =3D page_find_alloc(page_addr >> TARGET_PAGE_BITS, 1); tb->page_next[n] =3D p->first_tb; #ifndef CONFIG_USER_ONLY - page_already_protected =3D p->first_tb !=3D NULL; + page_already_protected =3D p->first_tb !=3D (uintptr_t)NULL; #endif - p->first_tb =3D (TranslationBlock *)((uintptr_t)tb | n); + p->first_tb =3D (uintptr_t)tb | n; invalidate_page_bitmap(p); =20 #if defined(CONFIG_USER_ONLY) @@ -1404,7 +1409,7 @@ void tb_invalidate_phys_range(tb_page_addr_t start, t= b_page_addr_t end) void tb_invalidate_phys_page_range(tb_page_addr_t start, tb_page_addr_t en= d, int is_cpu_write_access) { - TranslationBlock *tb, *tb_next; + TranslationBlock *tb; tb_page_addr_t tb_start, tb_end; PageDesc *p; int n; @@ -1435,11 +1440,7 @@ void tb_invalidate_phys_page_range(tb_page_addr_t st= art, tb_page_addr_t end, /* we remove all the TBs in the range [start, end[ */ /* XXX: see if in some cases it could be faster to invalidate all the code */ - tb =3D p->first_tb; - while (tb !=3D NULL) { - n =3D (uintptr_t)tb & 3; - tb =3D (TranslationBlock *)((uintptr_t)tb & ~3); - tb_next =3D tb->page_next[n]; + PAGE_FOR_EACH_TB(p, tb, n) { /* NOTE: this is subtle as a TB may span two physical pages */ if (n =3D=3D 0) { /* NOTE: tb_end may be after the end of the page, but @@ -1476,7 +1477,6 @@ void tb_invalidate_phys_page_range(tb_page_addr_t sta= rt, tb_page_addr_t end, #endif /* TARGET_HAS_PRECISE_SMC */ tb_phys_invalidate(tb, -1); } - tb =3D tb_next; } #if !defined(CONFIG_USER_ONLY) /* if no code remaining, no need to continue to use slow writes */ @@ -1570,18 +1570,15 @@ static bool tb_invalidate_phys_page(tb_page_addr_t = addr, uintptr_t pc) } =20 tb_lock(); - tb =3D p->first_tb; #ifdef TARGET_HAS_PRECISE_SMC - if (tb && pc !=3D 0) { + if (p->first_tb && pc !=3D 0) { current_tb =3D tcg_tb_lookup(pc); } if (cpu !=3D NULL) { env =3D cpu->env_ptr; } #endif - while (tb !=3D NULL) { - n =3D (uintptr_t)tb & 3; - tb =3D (TranslationBlock *)((uintptr_t)tb & ~3); + PAGE_FOR_EACH_TB(p, tb, n) { #ifdef TARGET_HAS_PRECISE_SMC if (current_tb =3D=3D tb && (current_tb->cflags & CF_COUNT_MASK) !=3D 1) { @@ -1598,9 +1595,8 @@ static bool tb_invalidate_phys_page(tb_page_addr_t ad= dr, uintptr_t pc) } #endif /* TARGET_HAS_PRECISE_SMC */ tb_phys_invalidate(tb, addr); - tb =3D tb->page_next[n]; } - p->first_tb =3D NULL; + p->first_tb =3D (uintptr_t)NULL; #ifdef TARGET_HAS_PRECISE_SMC if (current_tb_modified) { /* Force execution of one insn next time. 
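For readers unfamiliar with the tagged-pointer trick above, here is a
minimal standalone sketch (illustrative names, not QEMU code) of the
walk that TB_FOR_EACH_TAGGED performs: the low bit of each link selects
which next[] slot continues the list, and masking it off recovers the
node pointer. This is safe because the nodes are pointer-aligned, so
the low bit of a node address is always free.

    /* sketch: tagged-pointer list walk, hypothetical 'struct node' */
    #include <stdint.h>
    #include <stdio.h>

    struct node {
        uintptr_t next[2];  /* tagged: pointer | index-into-next[] */
        int val;
    };

    static void walk(uintptr_t head)
    {
        unsigned n = head & 1;
        struct node *p = (struct node *)(head & ~(uintptr_t)1);

        while (p) {
            printf("%d (via slot %u)\n", p->val, n);
            uintptr_t link = p->next[n];
            n = link & 1;
            p = (struct node *)(link & ~(uintptr_t)1);
        }
    }

    int main(void)
    {
        struct node b = { { 0, 0 }, 2 };
        /* continue the list through b->next[1] */
        struct node a = { { (uintptr_t)&b | 1, 0 }, 1 };

        walk((uintptr_t)&a);  /* start through a->next[0] */
        return 0;
    }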
From nobody Tue Oct 28 02:12:20 2025
From: "Emilio G. Cota"
To: qemu-devel@nongnu.org
Cc: Richard Henderson, Alex Bennée, Paolo Bonzini
Date: Thu, 5 Apr 2018 22:12:57 -0400
Message-Id: <1522980788-1252-7-git-send-email-cota@braap.org>
In-Reply-To: <1522980788-1252-1-git-send-email-cota@braap.org>
References: <1522980788-1252-1-git-send-email-cota@braap.org>
Subject: [Qemu-devel] [PATCH v2 06/17] translate-all: make l1_map lockless
Cota" To: qemu-devel@nongnu.org Date: Thu, 5 Apr 2018 22:12:57 -0400 Message-Id: <1522980788-1252-7-git-send-email-cota@braap.org> X-Mailer: git-send-email 2.7.4 In-Reply-To: <1522980788-1252-1-git-send-email-cota@braap.org> References: <1522980788-1252-1-git-send-email-cota@braap.org> MIME-Version: 1.0 Content-Type: text/plain; charset="utf-8" Content-Transfer-Encoding: quoted-printable X-detected-operating-system: by eggs.gnu.org: GNU/Linux 2.2.x-3.x [generic] [fuzzy] X-Received-From: 66.111.4.29 Subject: [Qemu-devel] [PATCH v2 06/17] translate-all: make l1_map lockless X-BeenThere: qemu-devel@nongnu.org X-Mailman-Version: 2.1.21 Precedence: list List-Id: List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Cc: Richard Henderson , =?UTF-8?q?Alex=20Benn=C3=A9e?= , Paolo Bonzini Errors-To: qemu-devel-bounces+importer=patchew.org@nongnu.org Sender: "Qemu-devel" X-ZohoMail: RSF_0 Z_629925259 SPT_0 Groundwork for supporting parallel TCG generation. We never remove entries from the radix tree, so we can use cmpxchg to implement lockless insertions. Reviewed-by: Richard Henderson Reviewed-by: Alex Benn=C3=A9e Signed-off-by: Emilio G. Cota --- docs/devel/multi-thread-tcg.txt | 4 ++-- accel/tcg/translate-all.c | 24 ++++++++++++++---------- 2 files changed, 16 insertions(+), 12 deletions(-) diff --git a/docs/devel/multi-thread-tcg.txt b/docs/devel/multi-thread-tcg.= txt index a99b456..faf8918 100644 --- a/docs/devel/multi-thread-tcg.txt +++ b/docs/devel/multi-thread-tcg.txt @@ -134,8 +134,8 @@ tb_set_jmp_target() code. Modification to the linked li= sts that allow searching for linked pages are done under the protect of the tb_lock(). =20 -The global page table is protected by the tb_lock() in system-mode and -mmap_lock() in linux-user mode. +The global page table is a lockless radix tree; cmpxchg is used +to atomically insert new elements. =20 The lookup caches are updated atomically and the lookup hash uses QHT which is designed for concurrent safe lookup. diff --git a/accel/tcg/translate-all.c b/accel/tcg/translate-all.c index ecf898c..83aec43 100644 --- a/accel/tcg/translate-all.c +++ b/accel/tcg/translate-all.c @@ -469,20 +469,12 @@ static void page_init(void) #endif } =20 -/* If alloc=3D1: - * Called with tb_lock held for system emulation. - * Called with mmap_lock held for user-mode emulation. - */ static PageDesc *page_find_alloc(tb_page_addr_t index, int alloc) { PageDesc *pd; void **lp; int i; =20 - if (alloc) { - assert_memory_lock(); - } - /* Level 1. Always allocated. 
*/ lp =3D l1_map + ((index >> v_l1_shift) & (v_l1_size - 1)); =20 @@ -491,11 +483,17 @@ static PageDesc *page_find_alloc(tb_page_addr_t index= , int alloc) void **p =3D atomic_rcu_read(lp); =20 if (p =3D=3D NULL) { + void *existing; + if (!alloc) { return NULL; } p =3D g_new0(void *, V_L2_SIZE); - atomic_rcu_set(lp, p); + existing =3D atomic_cmpxchg(lp, NULL, p); + if (unlikely(existing)) { + g_free(p); + p =3D existing; + } } =20 lp =3D p + ((index >> (i * V_L2_BITS)) & (V_L2_SIZE - 1)); @@ -503,11 +501,17 @@ static PageDesc *page_find_alloc(tb_page_addr_t index= , int alloc) =20 pd =3D atomic_rcu_read(lp); if (pd =3D=3D NULL) { + void *existing; + if (!alloc) { return NULL; } pd =3D g_new0(PageDesc, V_L2_SIZE); - atomic_rcu_set(lp, pd); + existing =3D atomic_cmpxchg(lp, NULL, pd); + if (unlikely(existing)) { + g_free(pd); + pd =3D existing; + } } =20 return pd + (index & (V_L2_SIZE - 1)); --=20 2.7.4 From nobody Tue Oct 28 02:12:20 2025 Delivered-To: importer@patchew.org Received-SPF: pass (zoho.com: domain of gnu.org designates 208.118.235.17 as permitted sender) client-ip=208.118.235.17; envelope-from=qemu-devel-bounces+importer=patchew.org@nongnu.org; helo=lists.gnu.org; Authentication-Results: mx.zohomail.com; spf=pass (zoho.com: domain of gnu.org designates 208.118.235.17 as permitted sender) smtp.mailfrom=qemu-devel-bounces+importer=patchew.org@nongnu.org Return-Path: Received: from lists.gnu.org (lists.gnu.org [208.118.235.17]) by mx.zohomail.com with SMTPS id 1522980924795545.5489193405482; Thu, 5 Apr 2018 19:15:24 -0700 (PDT) Received: from localhost ([::1]:43994 helo=lists.gnu.org) by lists.gnu.org with esmtp (Exim 4.71) (envelope-from ) id 1f4GuL-0003oD-MH for importer@patchew.org; Thu, 05 Apr 2018 22:15:17 -0400 Received: from eggs.gnu.org ([2001:4830:134:3::10]:58418) by lists.gnu.org with esmtp (Exim 4.71) (envelope-from ) id 1f4GsL-0002GE-7c for qemu-devel@nongnu.org; Thu, 05 Apr 2018 22:13:15 -0400 Received: from Debian-exim by eggs.gnu.org with spam-scanned (Exim 4.71) (envelope-from ) id 1f4GsJ-0003Ug-70 for qemu-devel@nongnu.org; Thu, 05 Apr 2018 22:13:13 -0400 Received: from out5-smtp.messagingengine.com ([66.111.4.29]:52767) by eggs.gnu.org with esmtps (TLS1.0:DHE_RSA_AES_256_CBC_SHA1:32) (Exim 4.71) (envelope-from ) id 1f4GsJ-0003U1-1Z for qemu-devel@nongnu.org; Thu, 05 Apr 2018 22:13:11 -0400 Received: from compute4.internal (compute4.nyi.internal [10.202.2.44]) by mailout.nyi.internal (Postfix) with ESMTP id B38E320B44; Thu, 5 Apr 2018 22:13:10 -0400 (EDT) Received: from mailfrontend2 ([10.202.2.163]) by compute4.internal (MEProxy); Thu, 05 Apr 2018 22:13:10 -0400 Received: from localhost (flamenco.cs.columbia.edu [128.59.20.216]) by mail.messagingengine.com (Postfix) with ESMTPA id 6FBEE1025C; Thu, 5 Apr 2018 22:13:10 -0400 (EDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=braap.org; h=cc :content-transfer-encoding:content-type:date:from:in-reply-to :message-id:mime-version:references:subject:to:x-me-sender :x-me-sender:x-sasl-enc; s=mesmtp; bh=5R7JN6z3Xmq0/8fQvk4hz5nZ/H SWyWfbrfgKOkjnWEY=; b=vawkuIS7LnJjskunPpKIhtNfCD53ARnTQV4PVN9S/F xjeCnVBYS7rLotHIVkgD65y6te4fXAek74pMRnKKGqQ3QnbCBnmLJf950lwr3Pqt rrMfL/sDeDXeR4PEPvWri6tfdwN9lyikuDMyisQi1fo+TPB9ytYQu6GiziDMJBgk I= DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d= messagingengine.com; h=cc:content-transfer-encoding:content-type :date:from:in-reply-to:message-id:mime-version:references :subject:to:x-me-sender:x-me-sender:x-sasl-enc; s=fm2; bh=5R7JN6 z3Xmq0/8fQvk4hz5nZ/HSWyWfbrfgKOkjnWEY=; 
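As a side note, the allocate-then-cmpxchg-publish pattern used in
page_find_alloc above can be sketched in isolation as follows. C11
atomics stand in for QEMU's atomic_cmpxchg, and the names are
illustrative: racing allocators may both create a node, but only one
CAS succeeds; the loser frees its copy and adopts the winner's. This
is valid only because nodes are never removed from the tree.

    /* sketch: lockless lazy initialization of a table slot */
    #include <stdatomic.h>
    #include <stdlib.h>

    typedef struct Level { _Atomic(void *) slot[64]; } Level;

    static void *get_or_alloc(Level *l, unsigned i, size_t size)
    {
        void *p = atomic_load_explicit(&l->slot[i], memory_order_acquire);

        if (p == NULL) {
            void *fresh = calloc(1, size);
            void *expected = NULL;

            if (atomic_compare_exchange_strong(&l->slot[i], &expected,
                                               fresh)) {
                p = fresh;      /* we published our allocation */
            } else {
                free(fresh);    /* lost the race; use the winner's */
                p = expected;   /* CAS failure wrote the current value */
            }
        }
        return p;
    }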
From nobody Tue Oct 28 02:12:20 2025
From: "Emilio G. Cota"
To: qemu-devel@nongnu.org
Cc: Richard Henderson, Alex Bennée, Paolo Bonzini
Date: Thu, 5 Apr 2018 22:12:58 -0400
Message-Id: <1522980788-1252-8-git-send-email-cota@braap.org>
In-Reply-To: <1522980788-1252-1-git-send-email-cota@braap.org>
References: <1522980788-1252-1-git-send-email-cota@braap.org>
Subject: [Qemu-devel] [PATCH v2 07/17] translate-all: remove hole in PageDesc

Groundwork for supporting parallel TCG generation.

Move the hole to the end of the struct, so that a u32
field can be added there without bloating the struct.

Reviewed-by: Richard Henderson
Reviewed-by: Alex Bennée
Signed-off-by: Emilio G. Cota
---
 accel/tcg/translate-all.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/accel/tcg/translate-all.c b/accel/tcg/translate-all.c
index 83aec43..7c72354 100644
--- a/accel/tcg/translate-all.c
+++ b/accel/tcg/translate-all.c
@@ -107,8 +107,8 @@ typedef struct PageDesc {
 #ifdef CONFIG_SOFTMMU
     /* in order to optimize self modifying code, we count the number
        of lookups we do to a given page to use a bitmap */
-    unsigned int code_write_count;
     unsigned long *code_bitmap;
+    unsigned int code_write_count;
 #else
     unsigned long flags;
 #endif
-- 
2.7.4
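A minimal sketch of the padding argument behind this patch, with
illustrative field names and assuming an LP64 ABI: placing the
unsigned int between two pointers leaves a 4-byte hole in the middle,
while placing it after the pointers moves the hole to the tail, where
a later u32 member can live without growing the struct.

    /* sketch: both structs are 24 bytes on LP64, but only 'after'
     * can absorb another u32 for free */
    #include <stdio.h>

    struct before { void *first; unsigned int count; unsigned long *bm; };
    struct after  { void *first; unsigned long *bm; unsigned int count; };

    int main(void)
    {
        printf("%zu %zu\n", sizeof(struct before), sizeof(struct after));
        return 0;
    }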
From nobody Tue Oct 28 02:12:20 2025
From: "Emilio G. Cota"
To: qemu-devel@nongnu.org
Cc: Richard Henderson, Alex Bennée, Paolo Bonzini
Date: Thu, 5 Apr 2018 22:12:59 -0400
Message-Id: <1522980788-1252-9-git-send-email-cota@braap.org>
In-Reply-To: <1522980788-1252-1-git-send-email-cota@braap.org>
References: <1522980788-1252-1-git-send-email-cota@braap.org>
Subject: [Qemu-devel] [PATCH v2 08/17] translate-all: work page-by-page in tb_invalidate_phys_range_1

So that we pass a same-page range to tb_invalidate_phys_page_range,
instead of always passing an end address that could be on a different
page.

As discussed with Peter Maydell on the list [1],
tb_invalidate_phys_page_range doesn't actually do much with 'end',
which explains why we have never hit a bug despite going against what
the comment on top of tb_invalidate_phys_page_range requires:

> * Invalidate all TBs which intersect with the target physical address range
> * [start;end[. NOTE: start and end must refer to the *same* physical page.

The appended patch honours the comment, which avoids confusion.

While at it, rework the loop into a for loop, which is less error
prone (e.g. "continue" won't result in an infinite loop).

[1] https://lists.gnu.org/archive/html/qemu-devel/2017-07/msg09165.html

Reviewed-by: Richard Henderson
Reviewed-by: Alex Bennée
Signed-off-by: Emilio G. Cota
---
 accel/tcg/translate-all.c | 12 ++++++++----
 1 file changed, 8 insertions(+), 4 deletions(-)

diff --git a/accel/tcg/translate-all.c b/accel/tcg/translate-all.c
index 7c72354..b27aacc 100644
--- a/accel/tcg/translate-all.c
+++ b/accel/tcg/translate-all.c
@@ -1378,10 +1378,14 @@ TranslationBlock *tb_gen_code(CPUState *cpu,
  */
 static void tb_invalidate_phys_range_1(tb_page_addr_t start, tb_page_addr_t end)
 {
-    while (start < end) {
-        tb_invalidate_phys_page_range(start, end, 0);
-        start &= TARGET_PAGE_MASK;
-        start += TARGET_PAGE_SIZE;
+    tb_page_addr_t next;
+
+    for (next = (start & TARGET_PAGE_MASK) + TARGET_PAGE_SIZE;
+         start < end;
+         start = next, next += TARGET_PAGE_SIZE) {
+        tb_page_addr_t bound = MIN(next, end);
+
+        tb_invalidate_phys_page_range(start, bound, 0);
     }
 }
-- 
2.7.4
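The page-splitting loop above can be exercised standalone. A sketch
with illustrative PAGE_* constants standing in for TARGET_PAGE_*: each
call sees a [chunk_start, chunk_bound) range that never crosses a page
boundary.

    /* sketch: split [start, end) into same-page chunks */
    #include <stdint.h>
    #include <stdio.h>

    #define PAGE_SIZE 4096u
    #define PAGE_MASK (~(uintptr_t)(PAGE_SIZE - 1))
    #define MIN(a, b) ((a) < (b) ? (a) : (b))

    static void visit(uintptr_t start, uintptr_t end)
    {
        uintptr_t next;

        for (next = (start & PAGE_MASK) + PAGE_SIZE;
             start < end;
             start = next, next += PAGE_SIZE) {
            uintptr_t bound = MIN(next, end);

            printf("chunk [%#lx, %#lx)\n", (unsigned long)start,
                   (unsigned long)bound);
        }
    }

    int main(void)
    {
        visit(0x1ff0, 0x3010); /* spans three pages -> three chunks */
        return 0;
    }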
From nobody Tue Oct 28 02:12:20 2025
From: "Emilio G. Cota"
To: qemu-devel@nongnu.org
Cc: Richard Henderson, Alex Bennée, Paolo Bonzini
Date: Thu, 5 Apr 2018 22:13:00 -0400
Message-Id: <1522980788-1252-10-git-send-email-cota@braap.org>
In-Reply-To: <1522980788-1252-1-git-send-email-cota@braap.org>
References: <1522980788-1252-1-git-send-email-cota@braap.org>
Subject: [Qemu-devel] [PATCH v2 09/17] translate-all: move tb_invalidate_phys_page_range up in the file

This greatly simplifies the next commit's diff.

Reviewed-by: Richard Henderson
Reviewed-by: Alex Bennée
Signed-off-by: Emilio G. Cota
---
 accel/tcg/translate-all.c | 77 ++++++++++++++++++++++++------------------------
 1 file changed, 39 insertions(+), 38 deletions(-)

diff --git a/accel/tcg/translate-all.c b/accel/tcg/translate-all.c
index b27aacc..4d4391f 100644
--- a/accel/tcg/translate-all.c
+++ b/accel/tcg/translate-all.c
@@ -1368,44 +1368,6 @@ TranslationBlock *tb_gen_code(CPUState *cpu,

 /*
  * Invalidate all TBs which intersect with the target physical address range
- * [start;end[. NOTE: start and end may refer to *different* physical pages.
- * 'is_cpu_write_access' should be true if called from a real cpu write
- * access: the virtual CPU will exit the current TB if code is modified inside
- * this TB.
- *
- * Called with mmap_lock held for user-mode emulation, grabs tb_lock
- * Called with tb_lock held for system-mode emulation
- */
-static void tb_invalidate_phys_range_1(tb_page_addr_t start, tb_page_addr_t end)
-{
-    tb_page_addr_t next;
-
-    for (next = (start & TARGET_PAGE_MASK) + TARGET_PAGE_SIZE;
-         start < end;
-         start = next, next += TARGET_PAGE_SIZE) {
-        tb_page_addr_t bound = MIN(next, end);
-
-        tb_invalidate_phys_page_range(start, bound, 0);
-    }
-}
-
-#ifdef CONFIG_SOFTMMU
-void tb_invalidate_phys_range(tb_page_addr_t start, tb_page_addr_t end)
-{
-    assert_tb_locked();
-    tb_invalidate_phys_range_1(start, end);
-}
-#else
-void tb_invalidate_phys_range(tb_page_addr_t start, tb_page_addr_t end)
-{
-    assert_memory_lock();
-    tb_lock();
-    tb_invalidate_phys_range_1(start, end);
-    tb_unlock();
-}
-#endif
-/*
- * Invalidate all TBs which intersect with the target physical address range
  * [start;end[. NOTE: start and end must refer to the *same* physical page.
  * 'is_cpu_write_access' should be true if called from a real cpu write
  * access: the virtual CPU will exit the current TB if code is modified inside
@@ -1502,6 +1464,45 @@ void tb_invalidate_phys_page_range(tb_page_addr_t start, tb_page_addr_t end,
 #endif
 }

+/*
+ * Invalidate all TBs which intersect with the target physical address range
+ * [start;end[. NOTE: start and end may refer to *different* physical pages.
+ * 'is_cpu_write_access' should be true if called from a real cpu write
+ * access: the virtual CPU will exit the current TB if code is modified inside
+ * this TB.
+ *
+ * Called with mmap_lock held for user-mode emulation, grabs tb_lock
+ * Called with tb_lock held for system-mode emulation
+ */
+static void tb_invalidate_phys_range_1(tb_page_addr_t start, tb_page_addr_t end)
+{
+    tb_page_addr_t next;
+
+    for (next = (start & TARGET_PAGE_MASK) + TARGET_PAGE_SIZE;
+         start < end;
+         start = next, next += TARGET_PAGE_SIZE) {
+        tb_page_addr_t bound = MIN(next, end);
+
+        tb_invalidate_phys_page_range(start, bound, 0);
+    }
+}
+
+#ifdef CONFIG_SOFTMMU
+void tb_invalidate_phys_range(tb_page_addr_t start, tb_page_addr_t end)
+{
+    assert_tb_locked();
+    tb_invalidate_phys_range_1(start, end);
+}
+#else
+void tb_invalidate_phys_range(tb_page_addr_t start, tb_page_addr_t end)
+{
+    assert_memory_lock();
+    tb_lock();
+    tb_invalidate_phys_range_1(start, end);
+    tb_unlock();
+}
+#endif
+
 #ifdef CONFIG_SOFTMMU
 /* len must be <= 8 and start must be a multiple of len.
  * Called via softmmu_template.h when code areas are written to with
-- 
2.7.4
From nobody Tue Oct 28 02:12:20 2025
From: "Emilio G. Cota"
To: qemu-devel@nongnu.org
Cc: Richard Henderson, Alex Bennée, Paolo Bonzini
Date: Thu, 5 Apr 2018 22:13:01 -0400
Message-Id: <1522980788-1252-11-git-send-email-cota@braap.org>
In-Reply-To: <1522980788-1252-1-git-send-email-cota@braap.org>
References: <1522980788-1252-1-git-send-email-cota@braap.org>
Subject: [Qemu-devel] [PATCH v2 10/17] translate-all: use per-page locking in !user-mode

Groundwork for supporting parallel TCG generation.

Instead of using a global lock (tb_lock) to protect changes
to pages, use fine-grained, per-page locks in !user-mode.
User-mode stays with mmap_lock.

Sometimes changes need to happen atomically on more than one
page (e.g. when a TB that spans across two pages is
added/invalidated, or when a range of pages is invalidated).
We therefore introduce struct page_collection, which helps
us keep track of a set of pages that have been locked in
the appropriate locking order (i.e. by ascending page index).

This commit first introduces the structs and the function helpers,
to then convert the calling code to use per-page locking. Note
that tb_lock is not removed yet.

While at it, rename tb_alloc_page to tb_page_add, which pairs with
tb_page_remove.

Signed-off-by: Emilio G. Cota
---
 accel/tcg/translate-all.h |   3 +
 include/exec/exec-all.h   |   3 +-
 accel/tcg/translate-all.c | 432 +++++++++++++++++++++++++++++++++++++++++----
 3 files changed, 396 insertions(+), 42 deletions(-)

diff --git a/accel/tcg/translate-all.h b/accel/tcg/translate-all.h
index ba8e4d6..6d1d258 100644
--- a/accel/tcg/translate-all.h
+++ b/accel/tcg/translate-all.h
@@ -23,6 +23,9 @@


 /* translate-all.c */
+struct page_collection *page_collection_lock(tb_page_addr_t start,
+                                             tb_page_addr_t end);
+void page_collection_unlock(struct page_collection *set);
 void tb_invalidate_phys_page_fast(tb_page_addr_t start, int len);
 void tb_invalidate_phys_page_range(tb_page_addr_t start, tb_page_addr_t end,
                                    int is_cpu_write_access);
diff --git a/include/exec/exec-all.h b/include/exec/exec-all.h
index 5f7e65a..aeaa127 100644
--- a/include/exec/exec-all.h
+++ b/include/exec/exec-all.h
@@ -355,7 +355,8 @@ struct TranslationBlock {
     /* original tb when cflags has CF_NOCACHE */
     struct TranslationBlock *orig_tb;
     /* first and second physical page containing code. The lower bit
-       of the pointer tells the index in page_next[] */
+       of the pointer tells the index in page_next[].
+       The list is protected by the TB's page('s) lock(s) */
     uintptr_t page_next[2];
     tb_page_addr_t page_addr[2];

diff --git a/accel/tcg/translate-all.c b/accel/tcg/translate-all.c
index 4d4391f..042378a 100644
--- a/accel/tcg/translate-all.c
+++ b/accel/tcg/translate-all.c
@@ -112,8 +112,55 @@ typedef struct PageDesc {
 #else
     unsigned long flags;
 #endif
+#ifndef CONFIG_USER_ONLY
+    QemuSpin lock;
+#endif
 } PageDesc;

+/**
+ * struct page_entry - page descriptor entry
+ * @pd:     pointer to the &struct PageDesc of the page this entry represents
+ * @index:  page index of the page
+ * @locked: whether the page is locked
+ *
+ * This struct helps us keep track of the locked state of a page, without
+ * bloating &struct PageDesc.
+ *
+ * A page lock protects accesses to all fields of &struct PageDesc.
+ *
+ * See also: &struct page_collection.
+ */
+struct page_entry {
+    PageDesc *pd;
+    tb_page_addr_t index;
+    bool locked;
+};
+
+/**
+ * struct page_collection - tracks a set of pages (i.e. &struct page_entry's)
+ * @tree: Binary search tree (BST) of the pages, with key == page index
+ * @max:  Pointer to the page in @tree with the highest page index
+ *
+ * To avoid deadlock we lock pages in ascending order of page index.
+ * When operating on a set of pages, we need to keep track of them so that
+ * we can lock them in order and also unlock them later. For this we collect
+ * pages (i.e. &struct page_entry's) in a binary search @tree. Given that the
+ * @tree implementation we use does not provide an O(1) operation to obtain the
+ * highest-ranked element, we use @max to keep track of the inserted page
+ * with the highest index. This is valuable because if a page is not in
+ * the tree and its index is higher than @max's, then we can lock it
+ * without breaking the locking order rule.
+ *
+ * Note on naming: 'struct page_set' would be shorter, but we already have a few
+ * page_set_*() helpers, so page_collection is used instead to avoid confusion.
+ *
+ * See also: page_collection_lock().
+ */
+struct page_collection {
+    GTree *tree;
+    struct page_entry *max;
+};
+
 /* list iterators for lists of tagged pointers in TranslationBlock */
 #define TB_FOR_EACH_TAGGED(head, tb, n, field)                              \
     for (n = (head) & 1, tb = (TranslationBlock *)((head) & ~1);            \
@@ -507,6 +554,15 @@ static PageDesc *page_find_alloc(tb_page_addr_t index, int alloc)
             return NULL;
         }
         pd = g_new0(PageDesc, V_L2_SIZE);
+#ifndef CONFIG_USER_ONLY
+        {
+            int i;
+
+            for (i = 0; i < V_L2_SIZE; i++) {
+                qemu_spin_init(&pd[i].lock);
+            }
+        }
+#endif
         existing = atomic_cmpxchg(lp, NULL, pd);
         if (unlikely(existing)) {
             g_free(pd);
@@ -522,6 +578,228 @@ static inline PageDesc *page_find(tb_page_addr_t index)
     return page_find_alloc(index, 0);
 }

+/* In user-mode page locks aren't used; mmap_lock is enough */
+#ifdef CONFIG_USER_ONLY
+static inline void page_lock(PageDesc *pd)
+{ }
+
+static inline void page_unlock(PageDesc *pd)
+{ }
+
+static inline void page_lock_tb(const TranslationBlock *tb)
+{ }
+
+static inline void page_unlock_tb(const TranslationBlock *tb)
+{ }
+
+struct page_collection *
+page_collection_lock(tb_page_addr_t start, tb_page_addr_t end)
+{
+    return NULL;
+}
+
+void page_collection_unlock(struct page_collection *set)
+{ }
+#else /* !CONFIG_USER_ONLY */
+
+static inline void page_lock(PageDesc *pd)
+{
+    qemu_spin_lock(&pd->lock);
+}
+
+static inline void page_unlock(PageDesc *pd)
+{
+    qemu_spin_unlock(&pd->lock);
+}
+
+/* lock the page(s) of a TB in the correct acquisition order */
+static inline void page_lock_tb(const TranslationBlock *tb)
+{
+    if (likely(tb->page_addr[1] == -1)) {
+        page_lock(page_find(tb->page_addr[0] >> TARGET_PAGE_BITS));
+        return;
+    }
+    if (tb->page_addr[0] < tb->page_addr[1]) {
+        page_lock(page_find(tb->page_addr[0] >> TARGET_PAGE_BITS));
+        page_lock(page_find(tb->page_addr[1] >> TARGET_PAGE_BITS));
+    } else {
+        page_lock(page_find(tb->page_addr[1] >> TARGET_PAGE_BITS));
+        page_lock(page_find(tb->page_addr[0] >> TARGET_PAGE_BITS));
+    }
+}
+
+static inline void page_unlock_tb(const TranslationBlock *tb)
+{
+    page_unlock(page_find(tb->page_addr[0] >> TARGET_PAGE_BITS));
+    if (unlikely(tb->page_addr[1] != -1)) {
+        page_unlock(page_find(tb->page_addr[1] >> TARGET_PAGE_BITS));
+    }
+}
+
+static inline struct page_entry *
+page_entry_new(PageDesc *pd, tb_page_addr_t index)
+{
+    struct page_entry *pe = g_malloc(sizeof(*pe));
+
+    pe->index = index;
+    pe->pd = pd;
+    pe->locked = false;
+    return pe;
+}
+
+static void page_entry_destroy(gpointer p)
+{
+    struct page_entry *pe = p;
+
+    g_assert(pe->locked);
+    page_unlock(pe->pd);
+    g_free(pe);
+}
+
+/* returns false on success */
+static bool page_entry_trylock(struct page_entry *pe)
+{
+    bool busy;
+
+    busy = qemu_spin_trylock(&pe->pd->lock);
+    if (!busy) {
+        g_assert(!pe->locked);
+        pe->locked = true;
+    }
+    return busy;
+}
+
+static void do_page_entry_lock(struct page_entry *pe)
+{
+    page_lock(pe->pd);
+    g_assert(!pe->locked);
+    pe->locked = true;
+}
+
+static gboolean page_entry_lock(gpointer key, gpointer value, gpointer data)
+{
+    struct page_entry *pe = value;
+
+    do_page_entry_lock(pe);
+    return FALSE;
+}
+
+static gboolean page_entry_unlock(gpointer key, gpointer value, gpointer data)
+{
+    struct page_entry *pe = value;
+
+    if (pe->locked) {
+        pe->locked = false;
+        page_unlock(pe->pd);
+    }
+    return FALSE;
+}
+
+/*
+ * Trylock a page, and if successful, add the page to a collection.
+ * Returns true ("busy") if the page could not be locked; false otherwise.
+ */
+static bool page_trylock_add(struct page_collection *set, tb_page_addr_t addr)
+{
+    tb_page_addr_t index = addr >> TARGET_PAGE_BITS;
+    struct page_entry *pe;
+    PageDesc *pd;
+
+    pe = g_tree_lookup(set->tree, &index);
+    if (pe) {
+        return false;
+    }
+
+    pd = page_find(index);
+    if (pd == NULL) {
+        return false;
+    }
+
+    pe = page_entry_new(pd, index);
+    g_tree_insert(set->tree, &pe->index, pe);
+
+    /*
+     * If this is either (1) the first insertion or (2) a page whose index
+     * is higher than any other so far, just lock the page and move on.
+     */
+    if (set->max == NULL || pe->index > set->max->index) {
+        set->max = pe;
+        do_page_entry_lock(pe);
+        return false;
+    }
+    /*
+     * Try to acquire out-of-order lock; if busy, return busy so that we acquire
+     * locks in order.
+     */
+    return page_entry_trylock(pe);
+}
+
+static gint tb_page_addr_cmp(gconstpointer ap, gconstpointer bp, gpointer udata)
+{
+    tb_page_addr_t a = *(const tb_page_addr_t *)ap;
+    tb_page_addr_t b = *(const tb_page_addr_t *)bp;
+
+    if (a == b) {
+        return 0;
+    } else if (a < b) {
+        return -1;
+    }
+    return 1;
+}
+
+/*
+ * Lock a range of pages ([@start,@end[) as well as the pages of all
+ * intersecting TBs.
+ * Locking order: acquire locks in ascending order of page index.
+ */
+struct page_collection *
+page_collection_lock(tb_page_addr_t start, tb_page_addr_t end)
+{
+    struct page_collection *set = g_malloc(sizeof(*set));
+    tb_page_addr_t index;
+    PageDesc *pd;
+
+    start >>= TARGET_PAGE_BITS;
+    end >>= TARGET_PAGE_BITS;
+    g_assert(start <= end);
+
+    set->tree = g_tree_new_full(tb_page_addr_cmp, NULL, NULL,
+                                page_entry_destroy);
+    set->max = NULL;
+
+ retry:
+    g_tree_foreach(set->tree, page_entry_lock, NULL);
+
+    for (index = start; index <= end; index++) {
+        TranslationBlock *tb;
+        int n;
+
+        pd = page_find(index);
+        if (pd == NULL) {
+            continue;
+        }
+        PAGE_FOR_EACH_TB(pd, tb, n) {
+            if (page_trylock_add(set, tb->page_addr[0]) ||
+                (tb->page_addr[1] != -1 &&
+                 page_trylock_add(set, tb->page_addr[1]))) {
+                /* drop all locks, and reacquire in order */
+                g_tree_foreach(set->tree, page_entry_unlock, NULL);
+                goto retry;
+            }
+        }
+    }
+    return set;
+}
+
+void page_collection_unlock(struct page_collection *set)
+{
+    /* entries are unlocked and freed via page_entry_destroy */
+    g_tree_destroy(set->tree);
+    g_free(set);
+}
+
+#endif /* !CONFIG_USER_ONLY */
+
 #if defined(CONFIG_USER_ONLY)
 /* Currently it is not recommended to allocate big chunks of data in
    user mode. It will change when a dedicated libc will be used. */
@@ -810,6 +1088,7 @@ static TranslationBlock *tb_alloc(target_ulong pc)
     return tb;
 }

+/* call with @p->lock held */
 static inline void invalidate_page_bitmap(PageDesc *p)
 {
 #ifdef CONFIG_SOFTMMU
@@ -831,8 +1110,10 @@ static void page_flush_tb_1(int level, void **lp)
         PageDesc *pd = *lp;

         for (i = 0; i < V_L2_SIZE; ++i) {
+            page_lock(&pd[i]);
             pd[i].first_tb = (uintptr_t)NULL;
             invalidate_page_bitmap(pd + i);
+            page_unlock(&pd[i]);
         }
     } else {
         void **pp = *lp;
@@ -959,6 +1240,7 @@ static void tb_page_check(void)

 #endif /* CONFIG_USER_ONLY */

+/* call with @pd->lock held */
 static inline void tb_page_remove(PageDesc *pd, TranslationBlock *tb)
 {
     TranslationBlock *tb1;
@@ -1035,11 +1317,8 @@ static inline void tb_jmp_unlink(TranslationBlock *tb)
     }
 }

-/* invalidate one TB
- *
- * Called with tb_lock held.
- */
-void tb_phys_invalidate(TranslationBlock *tb, tb_page_addr_t page_addr)
+/* If @rm_from_page_list is set, call with the TB's pages' locks held */
+static void do_tb_phys_invalidate(TranslationBlock *tb, bool rm_from_page_list)
 {
     CPUState *cpu;
     PageDesc *p;
@@ -1059,15 +1338,15 @@ void tb_phys_invalidate(TranslationBlock *tb, tb_page_addr_t page_addr)
     }

     /* remove the TB from the page list */
-    if (tb->page_addr[0] != page_addr) {
+    if (rm_from_page_list) {
         p = page_find(tb->page_addr[0] >> TARGET_PAGE_BITS);
         tb_page_remove(p, tb);
         invalidate_page_bitmap(p);
-    }
-    if (tb->page_addr[1] != -1 && tb->page_addr[1] != page_addr) {
-        p = page_find(tb->page_addr[1] >> TARGET_PAGE_BITS);
-        tb_page_remove(p, tb);
-        invalidate_page_bitmap(p);
+        if (tb->page_addr[1] != -1) {
+            p = page_find(tb->page_addr[1] >> TARGET_PAGE_BITS);
+            tb_page_remove(p, tb);
+            invalidate_page_bitmap(p);
+        }
     }

     /* remove the TB from the hash list */
@@ -1089,7 +1368,28 @@ void tb_phys_invalidate(TranslationBlock *tb, tb_page_addr_t page_addr)
                tcg_ctx->tb_phys_invalidate_count + 1);
 }

+static void tb_phys_invalidate__locked(TranslationBlock *tb)
+{
+    do_tb_phys_invalidate(tb, true);
+}
+
+/* invalidate one TB
+ *
+ * Called with tb_lock held.
+ */
+void tb_phys_invalidate(TranslationBlock *tb, tb_page_addr_t page_addr)
+{
+    if (page_addr == -1) {
+        page_lock_tb(tb);
+        do_tb_phys_invalidate(tb, true);
+        page_unlock_tb(tb);
+    } else {
+        do_tb_phys_invalidate(tb, false);
+    }
+}
+
 #ifdef CONFIG_SOFTMMU
+/* call with @p->lock held */
 static void build_page_bitmap(PageDesc *p)
 {
     int n, tb_start, tb_end;
@@ -1119,11 +1419,11 @@ static void build_page_bitmap(PageDesc *p)
 /* add the tb in the target page and protect it if necessary
  *
  * Called with mmap_lock held for user-mode emulation.
+ * Called with @p->lock held.
  */
-static inline void tb_alloc_page(TranslationBlock *tb,
-                                 unsigned int n, tb_page_addr_t page_addr)
+static inline void tb_page_add(PageDesc *p, TranslationBlock *tb,
+                               unsigned int n, tb_page_addr_t page_addr)
 {
-    PageDesc *p;
 #ifndef CONFIG_USER_ONLY
     bool page_already_protected;
 #endif
@@ -1131,7 +1431,6 @@ static inline void tb_alloc_page(TranslationBlock *tb,
     assert_memory_lock();

     tb->page_addr[n] = page_addr;
-    p = page_find_alloc(page_addr >> TARGET_PAGE_BITS, 1);
     tb->page_next[n] = p->first_tb;
 #ifndef CONFIG_USER_ONLY
     page_already_protected = p->first_tb != (uintptr_t)NULL;
@@ -1183,17 +1482,38 @@ static inline void tb_alloc_page(TranslationBlock *tb,
 static void tb_link_page(TranslationBlock *tb, tb_page_addr_t phys_pc,
                          tb_page_addr_t phys_page2)
 {
+    PageDesc *p;
+    PageDesc *p2 = NULL;
     uint32_t h;

     assert_memory_lock();

-    /* add in the page list */
-    tb_alloc_page(tb, 0, phys_pc & TARGET_PAGE_MASK);
-    if (phys_page2 != -1) {
-        tb_alloc_page(tb, 1, phys_page2);
-    } else {
+    /*
+     * Add the TB to the page list.
+     * To avoid deadlock, acquire first the lock of the lower-addressed page.
+     */
+    p = page_find_alloc(phys_pc >> TARGET_PAGE_BITS, 1);
+    if (likely(phys_page2 == -1)) {
         tb->page_addr[1] = -1;
+        page_lock(p);
+        tb_page_add(p, tb, 0, phys_pc & TARGET_PAGE_MASK);
+    } else {
+        p2 = page_find_alloc(phys_page2 >> TARGET_PAGE_BITS, 1);
+        if (phys_pc < phys_page2) {
+            page_lock(p);
+            page_lock(p2);
+        } else {
+            page_lock(p2);
+            page_lock(p);
+        }
+        tb_page_add(p, tb, 0, phys_pc & TARGET_PAGE_MASK);
+        tb_page_add(p2, tb, 1, phys_page2);
+    }
+
+    if (p2) {
+        page_unlock(p2);
     }
+    page_unlock(p);

     /* add in the hash table */
     h = tb_hash_func(phys_pc, tb->pc, tb->flags, tb->cflags & CF_HASH_MASK,
@@ -1367,21 +1687,17 @@ TranslationBlock *tb_gen_code(CPUState *cpu,
 }

 /*
- * Invalidate all TBs which intersect with the target physical address range
- * [start;end[. NOTE: start and end must refer to the *same* physical page.
- * 'is_cpu_write_access' should be true if called from a real cpu write
- * access: the virtual CPU will exit the current TB if code is modified inside
- * this TB.
- *
- * Called with tb_lock/mmap_lock held for user-mode emulation
- * Called with tb_lock held for system-mode emulation
+ * Call with all @pages locked.
+ * @p must be non-NULL.
  */
-void tb_invalidate_phys_page_range(tb_page_addr_t start, tb_page_addr_t end,
-                                   int is_cpu_write_access)
+static void
+tb_invalidate_phys_page_range__locked(struct page_collection *pages,
                                      PageDesc *p, tb_page_addr_t start,
+                                      tb_page_addr_t end,
+                                      int is_cpu_write_access)
 {
     TranslationBlock *tb;
     tb_page_addr_t tb_start, tb_end;
-    PageDesc *p;
     int n;
 #ifdef TARGET_HAS_PRECISE_SMC
     CPUState *cpu = current_cpu;
@@ -1397,10 +1713,6 @@ void tb_invalidate_phys_page_range(tb_page_addr_t start, tb_page_addr_t end,
     assert_memory_lock();
     assert_tb_locked();

-    p = page_find(start >> TARGET_PAGE_BITS);
-    if (!p) {
-        return;
-    }
 #if defined(TARGET_HAS_PRECISE_SMC)
     if (cpu != NULL) {
         env = cpu->env_ptr;
@@ -1445,7 +1757,7 @@ void tb_invalidate_phys_page_range(tb_page_addr_t start, tb_page_addr_t end,
                                      &current_flags);
         }
 #endif /* TARGET_HAS_PRECISE_SMC */
-            tb_phys_invalidate(tb, -1);
+            tb_phys_invalidate__locked(tb);
         }
     }
 #if !defined(CONFIG_USER_ONLY)
     /* if no code remaining, no need to continue to use slow writes */
@@ -1457,6 +1769,7 @@ void tb_invalidate_phys_page_range(tb_page_addr_t start, tb_page_addr_t end,
 #endif
 #ifdef TARGET_HAS_PRECISE_SMC
     if (current_tb_modified) {
+        page_collection_unlock(pages);
         /* Force execution of one insn next time.  */
         cpu->cflags_next_tb = 1 | curr_cflags();
         cpu_loop_exit_noexc(cpu);
@@ -1466,6 +1779,35 @@ void tb_invalidate_phys_page_range(tb_page_addr_t start, tb_page_addr_t end,

 /*
  * Invalidate all TBs which intersect with the target physical address range
+ * [start;end[. NOTE: start and end must refer to the *same* physical page.
+ * 'is_cpu_write_access' should be true if called from a real cpu write
+ * access: the virtual CPU will exit the current TB if code is modified inside
+ * this TB.
+ *
+ * Called with tb_lock/mmap_lock held for user-mode emulation
+ * Called with tb_lock held for system-mode emulation
+ */
+void tb_invalidate_phys_page_range(tb_page_addr_t start, tb_page_addr_t end,
+                                   int is_cpu_write_access)
+{
+    struct page_collection *pages;
+    PageDesc *p;
+
+    assert_memory_lock();
+    assert_tb_locked();
+
+    p = page_find(start >> TARGET_PAGE_BITS);
+    if (p == NULL) {
+        return;
+    }
+    pages = page_collection_lock(start, end);
+    tb_invalidate_phys_page_range__locked(pages, p, start, end,
+                                          is_cpu_write_access);
+    page_collection_unlock(pages);
+}
+
+/*
+ * Invalidate all TBs which intersect with the target physical address range
  * [start;end[. NOTE: start and end may refer to *different* physical pages.
  * 'is_cpu_write_access' should be true if called from a real cpu write
  * access: the virtual CPU will exit the current TB if code is modified inside
@@ -1476,15 +1818,22 @@ void tb_invalidate_phys_page_range(tb_page_addr_t start, tb_page_addr_t end,
  */
 static void tb_invalidate_phys_range_1(tb_page_addr_t start, tb_page_addr_t end)
 {
+    struct page_collection *pages;
     tb_page_addr_t next;

+    pages = page_collection_lock(start, end);
     for (next = (start & TARGET_PAGE_MASK) + TARGET_PAGE_SIZE;
          start < end;
          start = next, next += TARGET_PAGE_SIZE) {
+        PageDesc *pd = page_find(start >> TARGET_PAGE_BITS);
         tb_page_addr_t bound = MIN(next, end);

-        tb_invalidate_phys_page_range(start, bound, 0);
+        if (pd == NULL) {
+            continue;
+        }
+        tb_invalidate_phys_page_range__locked(pages, pd, start, bound, 0);
     }
+    page_collection_unlock(pages);
 }

 #ifdef CONFIG_SOFTMMU
@@ -1510,6 +1859,7 @@ void tb_invalidate_phys_range(tb_page_addr_t start, tb_page_addr_t end)
  */
 void tb_invalidate_phys_page_fast(tb_page_addr_t start, int len)
 {
+    struct page_collection *pages;
     PageDesc *p;

 #if 0
@@ -1527,11 +1877,10 @@ void tb_invalidate_phys_page_fast(tb_page_addr_t start, int len)
     if (!p) {
         return;
     }
+
+    pages = page_collection_lock(start, start + len);
     if (!p->code_bitmap &&
         ++p->code_write_count >= SMC_BITMAP_USE_THRESHOLD) {
-        /* build code bitmap. FIXME: writes should be protected by
-         * tb_lock, reads by tb_lock or RCU.
-         */
         build_page_bitmap(p);
     }
     if (p->code_bitmap) {
@@ -1545,8 +1894,9 @@ void tb_invalidate_phys_page_fast(tb_page_addr_t start, int len)
         }
     } else {
     do_invalidate:
-        tb_invalidate_phys_page_range(start, start + len, 1);
+        tb_invalidate_phys_page_range__locked(pages, p, start, start + len, 1);
     }
+    page_collection_unlock(pages);
 }
 #else
 /* Called with mmap_lock held. If pc is not 0 then it indicates the
-- 
2.7.4
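The locking discipline that page_collection_lock and page_trylock_add
implement can be distilled into a small sketch (pthread-based,
hypothetical names, not QEMU code): locks are ranked by key; taking a
lock whose key is above the highest one already held is always safe,
otherwise we may only trylock, and on failure the caller drops
everything and reacquires in ascending key order.

    /* sketch: deadlock-avoiding acquisition of ranked locks */
    #include <pthread.h>
    #include <stdbool.h>

    struct ranked_lock { unsigned long key; pthread_mutex_t mu; };

    /*
     * Try to add @l to a set of held locks whose highest key is
     * *max_key. Returns false ("busy") if the caller must release
     * every held lock and retry, this time locking in key order.
     */
    static bool lock_add(struct ranked_lock *l, unsigned long *max_key,
                         bool any_held)
    {
        if (!any_held || l->key > *max_key) {
            pthread_mutex_lock(&l->mu);  /* in-order: safe to block */
            *max_key = l->key;
            return true;
        }
        /* out-of-order: blocking here could deadlock, so only try */
        return pthread_mutex_trylock(&l->mu) == 0;
    }

This mirrors the retry loop in page_collection_lock: a failed trylock
makes it unlock the whole tree of page_entry's and jump back to
"retry", where every page already in the tree is relocked in ascending
page-index order.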
From nobody Tue Oct 28 02:12:20 2025
From: "Emilio G. Cota"
To: qemu-devel@nongnu.org
Cc: Richard Henderson, Alex Bennée, Paolo Bonzini
Date: Thu, 5 Apr 2018 22:13:02 -0400
Message-Id: <1522980788-1252-12-git-send-email-cota@braap.org>
In-Reply-To: <1522980788-1252-1-git-send-email-cota@braap.org>
References: <1522980788-1252-1-git-send-email-cota@braap.org>
Subject: [Qemu-devel] [PATCH v2 11/17] translate-all: add page_locked assertions
Cota" To: qemu-devel@nongnu.org Date: Thu, 5 Apr 2018 22:13:02 -0400 Message-Id: <1522980788-1252-12-git-send-email-cota@braap.org> X-Mailer: git-send-email 2.7.4 In-Reply-To: <1522980788-1252-1-git-send-email-cota@braap.org> References: <1522980788-1252-1-git-send-email-cota@braap.org> MIME-Version: 1.0 Content-Type: text/plain; charset="utf-8" Content-Transfer-Encoding: quoted-printable X-detected-operating-system: by eggs.gnu.org: GNU/Linux 2.2.x-3.x [generic] [fuzzy] X-Received-From: 66.111.4.29 Subject: [Qemu-devel] [PATCH v2 11/17] translate-all: add page_locked assertions X-BeenThere: qemu-devel@nongnu.org X-Mailman-Version: 2.1.21 Precedence: list List-Id: List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Cc: Richard Henderson , =?UTF-8?q?Alex=20Benn=C3=A9e?= , Paolo Bonzini Errors-To: qemu-devel-bounces+importer=patchew.org@nongnu.org Sender: "Qemu-devel" X-ZohoMail: RSF_0 Z_629925259 SPT_0 This is only compiled under CONFIG_DEBUG_TCG to avoid bloating the binary. In user-mode, assert_page_locked is equivalent to assert_mmap_lock. Note: There are some tb_lock assertions left that will be removed by later patches. Suggested-by: Alex Benn=C3=A9e Signed-off-by: Emilio G. Cota --- accel/tcg/translate-all.c | 90 +++++++++++++++++++++++++++++++++++++++++++= ++-- 1 file changed, 87 insertions(+), 3 deletions(-) diff --git a/accel/tcg/translate-all.c b/accel/tcg/translate-all.c index 042378a..29bc1da 100644 --- a/accel/tcg/translate-all.c +++ b/accel/tcg/translate-all.c @@ -580,6 +580,9 @@ static inline PageDesc *page_find(tb_page_addr_t index) =20 /* In user-mode page locks aren't used; mmap_lock is enough */ #ifdef CONFIG_USER_ONLY + +#define assert_page_locked(pd) tcg_debug_assert(have_mmap_lock()) + static inline void page_lock(PageDesc *pd) { } =20 @@ -602,14 +605,91 @@ void page_collection_unlock(struct page_collection *s= et) { } #else /* !CONFIG_USER_ONLY */ =20 +#ifdef CONFIG_DEBUG_TCG + +struct page_lock_debug { + const PageDesc *pd; + QLIST_ENTRY(page_lock_debug) entry; +}; + +static __thread QLIST_HEAD(, page_lock_debug) page_lock_debug_head; + +static struct page_lock_debug *get_page_lock_debug(const PageDesc *pd) +{ + struct page_lock_debug *pld; + + QLIST_FOREACH(pld, &page_lock_debug_head, entry) { + if (pld->pd =3D=3D pd) { + return pld; + } + } + return NULL; +} + +static bool page_is_locked(const PageDesc *pd) +{ + struct page_lock_debug *pld; + + pld =3D get_page_lock_debug(pd); + return pld !=3D NULL; +} + +static void page_lock__debug(const PageDesc *pd) +{ + struct page_lock_debug *pld; + + g_assert(!page_is_locked(pd)); + pld =3D g_new(struct page_lock_debug, 1); + pld->pd =3D pd; + QLIST_INSERT_HEAD(&page_lock_debug_head, pld, entry); +} + +static void page_unlock__debug(const PageDesc *pd) +{ + struct page_lock_debug *pld; + + pld =3D get_page_lock_debug(pd); + g_assert(pld); + QLIST_REMOVE(pld, entry); + g_free(pld); +} + +static void +do_assert_page_locked(const PageDesc *pd, const char *file, int line) +{ + if (unlikely(!page_is_locked(pd))) { + error_report("assert_page_lock: PageDesc %p not locked @ %s:%d", + pd, file, line); + abort(); + } +} + +#define assert_page_locked(pd) do_assert_page_locked(pd, __FILE__, __LINE_= _) + +#else /* !CONFIG_DEBUG_TCG */ + +#define assert_page_locked(pd) + +static inline void page_lock__debug(const PageDesc *pd) +{ +} + +static inline void page_unlock__debug(const PageDesc *pd) +{ +} + +#endif /* CONFIG_DEBUG_TCG */ + static inline void page_lock(PageDesc *pd) { + page_lock__debug(pd); 
qemu_spin_lock(&pd->lock); } =20 static inline void page_unlock(PageDesc *pd) { qemu_spin_unlock(&pd->lock); + page_unlock__debug(pd); } =20 /* lock the page(s) of a TB in the correct acquisition order */ @@ -1091,6 +1171,7 @@ static TranslationBlock *tb_alloc(target_ulong pc) /* call with @p->lock held */ static inline void invalidate_page_bitmap(PageDesc *p) { + assert_page_locked(p); #ifdef CONFIG_SOFTMMU g_free(p->code_bitmap); p->code_bitmap =3D NULL; @@ -1247,6 +1328,7 @@ static inline void tb_page_remove(PageDesc *pd, Trans= lationBlock *tb) uintptr_t *pprev; unsigned int n1; =20 + assert_page_locked(pd); pprev =3D &pd->first_tb; PAGE_FOR_EACH_TB(pd, tb1, n1) { if (tb1 =3D=3D tb) { @@ -1395,6 +1477,7 @@ static void build_page_bitmap(PageDesc *p) int n, tb_start, tb_end; TranslationBlock *tb; =20 + assert_page_locked(p); p->code_bitmap =3D bitmap_new(TARGET_PAGE_SIZE); =20 PAGE_FOR_EACH_TB(p, tb, n) { @@ -1428,7 +1511,7 @@ static inline void tb_page_add(PageDesc *p, Translati= onBlock *tb, bool page_already_protected; #endif =20 - assert_memory_lock(); + assert_page_locked(p); =20 tb->page_addr[n] =3D page_addr; tb->page_next[n] =3D p->first_tb; @@ -1710,8 +1793,7 @@ tb_invalidate_phys_page_range__locked(struct page_col= lection *pages, uint32_t current_flags =3D 0; #endif /* TARGET_HAS_PRECISE_SMC */ =20 - assert_memory_lock(); - assert_tb_locked(); + assert_page_locked(p); =20 #if defined(TARGET_HAS_PRECISE_SMC) if (cpu !=3D NULL) { @@ -1723,6 +1805,7 @@ tb_invalidate_phys_page_range__locked(struct page_col= lection *pages, /* XXX: see if in some cases it could be faster to invalidate all the code */ PAGE_FOR_EACH_TB(p, tb, n) { + assert_page_locked(p); /* NOTE: this is subtle as a TB may span two physical pages */ if (n =3D=3D 0) { /* NOTE: tb_end may be after the end of the page, but @@ -1879,6 +1962,7 @@ void tb_invalidate_phys_page_fast(tb_page_addr_t star= t, int len) } =20 pages =3D page_collection_lock(start, start + len); + assert_page_locked(p); if (!p->code_bitmap && ++p->code_write_count >=3D SMC_BITMAP_USE_THRESHOLD) { build_page_bitmap(p); --=20 2.7.4 From nobody Tue Oct 28 02:12:20 2025 Delivered-To: importer@patchew.org Received-SPF: pass (zoho.com: domain of gnu.org designates 208.118.235.17 as permitted sender) client-ip=208.118.235.17; envelope-from=qemu-devel-bounces+importer=patchew.org@nongnu.org; helo=lists.gnu.org; Authentication-Results: mx.zohomail.com; spf=pass (zoho.com: domain of gnu.org designates 208.118.235.17 as permitted sender) smtp.mailfrom=qemu-devel-bounces+importer=patchew.org@nongnu.org Return-Path: Received: from lists.gnu.org (208.118.235.17 [208.118.235.17]) by mx.zohomail.com with SMTPS id 1522981318474357.71575287434086; Thu, 5 Apr 2018 19:21:58 -0700 (PDT) Received: from localhost ([::1]:44031 helo=lists.gnu.org) by lists.gnu.org with esmtp (Exim 4.71) (envelope-from ) id 1f4H0g-0000tG-6T for importer@patchew.org; Thu, 05 Apr 2018 22:21:50 -0400 Received: from eggs.gnu.org ([2001:4830:134:3::10]:58462) by lists.gnu.org with esmtp (Exim 4.71) (envelope-from ) id 1f4GsM-0002GY-IR for qemu-devel@nongnu.org; Thu, 05 Apr 2018 22:13:16 -0400 Received: from Debian-exim by eggs.gnu.org with spam-scanned (Exim 4.71) (envelope-from ) id 1f4GsK-0003WA-Ar for qemu-devel@nongnu.org; Thu, 05 Apr 2018 22:13:14 -0400 Received: from out5-smtp.messagingengine.com ([66.111.4.29]:51093) by eggs.gnu.org with esmtps (TLS1.0:DHE_RSA_AES_256_CBC_SHA1:32) (Exim 4.71) (envelope-from ) id 1f4GsK-0003Vf-3h for qemu-devel@nongnu.org; Thu, 05 Apr 2018 22:13:12 -0400 
From: "Emilio G. Cota" <cota@braap.org>
To: qemu-devel@nongnu.org
Cc: Richard Henderson, Alex Bennée, Paolo Bonzini
Date: Thu, 5 Apr 2018 22:13:03 -0400
Message-Id: <1522980788-1252-13-git-send-email-cota@braap.org>
In-Reply-To: <1522980788-1252-1-git-send-email-cota@braap.org>
Subject: [Qemu-devel] [PATCH v2 12/17] translate-all: add page_collection assertions

The appended adds assertions to make sure we do not longjmp with page
locks held. Some notes:

- user-mode has nothing to check, since page_locks are !user-mode only.

- The checks only apply to page collections, since these have relatively
  complex callers.

- Some simple page_lock/unlock callers have been left unchecked --
  namely page_lock_tb, tb_phys_invalidate and tb_link_page.

Reviewed-by: Alex Bennée
Signed-off-by: Emilio G. Cota
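
[The mechanism in the diff below is a thread-local flag that tracks
whether this thread currently holds a page collection, checked at points
that must never be reached with the locks held (e.g. right after
longjmp'ing back into the exec loop). A minimal standalone sketch of the
same pattern -- the names here are illustrative, not the ones the patch
uses:]

    #include <assert.h>
    #include <stdbool.h>

    /* One flag per thread: set while this thread holds a collection. */
    static __thread bool collection_locked;

    static void collection_lock(void)
    {
        /* ... acquire the per-page locks ... */
        collection_locked = true;
    }

    static void collection_unlock(void)
    {
        /* ... drop the per-page locks ... */
        collection_locked = false;
    }

    /* Call at points that must be reached with a known lock state. */
    static void assert_collection_locked(bool val)
    {
        assert(collection_locked == val);
    }

    int main(void)
    {
        collection_lock();
        /* ... operate on the locked pages ... */
        collection_unlock();
        /* e.g. after a longjmp back into the main loop: */
        assert_collection_locked(false);
        return 0;
    }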
---
 include/exec/exec-all.h   |  8 ++++++++
 accel/tcg/cpu-exec.c      |  1 +
 accel/tcg/translate-all.c | 20 ++++++++++++++++++++
 3 files changed, 29 insertions(+)

diff --git a/include/exec/exec-all.h b/include/exec/exec-all.h
index aeaa127..7911e69 100644
--- a/include/exec/exec-all.h
+++ b/include/exec/exec-all.h
@@ -431,6 +431,14 @@ void tb_lock(void);
 void tb_unlock(void);
 void tb_lock_reset(void);
 
+#if !defined(CONFIG_USER_ONLY) && defined(CONFIG_DEBUG_TCG)
+void assert_page_collection_locked(bool val);
+#else
+static inline void assert_page_collection_locked(bool val)
+{
+}
+#endif
+
 #if !defined(CONFIG_USER_ONLY)
 
 struct MemoryRegion *iotlb_to_region(CPUState *cpu,
diff --git a/accel/tcg/cpu-exec.c b/accel/tcg/cpu-exec.c
index 778801a..6a3a21d 100644
--- a/accel/tcg/cpu-exec.c
+++ b/accel/tcg/cpu-exec.c
@@ -271,6 +271,7 @@ void cpu_exec_step_atomic(CPUState *cpu)
         tcg_debug_assert(!have_mmap_lock());
 #endif
         tb_lock_reset();
+        assert_page_collection_locked(false);
     }
 
     if (in_exclusive_region) {
diff --git a/accel/tcg/translate-all.c b/accel/tcg/translate-all.c
index 29bc1da..f8862f6 100644
--- a/accel/tcg/translate-all.c
+++ b/accel/tcg/translate-all.c
@@ -666,6 +666,18 @@ do_assert_page_locked(const PageDesc *pd, const char *file, int line)
 
 #define assert_page_locked(pd) do_assert_page_locked(pd, __FILE__, __LINE__)
 
+static __thread bool page_collection_locked;
+
+void assert_page_collection_locked(bool val)
+{
+    tcg_debug_assert(page_collection_locked == val);
+}
+
+static inline void set_page_collection_locked(bool val)
+{
+    page_collection_locked = val;
+}
+
 #else /* !CONFIG_DEBUG_TCG */
 
 #define assert_page_locked(pd)
@@ -678,6 +690,10 @@ static inline void page_unlock__debug(const PageDesc *pd)
 {
 }
 
+static inline void set_page_collection_locked(bool val)
+{
+}
+
 #endif /* CONFIG_DEBUG_TCG */
 
 static inline void page_lock(PageDesc *pd)
@@ -754,6 +770,7 @@ static void do_page_entry_lock(struct page_entry *pe)
     page_lock(pe->pd);
     g_assert(!pe->locked);
     pe->locked = true;
+    set_page_collection_locked(true);
 }
 
 static gboolean page_entry_lock(gpointer key, gpointer value, gpointer data)
@@ -846,6 +863,7 @@ page_collection_lock(tb_page_addr_t start, tb_page_addr_t end)
     set->tree = g_tree_new_full(tb_page_addr_cmp, NULL, NULL,
                                 page_entry_destroy);
     set->max = NULL;
+    assert_page_collection_locked(false);
 
 retry:
     g_tree_foreach(set->tree, page_entry_lock, NULL);
@@ -864,6 +882,7 @@ page_collection_lock(tb_page_addr_t start, tb_page_addr_t end)
                      page_trylock_add(set, tb->page_addr[1]))) {
             /* drop all locks, and reacquire in order */
             g_tree_foreach(set->tree, page_entry_unlock, NULL);
+            set_page_collection_locked(false);
             goto retry;
         }
     }
@@ -876,6 +895,7 @@ void page_collection_unlock(struct page_collection *set)
     /* entries are unlocked and freed via page_entry_destroy */
     g_tree_destroy(set->tree);
     g_free(set);
+    set_page_collection_locked(false);
 }
 
 #endif /* !CONFIG_USER_ONLY */
--
2.7.4

From nobody Tue Oct 28 02:12:20 2025
From: "Emilio G. Cota" <cota@braap.org>
To: qemu-devel@nongnu.org
Cc: Richard Henderson, Alex Bennée, Paolo Bonzini
Date: Thu, 5 Apr 2018 22:13:04 -0400
Message-Id: <1522980788-1252-14-git-send-email-cota@braap.org>
In-Reply-To: <1522980788-1252-1-git-send-email-cota@braap.org>
Subject: [Qemu-devel] [PATCH v2 13/17] translate-all: discard TB when tb_link_page returns an existing matching TB

Use the recently-gained QHT feature of returning the matching TB if it
already exists. This allows us to get rid of the lookup we perform
right after acquiring tb_lock.
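
[The shape this takes in tb_link_page() below is insert-or-get: build the
object first, try to publish it, and if a concurrent thread won the race,
undo the local bookkeeping and adopt the winner. A sketch of the caller
side -- this assumes the 'existing' out-parameter that qht_insert() gains
earlier in this series; remove_local_refs() is an illustrative
placeholder, not a QEMU function:]

    void *existing = NULL;

    qht_insert(ht, tb, hash, &existing);
    if (unlikely(existing)) {
        /* Lost the insertion race: undo our bookkeeping and
         * adopt the TB that is already published. */
        remove_local_refs(tb);    /* illustrative placeholder */
        tb = existing;
    }
    /* From here on, tb is the only TB for this block of guest code. */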
Suggested-by: Richard Henderson Signed-off-by: Emilio G. Cota Reviewed-by: Richard Henderson --- docs/devel/multi-thread-tcg.txt | 3 +++ accel/tcg/cpu-exec.c | 14 ++---------- accel/tcg/translate-all.c | 50 +++++++++++++++++++++++++++++++++----= ---- 3 files changed, 46 insertions(+), 21 deletions(-) diff --git a/docs/devel/multi-thread-tcg.txt b/docs/devel/multi-thread-tcg.= txt index faf8918..faf09c6 100644 --- a/docs/devel/multi-thread-tcg.txt +++ b/docs/devel/multi-thread-tcg.txt @@ -140,6 +140,9 @@ to atomically insert new elements. The lookup caches are updated atomically and the lookup hash uses QHT which is designed for concurrent safe lookup. =20 +Parallel code generation is supported. QHT is used at insertion time +as the synchronization point across threads, thereby ensuring that we only +keep track of a single TranslationBlock for each guest code block. =20 Memory maps and TLBs -------------------- diff --git a/accel/tcg/cpu-exec.c b/accel/tcg/cpu-exec.c index 6a3a21d..c79c43b 100644 --- a/accel/tcg/cpu-exec.c +++ b/accel/tcg/cpu-exec.c @@ -243,10 +243,7 @@ void cpu_exec_step_atomic(CPUState *cpu) if (tb =3D=3D NULL) { mmap_lock(); tb_lock(); - tb =3D tb_htable_lookup(cpu, pc, cs_base, flags, cf_mask); - if (likely(tb =3D=3D NULL)) { - tb =3D tb_gen_code(cpu, pc, cs_base, flags, cflags); - } + tb =3D tb_gen_code(cpu, pc, cs_base, flags, cflags); tb_unlock(); mmap_unlock(); } @@ -396,14 +393,7 @@ static inline TranslationBlock *tb_find(CPUState *cpu, tb_lock(); acquired_tb_lock =3D true; =20 - /* There's a chance that our desired tb has been translated while - * taking the locks so we check again inside the lock. - */ - tb =3D tb_htable_lookup(cpu, pc, cs_base, flags, cf_mask); - if (likely(tb =3D=3D NULL)) { - /* if no translated code available, then translate it now */ - tb =3D tb_gen_code(cpu, pc, cs_base, flags, cf_mask); - } + tb =3D tb_gen_code(cpu, pc, cs_base, flags, cf_mask); =20 mmap_unlock(); /* We add the TB in the virtual pc hash table for the fast lookup = */ diff --git a/accel/tcg/translate-all.c b/accel/tcg/translate-all.c index f8862f6..aabde7e 100644 --- a/accel/tcg/translate-all.c +++ b/accel/tcg/translate-all.c @@ -1581,12 +1581,19 @@ static inline void tb_page_add(PageDesc *p, Transla= tionBlock *tb, * (-1) to indicate that only one page contains the TB. * * Called with mmap_lock held for user-mode emulation. + * + * Returns a pointer @tb, or a pointer to an existing TB that matches @tb. + * Note that in !user-mode, another thread might have already added a TB + * for the same block of guest code that @tb corresponds to. In that case, + * the caller should discard the original @tb, and use instead the returne= d TB. */ -static void tb_link_page(TranslationBlock *tb, tb_page_addr_t phys_pc, - tb_page_addr_t phys_page2) +static TranslationBlock * +tb_link_page(TranslationBlock *tb, tb_page_addr_t phys_pc, + tb_page_addr_t phys_page2) { PageDesc *p; PageDesc *p2 =3D NULL; + void *existing_tb =3D NULL; uint32_t h; =20 assert_memory_lock(); @@ -1594,6 +1601,11 @@ static void tb_link_page(TranslationBlock *tb, tb_pa= ge_addr_t phys_pc, /* * Add the TB to the page list. * To avoid deadlock, acquire first the lock of the lower-addressed pa= ge. + * We keep the locks held until after inserting the TB in the hash tab= le, + * so that if the insertion fails we know for sure that the TBs are st= ill + * in the page descriptors. + * Note that inserting into the hash table first isn't an option, since + * we can only insert TBs that are fully initialized. 
*/ p =3D page_find_alloc(phys_pc >> TARGET_PAGE_BITS, 1); if (likely(phys_page2 =3D=3D -1)) { @@ -1613,21 +1625,33 @@ static void tb_link_page(TranslationBlock *tb, tb_p= age_addr_t phys_pc, tb_page_add(p2, tb, 1, phys_page2); } =20 + /* add in the hash table */ + h =3D tb_hash_func(phys_pc, tb->pc, tb->flags, tb->cflags & CF_HASH_MA= SK, + tb->trace_vcpu_dstate); + qht_insert(&tb_ctx.htable, tb, h, &existing_tb); + + /* remove TB from the page(s) if we couldn't insert it */ + if (unlikely(existing_tb)) { + tb_page_remove(p, tb); + invalidate_page_bitmap(p); + if (p2) { + tb_page_remove(p2, tb); + invalidate_page_bitmap(p2); + } + tb =3D existing_tb; + } + if (p2) { page_unlock(p2); } page_unlock(p); =20 - /* add in the hash table */ - h =3D tb_hash_func(phys_pc, tb->pc, tb->flags, tb->cflags & CF_HASH_MA= SK, - tb->trace_vcpu_dstate); - qht_insert(&tb_ctx.htable, tb, h, NULL); - #ifdef CONFIG_USER_ONLY if (DEBUG_TB_CHECK_GATE) { tb_page_check(); } #endif + return tb; } =20 /* Called with mmap_lock held for user mode emulation. */ @@ -1636,7 +1660,7 @@ TranslationBlock *tb_gen_code(CPUState *cpu, uint32_t flags, int cflags) { CPUArchState *env =3D cpu->env_ptr; - TranslationBlock *tb; + TranslationBlock *tb, *existing_tb; tb_page_addr_t phys_pc, phys_page2; target_ulong virt_page2; tcg_insn_unit *gen_code_buf; @@ -1784,7 +1808,15 @@ TranslationBlock *tb_gen_code(CPUState *cpu, * memory barrier is required before tb_link_page() makes the TB visib= le * through the physical hash table and physical page list. */ - tb_link_page(tb, phys_pc, phys_page2); + existing_tb =3D tb_link_page(tb, phys_pc, phys_page2); + /* if the TB already exists, discard what we just translated */ + if (unlikely(existing_tb !=3D tb)) { + uintptr_t orig_aligned =3D (uintptr_t)gen_code_buf; + + orig_aligned -=3D ROUND_UP(sizeof(*tb), qemu_icache_linesize); + atomic_set(&tcg_ctx->code_gen_ptr, orig_aligned); + return existing_tb; + } tcg_tb_insert(tb); return tb; } --=20 2.7.4 From nobody Tue Oct 28 02:12:20 2025 Delivered-To: importer@patchew.org Received-SPF: pass (zoho.com: domain of gnu.org designates 208.118.235.17 as permitted sender) client-ip=208.118.235.17; envelope-from=qemu-devel-bounces+importer=patchew.org@nongnu.org; helo=lists.gnu.org; Authentication-Results: mx.zohomail.com; spf=pass (zoho.com: domain of gnu.org designates 208.118.235.17 as permitted sender) smtp.mailfrom=qemu-devel-bounces+importer=patchew.org@nongnu.org Return-Path: Received: from lists.gnu.org (lists.gnu.org [208.118.235.17]) by mx.zohomail.com with SMTPS id 1522981318442629.1289400991451; Thu, 5 Apr 2018 19:21:58 -0700 (PDT) Received: from localhost ([::1]:44033 helo=lists.gnu.org) by lists.gnu.org with esmtp (Exim 4.71) (envelope-from ) id 1f4H0k-0000w6-EW for importer@patchew.org; Thu, 05 Apr 2018 22:21:54 -0400 Received: from eggs.gnu.org ([2001:4830:134:3::10]:58540) by lists.gnu.org with esmtp (Exim 4.71) (envelope-from ) id 1f4GsO-0002I1-VE for qemu-devel@nongnu.org; Thu, 05 Apr 2018 22:13:19 -0400 Received: from Debian-exim by eggs.gnu.org with spam-scanned (Exim 4.71) (envelope-from ) id 1f4GsK-0003Wh-Rk for qemu-devel@nongnu.org; Thu, 05 Apr 2018 22:13:16 -0400 Received: from out5-smtp.messagingengine.com ([66.111.4.29]:39197) by eggs.gnu.org with esmtps (TLS1.0:DHE_RSA_AES_256_CBC_SHA1:32) (Exim 4.71) (envelope-from ) id 1f4GsK-0003W4-Kf for qemu-devel@nongnu.org; Thu, 05 Apr 2018 22:13:12 -0400 Received: from compute4.internal (compute4.nyi.internal [10.202.2.44]) by mailout.nyi.internal (Postfix) with ESMTP id 35E3C20B71; 

From: "Emilio G. Cota" <cota@braap.org>
To: qemu-devel@nongnu.org
Cc: Richard Henderson, Alex Bennée, Paolo Bonzini
Date: Thu, 5 Apr 2018 22:13:05 -0400
Message-Id: <1522980788-1252-15-git-send-email-cota@braap.org>
In-Reply-To: <1522980788-1252-1-git-send-email-cota@braap.org>
Subject: [Qemu-devel] [PATCH v2 14/17] translate-all: protect TB jumps with a per-destination-TB lock

This applies to both user-mode and !user-mode emulation.

Instead of relying on a global lock, protect the list of incoming
jumps with tb->jmp_lock. This lock also protects tb->cflags,
so update all tb->cflags readers outside tb->jmp_lock to use
atomic reads via tb_cflags().

In order to find the destination TB (and therefore its jmp_lock)
from the origin TB, we introduce tb->jmp_dest[].

I considered not using a linked list of jumps, which simplifies
code and makes the struct smaller. However, it unnecessarily increases
memory usage, which results in a performance decrease. See for
instance these numbers booting+shutting down debian-arm:

                      Time (s)  Rel. err (%)  Abs. err (s)  Rel. slowdown (%)
 ------------------------------------------------------------------------------
 before                  20.88          0.74      0.154512           0.
 after                   20.81          0.38      0.079078          -0.33524904
 GTree                   21.02          0.28      0.058856           0.67049808
 GHashTable + xxhash     21.63          1.08      0.233604           3.5919540

Using a hash table or a binary tree to keep track of the jumps
doesn't really pay off, not only due to the increased memory usage,
but also because most TBs have only 0 or 1 jumps to them.
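
[The locking scheme of the diff below in one picture: each TB owns a
spinlock protecting the list of jumps *into* it, while each origin TB
records its (at most two) destinations in jmp_dest[] so the right lock
can be found from either end. A condensed sketch of the insertion path,
following the patch's tb_add_jump() with error paths trimmed -- not a
drop-in replacement:]

    qemu_spin_lock(&tb_next->jmp_lock);

    /* do not chain to a destination that is being invalidated */
    if (!(tb_next->cflags & CF_INVALID) &&
        /* claim the outgoing slot of 'tb' exactly once */
        !atomic_cmpxchg(&tb->jmp_dest[n], (uintptr_t)NULL,
                        (uintptr_t)tb_next)) {
        /* patch the native jump and link into the incoming list;
         * the LSB of the tagged pointer encodes which of the two
         * outgoing slots of 'tb' this list entry corresponds to. */
        tb_set_jmp_target(tb, n, (uintptr_t)tb_next->tc.ptr);
        tb->jmp_list_next[n] = tb_next->jmp_list_head;
        tb_next->jmp_list_head = (uintptr_t)tb | n;
    }

    qemu_spin_unlock(&tb_next->jmp_lock);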
The maximum number of jumps when booting debian-arm that I measured
is 35, but as we can see in the histogram below a TB with that many
incoming jumps is extremely rare; the average TB has 0.80 incoming
jumps.

n_jumps: 379208; avg jumps/tb: 0.801099
dist: [0.0,1.0)|▄█▁▁▁▁▁▁▁▁▁▁▁▁ ▁▁▁▁▁▁ ▁▁▁ ▁▁▁ ▁|[34.0,35.0]

Signed-off-by: Emilio G. Cota
---
 docs/devel/multi-thread-tcg.txt |   6 +-
 include/exec/exec-all.h         |  35 +++++++-----
 accel/tcg/cpu-exec.c            |  41 +++++++-----
 accel/tcg/translate-all.c       | 118 ++++++++++++++++++++---------------
 4 files changed, 124 insertions(+), 76 deletions(-)

diff --git a/docs/devel/multi-thread-tcg.txt b/docs/devel/multi-thread-tcg.txt
index faf09c6..df83445 100644
--- a/docs/devel/multi-thread-tcg.txt
+++ b/docs/devel/multi-thread-tcg.txt
@@ -131,8 +131,10 @@ DESIGN REQUIREMENT: Safely handle invalidation of TBs
 
 The direct jump themselves are updated atomically by the TCG
 tb_set_jmp_target() code. Modification to the linked lists that allow
-searching for linked pages are done under the protect of the
-tb_lock().
+searching for linked pages are done under the protection of tb->jmp_lock,
+where tb is the destination block of a jump. Each origin block keeps a
+pointer to its destinations so that the appropriate lock can be acquired before
+iterating over a jump list.
 
 The global page table is a lockless radix tree; cmpxchg is used to
 atomically insert new elements.
diff --git a/include/exec/exec-all.h b/include/exec/exec-all.h
index 7911e69..f8adeb8 100644
--- a/include/exec/exec-all.h
+++ b/include/exec/exec-all.h
@@ -341,7 +341,7 @@ struct TranslationBlock {
 #define CF_LAST_IO     0x00008000 /* Last insn may be an IO access.  */
 #define CF_NOCACHE     0x00010000 /* To be freed after execution */
 #define CF_USE_ICOUNT  0x00020000
-#define CF_INVALID     0x00040000 /* TB is stale. Setters need tb_lock */
+#define CF_INVALID     0x00040000 /* TB is stale. Set with @jmp_lock held */
 #define CF_PARALLEL    0x00080000 /* Generate code for a parallel context */
 /* cflags' mask for hashing/comparison */
 #define CF_HASH_MASK   \
@@ -360,6 +360,9 @@ struct TranslationBlock {
     uintptr_t page_next[2];
     tb_page_addr_t page_addr[2];
 
+    /* jmp_lock placed here to fill a 4-byte hole. Its documentation is below */
+    QemuSpin jmp_lock;
+
     /* The following data are used to directly call another TB from
      * the code of this one. This can be done either by emitting direct or
      * indirect native jump instructions. These jumps are reset so that the TB
@@ -371,20 +374,26 @@ struct TranslationBlock {
 #define TB_JMP_RESET_OFFSET_INVALID 0xffff /* indicates no jump generated */
     uintptr_t jmp_target_arg[2];  /* target address or offset */
 
-    /* Each TB has an associated circular list of TBs jumping to this one.
-     * jmp_list_first points to the first TB jumping to this one.
-     * jmp_list_next is used to point to the next TB in a list.
-     * Since each TB can have two jumps, it can participate in two lists.
-     * jmp_list_first and jmp_list_next are 4-byte aligned pointers to a
-     * TranslationBlock structure, but the two least significant bits of
-     * them are used to encode which data field of the pointed TB should
-     * be used to traverse the list further from that TB:
-     * 0 => jmp_list_next[0], 1 => jmp_list_next[1], 2 => jmp_list_first.
- * In other words, 0/1 tells which jump is used in the pointed TB, - * and 2 means that this is a pointer back to the target TB of this li= st. + /* + * Each TB has a NULL-terminated list (jmp_list_head) of incoming jump= s. + * Each TB can have two outgoing jumps, and therefore can participate + * in two lists. The list entries are kept in jmp_list_next[2]. The le= ast + * significant bit (LSB) of the pointers in these lists is used to enc= ode + * which of the two list entries is to be used in the pointed TB. + * + * List traversals are protected by jmp_lock. The destination TB of ea= ch + * outgoing jump is kept in jmp_dest[] so that the appropriate jmp_lock + * can be acquired from any origin TB. + * + * jmp_dest[] are tagged pointers as well. The LSB is set when the TB = is + * being invalidated, so that no further outgoing jumps from it can be= set. + * + * jmp_lock also protects the CF_INVALID cflag; a jump must not be cha= ined + * to a destination TB that has CF_INVALID set. */ + uintptr_t jmp_list_head; uintptr_t jmp_list_next[2]; - uintptr_t jmp_list_first; + uintptr_t jmp_dest[2]; }; =20 extern bool parallel_cpus; diff --git a/accel/tcg/cpu-exec.c b/accel/tcg/cpu-exec.c index c79c43b..178452a 100644 --- a/accel/tcg/cpu-exec.c +++ b/accel/tcg/cpu-exec.c @@ -350,28 +350,43 @@ void tb_set_jmp_target(TranslationBlock *tb, int n, u= intptr_t addr) } } =20 -/* Called with tb_lock held. */ static inline void tb_add_jump(TranslationBlock *tb, int n, TranslationBlock *tb_next) { + uintptr_t old; + assert(n < ARRAY_SIZE(tb->jmp_list_next)); - if (tb->jmp_list_next[n]) { - /* Another thread has already done this while we were - * outside of the lock; nothing to do in this case */ - return; + qemu_spin_lock(&tb_next->jmp_lock); + + /* make sure the destination TB is valid */ + if (tb_next->cflags & CF_INVALID) { + goto out_unlock_next; + } + /* Atomically claim the jump destination slot only if it was NULL */ + old =3D atomic_cmpxchg(&tb->jmp_dest[n], (uintptr_t)NULL, (uintptr_t)t= b_next); + if (old) { + goto out_unlock_next; } + + /* patch the native jump address */ + tb_set_jmp_target(tb, n, (uintptr_t)tb_next->tc.ptr); + + /* add in TB jmp list */ + tb->jmp_list_next[n] =3D tb_next->jmp_list_head; + tb_next->jmp_list_head =3D (uintptr_t)tb | n; + + qemu_spin_unlock(&tb_next->jmp_lock); + qemu_log_mask_and_addr(CPU_LOG_EXEC, tb->pc, "Linking TBs %p [" TARGET_FMT_lx "] index %d -> %p [" TARGET_FMT_lx "]\n", tb->tc.ptr, tb->pc, n, tb_next->tc.ptr, tb_next->pc); + return; =20 - /* patch the native jump address */ - tb_set_jmp_target(tb, n, (uintptr_t)tb_next->tc.ptr); - - /* add in TB jmp circular list */ - tb->jmp_list_next[n] =3D tb_next->jmp_list_first; - tb_next->jmp_list_first =3D (uintptr_t)tb | n; + out_unlock_next: + qemu_spin_unlock(&tb_next->jmp_lock); + return; } =20 static inline TranslationBlock *tb_find(CPUState *cpu, @@ -414,9 +429,7 @@ static inline TranslationBlock *tb_find(CPUState *cpu, tb_lock(); acquired_tb_lock =3D true; } - if (!(tb->cflags & CF_INVALID)) { - tb_add_jump(last_tb, tb_exit, tb); - } + tb_add_jump(last_tb, tb_exit, tb); } if (acquired_tb_lock) { tb_unlock(); diff --git a/accel/tcg/translate-all.c b/accel/tcg/translate-all.c index aabde7e..486c9df 100644 --- a/accel/tcg/translate-all.c +++ b/accel/tcg/translate-all.c @@ -170,6 +170,9 @@ struct page_collection { #define PAGE_FOR_EACH_TB(pagedesc, tb, n) \ TB_FOR_EACH_TAGGED((pagedesc)->first_tb, tb, n, page_next) =20 +#define TB_FOR_EACH_JMP(head_tb, tb, n) \ + 
TB_FOR_EACH_TAGGED((head_tb)->jmp_list_head, tb, n, jmp_list_next) + /* In system mode we want L1_MAP to be based on ram offsets, while in user mode we want it to be based on virtual addresses. */ #if !defined(CONFIG_USER_ONLY) @@ -387,7 +390,7 @@ static int cpu_restore_state_from_tb(CPUState *cpu, Tra= nslationBlock *tb, return -1; =20 found: - if (tb->cflags & CF_USE_ICOUNT) { + if (tb_cflags(tb) & CF_USE_ICOUNT) { assert(use_icount); /* Reset the cycle counter to the start of the block. */ cpu->icount_decr.u16.low +=3D num_insns; @@ -432,7 +435,7 @@ bool cpu_restore_state(CPUState *cpu, uintptr_t host_pc) tb =3D tcg_tb_lookup(host_pc); if (tb) { cpu_restore_state_from_tb(cpu, tb, host_pc); - if (tb->cflags & CF_NOCACHE) { + if (tb_cflags(tb) & CF_NOCACHE) { /* one-shot translation, invalidate it immediately */ tb_phys_invalidate(tb, -1); tcg_tb_remove(tb); @@ -1360,34 +1363,53 @@ static inline void tb_page_remove(PageDesc *pd, Tra= nslationBlock *tb) g_assert_not_reached(); } =20 -/* remove the TB from a list of TBs jumping to the n-th jump target of the= TB */ -static inline void tb_remove_from_jmp_list(TranslationBlock *tb, int n) +/* remove @orig from its @n_orig-th jump list */ +static inline void tb_remove_from_jmp_list(TranslationBlock *orig, int n_o= rig) { - TranslationBlock *tb1; - uintptr_t *ptb, ntb; - unsigned int n1; + uintptr_t ptr, ptr_locked; + TranslationBlock *dest; + TranslationBlock *tb; + uintptr_t *pprev; + int n; =20 - ptb =3D &tb->jmp_list_next[n]; - if (*ptb) { - /* find tb(n) in circular list */ - for (;;) { - ntb =3D *ptb; - n1 =3D ntb & 3; - tb1 =3D (TranslationBlock *)(ntb & ~3); - if (n1 =3D=3D n && tb1 =3D=3D tb) { - break; - } - if (n1 =3D=3D 2) { - ptb =3D &tb1->jmp_list_first; - } else { - ptb =3D &tb1->jmp_list_next[n1]; - } - } - /* now we can suppress tb(n) from the list */ - *ptb =3D tb->jmp_list_next[n]; + /* mark the LSB of jmp_dest[] so that no further jumps can be inserted= */ + ptr =3D atomic_or_fetch(&orig->jmp_dest[n_orig], 1); + dest =3D (TranslationBlock *)(ptr & ~1); + if (dest =3D=3D NULL) { + return; + } =20 - tb->jmp_list_next[n] =3D (uintptr_t)NULL; + qemu_spin_lock(&dest->jmp_lock); + /* + * While acquiring the lock, the jump might have been removed if the + * destination TB was invalidated; check again. + */ + ptr_locked =3D atomic_read(&orig->jmp_dest[n_orig]); + if (ptr_locked !=3D ptr) { + qemu_spin_unlock(&dest->jmp_lock); + /* + * The only possibility is that the jump was unlinked via + * tb_jump_unlink(dest). Seeing here another destination would be = a bug, + * because we set the LSB above. + */ + g_assert(ptr_locked =3D=3D 1 && dest->cflags & CF_INVALID); + return; } + /* + * We first acquired the lock, and since the destination pointer match= es, + * we know for sure that @orig is in the jmp list. 
+ */ + pprev =3D &dest->jmp_list_head; + TB_FOR_EACH_JMP(dest, tb, n) { + if (tb =3D=3D orig && n =3D=3D n_orig) { + *pprev =3D tb->jmp_list_next[n]; + /* no need to set orig->jmp_dest[n]; setting the LSB was enoug= h */ + qemu_spin_unlock(&dest->jmp_lock); + return; + } + pprev =3D &tb->jmp_list_next[n]; + } + g_assert_not_reached(); } =20 /* reset the jump entry 'n' of a TB so that it is not chained to @@ -1399,24 +1421,21 @@ static inline void tb_reset_jump(TranslationBlock *= tb, int n) } =20 /* remove any jumps to the TB */ -static inline void tb_jmp_unlink(TranslationBlock *tb) +static inline void tb_jmp_unlink(TranslationBlock *dest) { - TranslationBlock *tb1; - uintptr_t *ptb, ntb; - unsigned int n1; + TranslationBlock *tb; + int n; =20 - ptb =3D &tb->jmp_list_first; - for (;;) { - ntb =3D *ptb; - n1 =3D ntb & 3; - tb1 =3D (TranslationBlock *)(ntb & ~3); - if (n1 =3D=3D 2) { - break; - } - tb_reset_jump(tb1, n1); - *ptb =3D tb1->jmp_list_next[n1]; - tb1->jmp_list_next[n1] =3D (uintptr_t)NULL; + qemu_spin_lock(&dest->jmp_lock); + + TB_FOR_EACH_JMP(dest, tb, n) { + tb_reset_jump(tb, n); + atomic_and(&tb->jmp_dest[n], (uintptr_t)NULL | 1); + /* No need to clear the list entry; setting the dest ptr is enough= */ } + dest->jmp_list_head =3D (uintptr_t)NULL; + + qemu_spin_unlock(&dest->jmp_lock); } =20 /* If @rm_from_page_list is set, call with the TB's pages' locks held */ @@ -1429,11 +1448,14 @@ static void do_tb_phys_invalidate(TranslationBlock = *tb, bool rm_from_page_list) =20 assert_tb_locked(); =20 + /* make sure no further incoming jumps will be chained to this TB */ + qemu_spin_lock(&tb->jmp_lock); atomic_set(&tb->cflags, tb->cflags | CF_INVALID); + qemu_spin_unlock(&tb->jmp_lock); =20 /* remove the TB from the hash list */ phys_pc =3D tb->page_addr[0] + (tb->pc & ~TARGET_PAGE_MASK); - h =3D tb_hash_func(phys_pc, tb->pc, tb->flags, tb->cflags & CF_HASH_MA= SK, + h =3D tb_hash_func(phys_pc, tb->pc, tb->flags, tb_cflags(tb) & CF_HASH= _MASK, tb->trace_vcpu_dstate); if (!qht_remove(&tb_ctx.htable, tb, h)) { return; @@ -1784,10 +1806,12 @@ TranslationBlock *tb_gen_code(CPUState *cpu, CODE_GEN_ALIGN)); =20 /* init jump list */ - assert(((uintptr_t)tb & 3) =3D=3D 0); - tb->jmp_list_first =3D (uintptr_t)tb | 2; + qemu_spin_init(&tb->jmp_lock); + tb->jmp_list_head =3D (uintptr_t)NULL; tb->jmp_list_next[0] =3D (uintptr_t)NULL; tb->jmp_list_next[1] =3D (uintptr_t)NULL; + tb->jmp_dest[0] =3D (uintptr_t)NULL; + tb->jmp_dest[1] =3D (uintptr_t)NULL; =20 /* init original jump addresses wich has been set during tcg_gen_code(= ) */ if (tb->jmp_reset_offset[0] !=3D TB_JMP_RESET_OFFSET_INVALID) { @@ -1879,7 +1903,7 @@ tb_invalidate_phys_page_range__locked(struct page_col= lection *pages, } } if (current_tb =3D=3D tb && - (current_tb->cflags & CF_COUNT_MASK) !=3D 1) { + (tb_cflags(current_tb) & CF_COUNT_MASK) !=3D 1) { /* If we are modifying the current TB, we must stop its execution. We could be more precise by checking that the modification is after the current PC, but it @@ -2076,7 +2100,7 @@ static bool tb_invalidate_phys_page(tb_page_addr_t ad= dr, uintptr_t pc) PAGE_FOR_EACH_TB(p, tb, n) { #ifdef TARGET_HAS_PRECISE_SMC if (current_tb =3D=3D tb && - (current_tb->cflags & CF_COUNT_MASK) !=3D 1) { + (tb_cflags(current_tb) & CF_COUNT_MASK) !=3D 1) { /* If we are modifying the current TB, we must stop its execution. 
                We could be more precise by checking
                that the modification is after the current PC, but it
@@ -2201,7 +2225,7 @@ void cpu_io_recompile(CPUState *cpu, uintptr_t retaddr)
     /* Generate a new TB executing the I/O insn.  */
     cpu->cflags_next_tb = curr_cflags() | CF_LAST_IO | n;
 
-    if (tb->cflags & CF_NOCACHE) {
+    if (tb_cflags(tb) & CF_NOCACHE) {
         if (tb->orig_tb) {
             /* Invalidate original TB if this TB was generated in
              * cpu_exec_nocache() */
--
2.7.4

From nobody Tue Oct 28 02:12:20 2025
Cota" To: qemu-devel@nongnu.org Date: Thu, 5 Apr 2018 22:13:06 -0400 Message-Id: <1522980788-1252-16-git-send-email-cota@braap.org> X-Mailer: git-send-email 2.7.4 In-Reply-To: <1522980788-1252-1-git-send-email-cota@braap.org> References: <1522980788-1252-1-git-send-email-cota@braap.org> MIME-Version: 1.0 Content-Type: text/plain; charset="utf-8" Content-Transfer-Encoding: quoted-printable X-detected-operating-system: by eggs.gnu.org: GNU/Linux 2.2.x-3.x [generic] [fuzzy] X-Received-From: 66.111.4.29 Subject: [Qemu-devel] [PATCH v2 15/17] cputlb: remove tb_lock from tlb_flush functions X-BeenThere: qemu-devel@nongnu.org X-Mailman-Version: 2.1.21 Precedence: list List-Id: List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Cc: Richard Henderson , =?UTF-8?q?Alex=20Benn=C3=A9e?= , Paolo Bonzini Errors-To: qemu-devel-bounces+importer=patchew.org@nongnu.org Sender: "Qemu-devel" X-ZohoMail: RSF_0 Z_629925259 SPT_0 The acquisition of tb_lock was added when the async tlb_flush was introduced in e3b9ca810 ("cputlb: introduce tlb_flush_* async work.") tb_lock was there to allow us to do memset() on the tb_jmp_cache's. However, since f3ced3c5928 ("tcg: consistently access cpu->tb_jmp_cache atomically") all accesses to tb_jmp_cache are atomic, so tb_lock is not needed here. Get rid of it. Reviewed-by: Alex Benn=C3=A9e Signed-off-by: Emilio G. Cota Reviewed-by: Richard Henderson --- accel/tcg/cputlb.c | 8 -------- 1 file changed, 8 deletions(-) diff --git a/accel/tcg/cputlb.c b/accel/tcg/cputlb.c index 0543903..f5c3a09 100644 --- a/accel/tcg/cputlb.c +++ b/accel/tcg/cputlb.c @@ -125,8 +125,6 @@ static void tlb_flush_nocheck(CPUState *cpu) atomic_set(&env->tlb_flush_count, env->tlb_flush_count + 1); tlb_debug("(count: %zu)\n", tlb_flush_count()); =20 - tb_lock(); - memset(env->tlb_table, -1, sizeof(env->tlb_table)); memset(env->tlb_v_table, -1, sizeof(env->tlb_v_table)); cpu_tb_jmp_cache_clear(cpu); @@ -135,8 +133,6 @@ static void tlb_flush_nocheck(CPUState *cpu) env->tlb_flush_addr =3D -1; env->tlb_flush_mask =3D 0; =20 - tb_unlock(); - atomic_mb_set(&cpu->pending_tlb_flush, 0); } =20 @@ -180,8 +176,6 @@ static void tlb_flush_by_mmuidx_async_work(CPUState *cp= u, run_on_cpu_data data) =20 assert_cpu_is_self(cpu); =20 - tb_lock(); - tlb_debug("start: mmu_idx:0x%04lx\n", mmu_idx_bitmask); =20 for (mmu_idx =3D 0; mmu_idx < NB_MMU_MODES; mmu_idx++) { @@ -197,8 +191,6 @@ static void tlb_flush_by_mmuidx_async_work(CPUState *cp= u, run_on_cpu_data data) cpu_tb_jmp_cache_clear(cpu); =20 tlb_debug("done\n"); - - tb_unlock(); } =20 void tlb_flush_by_mmuidx(CPUState *cpu, uint16_t idxmap) --=20 2.7.4 From nobody Tue Oct 28 02:12:20 2025 Delivered-To: importer@patchew.org Received-SPF: pass (zoho.com: domain of gnu.org designates 208.118.235.17 as permitted sender) client-ip=208.118.235.17; envelope-from=qemu-devel-bounces+importer=patchew.org@nongnu.org; helo=lists.gnu.org; Authentication-Results: mx.zohomail.com; spf=pass (zoho.com: domain of gnu.org designates 208.118.235.17 as permitted sender) smtp.mailfrom=qemu-devel-bounces+importer=patchew.org@nongnu.org Return-Path: Received: from lists.gnu.org (lists.gnu.org [208.118.235.17]) by mx.zohomail.com with SMTPS id 1522981115026472.964289736399; Thu, 5 Apr 2018 19:18:35 -0700 (PDT) Received: from localhost ([::1]:44012 helo=lists.gnu.org) by lists.gnu.org with esmtp (Exim 4.71) (envelope-from ) id 1f4GxW-0006QN-5q for importer@patchew.org; Thu, 05 Apr 2018 22:18:34 -0400 Received: from eggs.gnu.org ([2001:4830:134:3::10]:58480) by 
lists.gnu.org with esmtp (Exim 4.71) (envelope-from ) id 1f4GsN-0002Ga-2C for qemu-devel@nongnu.org; Thu, 05 Apr 2018 22:13:16 -0400 Received: from Debian-exim by eggs.gnu.org with spam-scanned (Exim 4.71) (envelope-from ) id 1f4GsL-0003X3-5U for qemu-devel@nongnu.org; Thu, 05 Apr 2018 22:13:14 -0400 Received: from out5-smtp.messagingengine.com ([66.111.4.29]:47145) by eggs.gnu.org with esmtps (TLS1.0:DHE_RSA_AES_256_CBC_SHA1:32) (Exim 4.71) (envelope-from ) id 1f4GsK-0003WY-Tw for qemu-devel@nongnu.org; Thu, 05 Apr 2018 22:13:12 -0400 Received: from compute4.internal (compute4.nyi.internal [10.202.2.44]) by mailout.nyi.internal (Postfix) with ESMTP id 9D1F820B9D; Thu, 5 Apr 2018 22:13:12 -0400 (EDT) Received: from mailfrontend1 ([10.202.2.162]) by compute4.internal (MEProxy); Thu, 05 Apr 2018 22:13:12 -0400 Received: from localhost (flamenco.cs.columbia.edu [128.59.20.216]) by mail.messagingengine.com (Postfix) with ESMTPA id 5E811E47F2; Thu, 5 Apr 2018 22:13:12 -0400 (EDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=braap.org; h=cc :date:from:in-reply-to:message-id:references:subject:to :x-me-sender:x-me-sender:x-sasl-enc; s=mesmtp; bh=vLwYK7bokbRBlM PwQUxf2mW4lbWsbeIQQQByiHE17yM=; b=sQRyQC2Fu9YG+SdBVXny0TBhL49nwC pFv8lEp19IANt8buV2by1YJhez/PCwv6JqamOii6B7+c8iosT+U//JtXBGMFMpuW iDOTGfopPKUT0AB1Ckzme7VnRUWvvBie8NjKDNZMr43ensVEkfMkF3+S8J7gN77p Z/DPh7DYP8lgc= DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d= messagingengine.com; h=cc:date:from:in-reply-to:message-id :references:subject:to:x-me-sender:x-me-sender:x-sasl-enc; s= fm2; bh=vLwYK7bokbRBlMPwQUxf2mW4lbWsbeIQQQByiHE17yM=; b=MjJHdB76 q5Yc5xZZX/j1TzPuLvm6DovSeJOlspORxzuiRJMdUtVtDqy1ZoTMm4+xCaU3MEx6 IzIttISr2avFWqzpUG65ap6ZxnkGclC4smKTy7nL7ZWPE5jQedLHtaNWcxxr7CMV NL9fX1WxZs8OIuqKwB7Rainge8nP3iN8u5hgkIoLLOi8wO4uutqCQzU1QdZj03zx jnzLKlxrJRorONQAjg2js6lzwKwYap1pqxM/XlumN3iTrM8xNFZJPnCY8QZS+qkJ TVVlv7vf315xHAWCVv+tbRl8ip2d+HEIymmOWWqQWcLbBVZtygfRrPY6Vr3fA5jr M24qna+BRByyRQ== X-ME-Sender: From: "Emilio G. Cota" To: qemu-devel@nongnu.org Date: Thu, 5 Apr 2018 22:13:07 -0400 Message-Id: <1522980788-1252-17-git-send-email-cota@braap.org> X-Mailer: git-send-email 2.7.4 In-Reply-To: <1522980788-1252-1-git-send-email-cota@braap.org> References: <1522980788-1252-1-git-send-email-cota@braap.org> X-detected-operating-system: by eggs.gnu.org: GNU/Linux 2.2.x-3.x [generic] [fuzzy] X-Received-From: 66.111.4.29 Subject: [Qemu-devel] [PATCH v2 16/17] translate-all: remove tb_lock mention from cpu_restore_state_from_tb X-BeenThere: qemu-devel@nongnu.org X-Mailman-Version: 2.1.21 Precedence: list List-Id: List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Cc: Richard Henderson , =?UTF-8?q?Alex=20Benn=C3=A9e?= , Paolo Bonzini Errors-To: qemu-devel-bounces+importer=patchew.org@nongnu.org Sender: "Qemu-devel" X-ZohoMail: RSF_0 Z_629925259 SPT_0 Content-Transfer-Encoding: quoted-printable MIME-Version: 1.0 Content-Type: text/plain; charset="utf-8" tb_lock was needed when the function did retranslation. However, since fca8a500d519 ("tcg: Save insn data and use it in cpu_restore_state_from_tb") we don't do retranslation. Get rid of the comment. Signed-off-by: Emilio G. 
Reviewed-by: Richard Henderson
---
 accel/tcg/translate-all.c | 4 +---
 1 file changed, 1 insertion(+), 3 deletions(-)

diff --git a/accel/tcg/translate-all.c b/accel/tcg/translate-all.c
index 486c9df..62e5796 100644
--- a/accel/tcg/translate-all.c
+++ b/accel/tcg/translate-all.c
@@ -354,9 +354,7 @@ static int encode_search(TranslationBlock *tb, uint8_t *block)
     return p - block;
 }
 
-/* The cpu state corresponding to 'searched_pc' is restored.
- * Called with tb_lock held.
- */
+/* The cpu state corresponding to 'searched_pc' is restored */
 static int cpu_restore_state_from_tb(CPUState *cpu, TranslationBlock *tb,
                                      uintptr_t searched_pc)
 {
--
2.7.4

From nobody Tue Oct 28 02:12:20 2025
From: "Emilio G. Cota" <cota@braap.org>
To: qemu-devel@nongnu.org
Cc: Richard Henderson, Alex Bennée, Paolo Bonzini
Date: Thu, 5 Apr 2018 22:13:08 -0400
Message-Id: <1522980788-1252-18-git-send-email-cota@braap.org>
In-Reply-To: <1522980788-1252-1-git-send-email-cota@braap.org>
Subject: [Qemu-devel] [PATCH v2 17/17] tcg: remove tb_lock

Use mmap_lock in user-mode to protect TCG state and the page descriptors.
In !user-mode, each vCPU has its own TCG state, so no locks needed.
Per-page locks are used to protect the page descriptors.

Per-TB locks are used in both modes to protect TB jumps.

Some notes:

- tb_lock is removed from notdirty_mem_write by passing a
  locked page_collection to tb_invalidate_phys_page_fast.

- tcg_tb_lookup/remove/insert/etc have their own internal lock(s),
  so there is no need to further serialize access to them.

- do_tb_flush is run in a safe async context, meaning no other
  vCPU threads are running. Therefore acquiring mmap_lock there
  is just to please tools such as thread sanitizer.

- Not visible in the diff, but tb_invalidate_phys_page already
  has an assert_memory_lock.

- cpu_io_recompile is !user-only, so no mmap_lock there.

- Added mmap_unlock()'s before all siglongjmp's that could
  be called in user-mode while mmap_lock is held.
  + Added an assert for !have_mmap_lock() after returning from
    the longjmp in cpu_exec, just like we do in cpu_exec_step_atomic.

Performance numbers before/after:

Host: AMD Opteron(tm) Processor 6376

[ASCII plot: ubuntu 17.04 ppc64 bootup+shutdown time (s) vs. 1-64 guest
 CPUs, "before" vs. "tb lock removal"]
png: https://imgur.com/HwmBHXe

[ASCII plot: debian jessie aarch64 bootup+shutdown time (s) vs. 1-64
 guest CPUs, "before" vs. "tb lock removal"]
png: https://imgur.com/iGpGFtv

The gains are high for 4-8 CPUs. Beyond that point, however,
unrelated lock contention significantly hurts scalability.

Signed-off-by: Emilio G.
Cota --- docs/devel/multi-thread-tcg.txt | 11 ++-- accel/tcg/translate-all.h | 3 +- include/exec/cpu-common.h | 2 +- include/exec/exec-all.h | 4 -- include/exec/memory-internal.h | 6 +- include/exec/tb-context.h | 2 - tcg/tcg.h | 4 +- accel/tcg/cpu-exec.c | 34 +++-------- accel/tcg/translate-all.c | 132 ++++++++++++------------------------= ---- exec.c | 25 +++----- linux-user/main.c | 3 - 11 files changed, 74 insertions(+), 152 deletions(-) diff --git a/docs/devel/multi-thread-tcg.txt b/docs/devel/multi-thread-tcg.= txt index df83445..06530be 100644 --- a/docs/devel/multi-thread-tcg.txt +++ b/docs/devel/multi-thread-tcg.txt @@ -61,6 +61,7 @@ have their block-to-block jumps patched. Global TCG State ---------------- =20 +### User-mode emulation We need to protect the entire code generation cycle including any post generation patching of the translated code. This also implies a shared translation buffer which contains code running on all cores. Any @@ -75,9 +76,11 @@ patching. =20 (Current solution) =20 -Mainly as part of the linux-user work all code generation is -serialised with a tb_lock(). For the SoftMMU tb_lock() also takes the -place of mmap_lock() in linux-user. +Code generation is serialised with mmap_lock(). + +### !User-mode emulation +Each vCPU has its own TCG context and associated TCG region, thereby +requiring no locking. =20 Translation Blocks ------------------ @@ -195,7 +198,7 @@ work as "safe work" and exiting the cpu run loop. This = ensure by the time execution restarts all flush operations have completed. =20 TLB flag updates are all done atomically and are also protected by the -tb_lock() which is used by the functions that update the TLB in bulk. +corresponding page lock. =20 (Known limitation) =20 diff --git a/accel/tcg/translate-all.h b/accel/tcg/translate-all.h index 6d1d258..e6cb963 100644 --- a/accel/tcg/translate-all.h +++ b/accel/tcg/translate-all.h @@ -26,7 +26,8 @@ struct page_collection *page_collection_lock(tb_page_addr_t start, tb_page_addr_t end); void page_collection_unlock(struct page_collection *set); -void tb_invalidate_phys_page_fast(tb_page_addr_t start, int len); +void tb_invalidate_phys_page_fast(struct page_collection *pages, + tb_page_addr_t start, int len); void tb_invalidate_phys_page_range(tb_page_addr_t start, tb_page_addr_t en= d, int is_cpu_write_access); void tb_invalidate_phys_range(tb_page_addr_t start, tb_page_addr_t end); diff --git a/include/exec/cpu-common.h b/include/exec/cpu-common.h index 24d335f..46b3be5 100644 --- a/include/exec/cpu-common.h +++ b/include/exec/cpu-common.h @@ -23,7 +23,7 @@ typedef struct CPUListState { FILE *file; } CPUListState; =20 -/* The CPU list lock nests outside tb_lock/tb_unlock. */ +/* The CPU list lock nests outside page_(un)lock or mmap_(un)lock */ void qemu_init_cpu_list(void); void cpu_list_lock(void); void cpu_list_unlock(void); diff --git a/include/exec/exec-all.h b/include/exec/exec-all.h index f8adeb8..9ec7348 100644 --- a/include/exec/exec-all.h +++ b/include/exec/exec-all.h @@ -436,10 +436,6 @@ extern uintptr_t tci_tb_ptr; smaller than 4 bytes, so we don't worry about special-casing this. 
diff --git a/include/exec/exec-all.h b/include/exec/exec-all.h
index f8adeb8..9ec7348 100644
--- a/include/exec/exec-all.h
+++ b/include/exec/exec-all.h
@@ -436,10 +436,6 @@ extern uintptr_t tci_tb_ptr;
    smaller than 4 bytes, so we don't worry about special-casing this.  */
 #define GETPC_ADJ   2
 
-void tb_lock(void);
-void tb_unlock(void);
-void tb_lock_reset(void);
-
 #if !defined(CONFIG_USER_ONLY) && defined(CONFIG_DEBUG_TCG)
 void assert_page_collection_locked(bool val);
 #else
diff --git a/include/exec/memory-internal.h b/include/exec/memory-internal.h
index 6a5ee42..2374339 100644
--- a/include/exec/memory-internal.h
+++ b/include/exec/memory-internal.h
@@ -45,6 +45,8 @@ void mtree_print_dispatch(fprintf_function mon, void *f,
                           struct AddressSpaceDispatch *d,
                           MemoryRegion *root);
 
+struct page_collection;
+
 /* Opaque struct for passing info from memory_notdirty_write_prepare()
  * to memory_notdirty_write_complete(). Callers should treat all fields
  * as private, with the exception of @active.
@@ -56,10 +58,10 @@ void mtree_print_dispatch(fprintf_function mon, void *f,
  */
 typedef struct {
     CPUState *cpu;
+    struct page_collection *pages;
     ram_addr_t ram_addr;
     vaddr mem_vaddr;
     unsigned size;
-    bool locked;
     bool active;
 } NotDirtyInfo;
 
@@ -87,7 +89,7 @@ typedef struct {
  *
  * This must only be called if we are using TCG; it will assert otherwise.
  *
- * We may take a lock in the prepare call, so callers must ensure that
+ * We may take locks in the prepare call, so callers must ensure that
  * they don't exit (via longjump or otherwise) without calling complete.
  *
  * This call must only be made inside an RCU critical section.
diff --git a/include/exec/tb-context.h b/include/exec/tb-context.h
index 8c9b49c..feb585e 100644
--- a/include/exec/tb-context.h
+++ b/include/exec/tb-context.h
@@ -32,8 +32,6 @@ typedef struct TBContext TBContext;
 struct TBContext {
 
     struct qht htable;
-    /* any access to the tbs or the page table must use this lock */
-    QemuMutex tb_lock;
 
     /* statistics */
     unsigned tb_flush_count;
diff --git a/tcg/tcg.h b/tcg/tcg.h
index 9dd9448..c411bf5 100644
--- a/tcg/tcg.h
+++ b/tcg/tcg.h
@@ -841,7 +841,7 @@ static inline bool tcg_op_buf_full(void)
 
 /* pool based memory allocation */
 
-/* user-mode: tb_lock must be held for tcg_malloc_internal. */
+/* user-mode: mmap_lock must be held for tcg_malloc_internal. */
 void *tcg_malloc_internal(TCGContext *s, int size);
 void tcg_pool_reset(TCGContext *s);
 TranslationBlock *tcg_tb_alloc(TCGContext *s);
@@ -859,7 +859,7 @@ TranslationBlock *tcg_tb_lookup(uintptr_t tc_ptr);
 void tcg_tb_foreach(GTraverseFunc func, gpointer user_data);
 size_t tcg_nb_tbs(void);
 
-/* user-mode: Called with tb_lock held. */
+/* user-mode: Called with mmap_lock held. */
 static inline void *tcg_malloc(int size)
 {
     TCGContext *s = tcg_ctx;
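The NotDirtyInfo change turns the old 'bool locked' plus global tb_lock
into a record of which page_collection was locked, so complete()
releases exactly what prepare() took. A self-contained sketch of that
bracket, with stand-in names:

    /* Toy version of the prepare/complete bracket; not QEMU's types. */
    #include <pthread.h>
    #include <stdbool.h>
    #include <stddef.h>

    struct pages_s { pthread_mutex_t lock; };

    typedef struct {
        struct pages_s *pages;   /* NULL => complete() has nothing to release */
        unsigned long ram_addr;
        unsigned size;
    } notdirty_info;

    static void notdirty_prepare(notdirty_info *ndi, struct pages_s *pc,
                                 unsigned long ram_addr, unsigned size,
                                 bool contains_code)
    {
        ndi->ram_addr = ram_addr;
        ndi->size = size;
        ndi->pages = NULL;
        if (contains_code) {
            pthread_mutex_lock(&pc->lock);   /* page_collection_lock() */
            ndi->pages = pc;
            /* ... invalidate TBs in [ram_addr, ram_addr + size) ... */
        }
    }

    static void notdirty_complete(notdirty_info *ndi)
    {
        if (ndi->pages) {                    /* replaces the old bool 'locked' */
            pthread_mutex_unlock(&ndi->pages->lock);
            ndi->pages = NULL;
        }
    }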
diff --git a/accel/tcg/cpu-exec.c b/accel/tcg/cpu-exec.c
index 178452a..e5ad5e6 100644
--- a/accel/tcg/cpu-exec.c
+++ b/accel/tcg/cpu-exec.c
@@ -210,20 +210,20 @@ static void cpu_exec_nocache(CPUState *cpu, int max_cycles,
        We only end up here when an existing TB is too long.  */
     cflags |= MIN(max_cycles, CF_COUNT_MASK);
 
-    tb_lock();
+    mmap_lock();
     tb = tb_gen_code(cpu, orig_tb->pc, orig_tb->cs_base, orig_tb->flags,
                      cflags);
     tb->orig_tb = orig_tb;
-    tb_unlock();
+    mmap_unlock();
 
     /* execute the generated code */
     trace_exec_tb_nocache(tb, tb->pc);
     cpu_tb_exec(cpu, tb);
 
-    tb_lock();
+    mmap_lock();
     tb_phys_invalidate(tb, -1);
+    mmap_unlock();
     tcg_tb_remove(tb);
-    tb_unlock();
 }
 #endif
 
@@ -242,9 +242,7 @@ void cpu_exec_step_atomic(CPUState *cpu)
     tb = tb_lookup__cpu_state(cpu, &pc, &cs_base, &flags, cf_mask);
     if (tb == NULL) {
         mmap_lock();
-        tb_lock();
         tb = tb_gen_code(cpu, pc, cs_base, flags, cflags);
-        tb_unlock();
         mmap_unlock();
     }
 
@@ -259,15 +257,13 @@ void cpu_exec_step_atomic(CPUState *cpu)
         cpu_tb_exec(cpu, tb);
         cc->cpu_exec_exit(cpu);
     } else {
-        /* We may have exited due to another problem here, so we need
-         * to reset any tb_locks we may have taken but didn't release.
+        /*
          * The mmap_lock is dropped by tb_gen_code if it runs out of
          * memory.
          */
 #ifndef CONFIG_SOFTMMU
         tcg_debug_assert(!have_mmap_lock());
 #endif
-        tb_lock_reset();
         assert_page_collection_locked(false);
     }
 
@@ -396,20 +392,11 @@ static inline TranslationBlock *tb_find(CPUState *cpu,
     TranslationBlock *tb;
     target_ulong cs_base, pc;
     uint32_t flags;
-    bool acquired_tb_lock = false;
 
     tb = tb_lookup__cpu_state(cpu, &pc, &cs_base, &flags, cf_mask);
     if (tb == NULL) {
-        /* mmap_lock is needed by tb_gen_code, and mmap_lock must be
-         * taken outside tb_lock. As system emulation is currently
-         * single threaded the locks are NOPs.
-         */
         mmap_lock();
-        tb_lock();
-        acquired_tb_lock = true;
-
         tb = tb_gen_code(cpu, pc, cs_base, flags, cf_mask);
-
         mmap_unlock();
         /* We add the TB in the virtual pc hash table for the fast lookup */
         atomic_set(&cpu->tb_jmp_cache[tb_jmp_cache_hash_func(pc)], tb);
@@ -425,15 +412,8 @@ static inline TranslationBlock *tb_find(CPUState *cpu,
 #endif
     /* See if we can patch the calling TB. */
     if (last_tb && !qemu_loglevel_mask(CPU_LOG_TB_NOCHAIN)) {
-        if (!acquired_tb_lock) {
-            tb_lock();
-            acquired_tb_lock = true;
-        }
         tb_add_jump(last_tb, tb_exit, tb);
     }
-    if (acquired_tb_lock) {
-        tb_unlock();
-    }
     return tb;
 }
 
@@ -709,7 +689,9 @@ int cpu_exec(CPUState *cpu)
         g_assert(cc == CPU_GET_CLASS(cpu));
 #endif /* buggy compiler */
         cpu->can_do_io = 1;
-        tb_lock_reset();
+#ifndef CONFIG_SOFTMMU
+        tcg_debug_assert(!have_mmap_lock());
+#endif
         if (qemu_mutex_iothread_locked()) {
             qemu_mutex_unlock_iothread();
         }
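The hunks above replace "reset tb_lock after the longjmp" with "assert
that no lock is held after the longjmp", which forces every exit path
to clean up after itself. A minimal illustration (toy code, not QEMU's):

    /* Toy illustration of assert-after-longjmp vs. lock reset. */
    #include <assert.h>
    #include <setjmp.h>
    #include <stdbool.h>

    static jmp_buf exec_loop;
    static __thread int mmap_lock_count;    /* models have_mmap_lock() */

    static bool have_mmap_lock_(void) { return mmap_lock_count > 0; }

    static void fault_path(void)
    {
        /* with tb_lock gone, every exit path must drop its own locks... */
        mmap_lock_count--;              /* mmap_unlock() */
        longjmp(exec_loop, 1);          /* cpu_loop_exit_noexc() */
    }

    static void cpu_exec_model(void)
    {
        if (setjmp(exec_loop)) {
            /* ...so after the jump we can assert, instead of cleaning up
             * an unknown lock state with tb_lock_reset() */
            assert(!have_mmap_lock_());
            return;
        }
        mmap_lock_count++;              /* mmap_lock() */
        fault_path();
    }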
diff --git a/accel/tcg/translate-all.c b/accel/tcg/translate-all.c
index 62e5796..443762c 100644
--- a/accel/tcg/translate-all.c
+++ b/accel/tcg/translate-all.c
@@ -88,13 +88,13 @@
 #endif
 
 /* Access to the various translations structures need to be serialised via locks
- * for consistency. This is automatic for SoftMMU based system
- * emulation due to its single threaded nature. In user-mode emulation
- * access to the memory related structures are protected with the
- * mmap_lock.
+ * for consistency.
+ * In user-mode emulation access to the memory related structures are protected
+ * with mmap_lock.
+ * In !user-mode we use per-page locks.
  */
 #ifdef CONFIG_SOFTMMU
-#define assert_memory_lock() tcg_debug_assert(have_tb_lock)
+#define assert_memory_lock()
 #else
 #define assert_memory_lock() tcg_debug_assert(have_mmap_lock())
 #endif
@@ -216,9 +216,6 @@ __thread TCGContext *tcg_ctx;
 TBContext tb_ctx;
 bool parallel_cpus;
 
-/* translation block context */
-static __thread int have_tb_lock;
-
 static void page_table_config_init(void)
 {
     uint32_t v_l1_bits;
@@ -239,31 +236,6 @@ static void page_table_config_init(void)
     assert(v_l2_levels >= 0);
 }
 
-#define assert_tb_locked() tcg_debug_assert(have_tb_lock)
-#define assert_tb_unlocked() tcg_debug_assert(!have_tb_lock)
-
-void tb_lock(void)
-{
-    assert_tb_unlocked();
-    qemu_mutex_lock(&tb_ctx.tb_lock);
-    have_tb_lock++;
-}
-
-void tb_unlock(void)
-{
-    assert_tb_locked();
-    have_tb_lock--;
-    qemu_mutex_unlock(&tb_ctx.tb_lock);
-}
-
-void tb_lock_reset(void)
-{
-    if (have_tb_lock) {
-        qemu_mutex_unlock(&tb_ctx.tb_lock);
-        have_tb_lock = 0;
-    }
-}
-
 void cpu_gen_init(void)
 {
     tcg_context_init(&tcg_init_ctx);
@@ -419,8 +391,7 @@ bool cpu_restore_state(CPUState *cpu, uintptr_t host_pc)
      *  - fault during translation (instruction fetch)
      *  - fault from helper (not using GETPC() macro)
      *
-     * Either way we need return early to avoid blowing up on a
-     * recursive tb_lock() as we can't resolve it here.
+     * Either way we need return early as we can't resolve it here.
      *
      * We are using unsigned arithmetic so if host_pc <
      * tcg_init_ctx.code_gen_buffer check_offset will wrap to way
@@ -429,7 +400,6 @@ bool cpu_restore_state(CPUState *cpu, uintptr_t host_pc)
     check_offset = host_pc - (uintptr_t) tcg_init_ctx.code_gen_buffer;
 
     if (check_offset < tcg_init_ctx.code_gen_buffer_size) {
-        tb_lock();
         tb = tcg_tb_lookup(host_pc);
         if (tb) {
             cpu_restore_state_from_tb(cpu, tb, host_pc);
@@ -440,7 +410,6 @@ bool cpu_restore_state(CPUState *cpu, uintptr_t host_pc)
             }
             r = true;
         }
-        tb_unlock();
     }
 
     return r;
@@ -1129,7 +1098,6 @@ static inline void code_gen_alloc(size_t tb_size)
         fprintf(stderr, "Could not allocate dynamic translator buffer\n");
         exit(1);
     }
-    qemu_mutex_init(&tb_ctx.tb_lock);
 }
 
 static bool tb_cmp(const void *ap, const void *bp)
@@ -1173,14 +1141,12 @@ void tcg_exec_init(unsigned long tb_size)
 /*
  * Allocate a new translation block. Flush the translation buffer if
  * too many translation blocks or too much generated code.
- *
- * Called with tb_lock held.
  */
 static TranslationBlock *tb_alloc(target_ulong pc)
 {
     TranslationBlock *tb;
 
-    assert_tb_locked();
+    assert_memory_lock();
 
     tb = tcg_tb_alloc(tcg_ctx);
     if (unlikely(tb == NULL)) {
@@ -1247,8 +1213,7 @@ static gboolean tb_host_size_iter(gpointer key, gpointer value, gpointer data)
 /* flush all the translation blocks */
 static void do_tb_flush(CPUState *cpu, run_on_cpu_data tb_flush_count)
 {
-    tb_lock();
-
+    mmap_lock();
     /* If it is already been done on request of another CPU,
      * just retry.
      */
@@ -1278,7 +1243,7 @@ static void do_tb_flush(CPUState *cpu, run_on_cpu_data tb_flush_count)
     atomic_mb_set(&tb_ctx.tb_flush_count, tb_ctx.tb_flush_count + 1);
 
 done:
-    tb_unlock();
+    mmap_unlock();
 }
 
 void tb_flush(CPUState *cpu)
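Per the commit message, do_tb_flush() runs as safe work with all other
vCPU threads parked, so mmap_lock is taken there only to keep race
detectors happy. In miniature (toy code, not QEMU's):

    /* The flush runs quiesced; the lock is not needed for correctness,
     * but taking it keeps e.g. ThreadSanitizer's lock discipline
     * consistent with the non-quiesced writers of the same state. */
    #include <pthread.h>

    static pthread_mutex_t mmap_lock_m2 = PTHREAD_MUTEX_INITIALIZER;
    static int tb_count;   /* stands in for the translation structures */

    /* Caller guarantees all other threads are parked (safe async work). */
    static void do_tb_flush_model(void)
    {
        pthread_mutex_lock(&mmap_lock_m2);   /* only to please TSan */
        tb_count = 0;                        /* drop all translations */
        pthread_mutex_unlock(&mmap_lock_m2);
    }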
@@ -1312,7 +1277,7 @@ do_tb_invalidate_check(struct qht *ht, void *p, uint32_t hash, void *userp)
 
 /* verify that all the pages have correct rights for code
  *
- * Called with tb_lock held.
+ * Called with mmap_lock held.
  */
 static void tb_invalidate_check(target_ulong address)
 {
@@ -1342,7 +1307,10 @@ static void tb_page_check(void)
 
 #endif /* CONFIG_USER_ONLY */
 
-/* call with @pd->lock held */
+/*
+ * user-mode: call with mmap_lock held
+ * !user-mode: call with @pd->lock held
+ */
 static inline void tb_page_remove(PageDesc *pd, TranslationBlock *tb)
 {
     TranslationBlock *tb1;
@@ -1436,7 +1404,11 @@ static inline void tb_jmp_unlink(TranslationBlock *dest)
     qemu_spin_unlock(&dest->jmp_lock);
 }
 
-/* If @rm_from_page_list is set, call with the TB's pages' locks held */
+/*
+ * In user-mode, call with mmap_lock held.
+ * In !user-mode, if @rm_from_page_list is set, call with the TB's pages'
+ * locks held.
+ */
 static void do_tb_phys_invalidate(TranslationBlock *tb, bool rm_from_page_list)
 {
     CPUState *cpu;
@@ -1444,7 +1416,7 @@ static void do_tb_phys_invalidate(TranslationBlock *tb, bool rm_from_page_list)
     uint32_t h;
     tb_page_addr_t phys_pc;
 
-    assert_tb_locked();
+    assert_memory_lock();
 
     /* make sure no further incoming jumps will be chained to this TB */
     qemu_spin_lock(&tb->jmp_lock);
@@ -1497,7 +1469,7 @@ static void tb_phys_invalidate__locked(TranslationBlock *tb)
 
 /* invalidate one TB
  *
- * Called with tb_lock held.
+ * Called with mmap_lock held in user-mode.
  */
 void tb_phys_invalidate(TranslationBlock *tb, tb_page_addr_t page_addr)
 {
@@ -1542,7 +1514,7 @@ static void build_page_bitmap(PageDesc *p)
 /* add the tb in the target page and protect it if necessary
  *
  * Called with mmap_lock held for user-mode emulation.
- * Called with @p->lock held.
+ * Called with @p->lock held in !user-mode.
  */
 static inline void tb_page_add(PageDesc *p, TranslationBlock *tb,
                                unsigned int n, tb_page_addr_t page_addr)
@@ -1825,10 +1797,9 @@ TranslationBlock *tb_gen_code(CPUState *cpu,
     if ((pc & TARGET_PAGE_MASK) != virt_page2) {
         phys_page2 = get_page_addr_code(env, virt_page2);
     }
-    /* As long as consistency of the TB stuff is provided by tb_lock in user
-     * mode and is implicit in single-threaded softmmu emulation, no explicit
-     * memory barrier is required before tb_link_page() makes the TB visible
-     * through the physical hash table and physical page list.
+    /*
+     * No explicit memory barrier is required -- tb_link_page() makes the
+     * TB visible in a consistent state.
      */
     existing_tb = tb_link_page(tb, phys_pc, phys_page2);
     /* if the TB already exists, discard what we just translated */
@@ -1844,8 +1815,9 @@ TranslationBlock *tb_gen_code(CPUState *cpu,
 }
 
 /*
- * Call with all @pages locked.
  * @p must be non-NULL.
+ * user-mode: call with mmap_lock held.
+ * !user-mode: call with all @pages locked.
  */
 static void
 tb_invalidate_phys_page_range__locked(struct page_collection *pages,
@@ -1929,6 +1901,7 @@ tb_invalidate_phys_page_range__locked(struct page_collection *pages,
         page_collection_unlock(pages);
         /* Force execution of one insn next time.  */
         cpu->cflags_next_tb = 1 | curr_cflags();
+        mmap_unlock();
         cpu_loop_exit_noexc(cpu);
     }
 #endif
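The do_tb_phys_invalidate()/tb_jmp_unlink() hunks above rely on the
per-TB jmp_lock mentioned in the commit message. A toy model of that
protocol (stand-in types, not QEMU's; locks assumed initialized with
pthread_spin_init()):

    #include <pthread.h>
    #include <stdbool.h>
    #include <stddef.h>

    struct tb_s {
        pthread_spinlock_t jmp_lock;
        bool valid;              /* cleared by invalidation, under jmp_lock */
        struct tb_s *jmp_dest;   /* simplified: one outgoing jump */
    };

    /* chain @from to @to, unless @to has been invalidated concurrently */
    static bool tb_add_jump_model(struct tb_s *from, struct tb_s *to)
    {
        bool ok = false;

        pthread_spin_lock(&to->jmp_lock);
        if (to->valid) {
            from->jmp_dest = to;
            ok = true;
        }
        pthread_spin_unlock(&to->jmp_lock);
        return ok;
    }

    /* invalidation side: mark dead first, then unlink incoming jumps */
    static void tb_invalidate_model(struct tb_s *tb)
    {
        pthread_spin_lock(&tb->jmp_lock);
        tb->valid = false;
        pthread_spin_unlock(&tb->jmp_lock);
        /* ... then, as in tb_jmp_unlink(), reset jumps chained to @tb ... */
    }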
@@ -1941,8 +1914,7 @@ tb_invalidate_phys_page_range__locked(struct page_collection *pages,
  * access: the virtual CPU will exit the current TB if code is modified inside
  * this TB.
  *
- * Called with tb_lock/mmap_lock held for user-mode emulation
- * Called with tb_lock held for system-mode emulation
+ * Called with mmap_lock held for user-mode emulation
  */
 void tb_invalidate_phys_page_range(tb_page_addr_t start, tb_page_addr_t end,
                                    int is_cpu_write_access)
@@ -1951,7 +1923,6 @@ void tb_invalidate_phys_page_range(tb_page_addr_t start, tb_page_addr_t end,
     PageDesc *p;
 
     assert_memory_lock();
-    assert_tb_locked();
 
     p = page_find(start >> TARGET_PAGE_BITS);
     if (p == NULL) {
@@ -1970,14 +1941,15 @@ void tb_invalidate_phys_page_range(tb_page_addr_t start, tb_page_addr_t end,
  * access: the virtual CPU will exit the current TB if code is modified inside
  * this TB.
  *
- * Called with mmap_lock held for user-mode emulation, grabs tb_lock
- * Called with tb_lock held for system-mode emulation
+ * Called with mmap_lock held for user-mode emulation.
  */
-static void tb_invalidate_phys_range_1(tb_page_addr_t start, tb_page_addr_t end)
+void tb_invalidate_phys_range(tb_page_addr_t start, tb_page_addr_t end)
 {
     struct page_collection *pages;
     tb_page_addr_t next;
 
+    assert_memory_lock();
+
     pages = page_collection_lock(start, end);
     for (next = (start & TARGET_PAGE_MASK) + TARGET_PAGE_SIZE;
          start < end;
@@ -1994,29 +1966,15 @@ static void tb_invalidate_phys_range_1(tb_page_addr_t start, tb_page_addr_t end)
 }
 
 #ifdef CONFIG_SOFTMMU
-void tb_invalidate_phys_range(tb_page_addr_t start, tb_page_addr_t end)
-{
-    assert_tb_locked();
-    tb_invalidate_phys_range_1(start, end);
-}
-#else
-void tb_invalidate_phys_range(tb_page_addr_t start, tb_page_addr_t end)
-{
-    assert_memory_lock();
-    tb_lock();
-    tb_invalidate_phys_range_1(start, end);
-    tb_unlock();
-}
-#endif
-
-#ifdef CONFIG_SOFTMMU
 /* len must be <= 8 and start must be a multiple of len.
  * Called via softmmu_template.h when code areas are written to with
  * iothread mutex not held.
+ *
+ * Call with all @pages in the range [@start, @start + len[ locked.
  */
-void tb_invalidate_phys_page_fast(tb_page_addr_t start, int len)
+void tb_invalidate_phys_page_fast(struct page_collection *pages,
+                                  tb_page_addr_t start, int len)
 {
-    struct page_collection *pages;
     PageDesc *p;
 
 #if 0
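tb_invalidate_phys_range() above now locks the whole range once and
then works page by page under the already-held locks. Its shape, with
toy types and helpers (not QEMU's):

    #define PAGE_SIZE_ 4096UL
    #define PAGE_MASK_ (~(PAGE_SIZE_ - 1))

    struct collection_s { int dummy; };
    static struct collection_s locked_set;

    static struct collection_s *lock_range(unsigned long s, unsigned long e)
    {
        (void)s; (void)e;          /* lock all page descriptors in [s, e) */
        return &locked_set;
    }

    static void unlock_range(struct collection_s *c) { (void)c; }

    static void invalidate_one_page(struct collection_s *c,
                                    unsigned long start, unsigned long end)
    {
        (void)c; (void)start; (void)end;   /* drop TBs in [start, end) */
    }

    static void invalidate_range_model(unsigned long start, unsigned long end)
    {
        struct collection_s *pages = lock_range(start, end);
        unsigned long next;

        /* one call per page, all under the locks taken once above */
        for (next = (start & PAGE_MASK_) + PAGE_SIZE_;
             start < end;
             start = next, next += PAGE_SIZE_) {
            unsigned long bound = next < end ? next : end;
            invalidate_one_page(pages, start, bound);
        }
        unlock_range(pages);
    }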
@@ -2035,7 +1993,6 @@ void tb_invalidate_phys_page_fast(tb_page_addr_t start, int len)
         return;
     }
 
-    pages = page_collection_lock(start, start + len);
     assert_page_locked(p);
     if (!p->code_bitmap &&
         ++p->code_write_count >= SMC_BITMAP_USE_THRESHOLD) {
@@ -2054,7 +2011,6 @@ void tb_invalidate_phys_page_fast(tb_page_addr_t start, int len)
     do_invalidate:
         tb_invalidate_phys_page_range__locked(pages, p, start, start + len, 1);
     }
-    page_collection_unlock(pages);
 }
 #else
 /* Called with mmap_lock held. If pc is not 0 then it indicates the
@@ -2086,7 +2042,6 @@ static bool tb_invalidate_phys_page(tb_page_addr_t addr, uintptr_t pc)
         return false;
     }
 
-    tb_lock();
 #ifdef TARGET_HAS_PRECISE_SMC
     if (p->first_tb && pc != 0) {
         current_tb = tcg_tb_lookup(pc);
@@ -2118,12 +2073,9 @@ static bool tb_invalidate_phys_page(tb_page_addr_t addr, uintptr_t pc)
     if (current_tb_modified) {
         /* Force execution of one insn next time.  */
         cpu->cflags_next_tb = 1 | curr_cflags();
-        /* tb_lock will be reset after cpu_loop_exit_noexc longjmps
-         * back into the cpu_exec loop.
-         */
         return true;
     }
 #endif
-    tb_unlock();
 
     return false;
 }
@@ -2144,18 +2096,18 @@ void tb_invalidate_phys_addr(AddressSpace *as, hwaddr addr)
         return;
     }
     ram_addr = memory_region_get_ram_addr(mr) + addr;
-    tb_lock();
     tb_invalidate_phys_page_range(ram_addr, ram_addr + 1, 0);
-    tb_unlock();
     rcu_read_unlock();
 }
 #endif /* !defined(CONFIG_USER_ONLY) */
 
-/* Called with tb_lock held. */
+/* user-mode: call with mmap_lock held */
 void tb_check_watchpoint(CPUState *cpu)
 {
     TranslationBlock *tb;
 
+    assert_memory_lock();
+
     tb = tcg_tb_lookup(cpu->mem_io_pc);
     if (tb) {
         /* We can use retranslation to find the PC.  */
@@ -2189,7 +2141,6 @@ void cpu_io_recompile(CPUState *cpu, uintptr_t retaddr)
     TranslationBlock *tb;
     uint32_t n;
 
-    tb_lock();
     tb = tcg_tb_lookup(retaddr);
     if (!tb) {
         cpu_abort(cpu, "cpu_io_recompile: could not find TB for pc=%p",
@@ -2237,9 +2188,6 @@ void cpu_io_recompile(CPUState *cpu, uintptr_t retaddr)
      * repeating the fault, which is horribly inefficient.
      * Better would be to execute just this insn uncached, or generate a
      * second new TB.
-     *
-     * cpu_loop_exit_noexc will longjmp back to cpu_exec where the
-     * tb_lock gets reset.
      */
     cpu_loop_exit_noexc(cpu);
 }
diff --git a/exec.c b/exec.c
index 02b1efe..5988c0d 100644
--- a/exec.c
+++ b/exec.c
@@ -849,9 +849,7 @@ const char *parse_cpu_model(const char *cpu_model)
 static void breakpoint_invalidate(CPUState *cpu, target_ulong pc)
 {
     mmap_lock();
-    tb_lock();
     tb_invalidate_phys_page_range(pc, pc + 1, 0);
-    tb_unlock();
     mmap_unlock();
 }
 #else
@@ -2456,21 +2454,20 @@ void memory_notdirty_write_prepare(NotDirtyInfo *ndi,
     ndi->ram_addr = ram_addr;
     ndi->mem_vaddr = mem_vaddr;
     ndi->size = size;
-    ndi->locked = false;
+    ndi->pages = NULL;
 
     assert(tcg_enabled());
     if (!cpu_physical_memory_get_dirty_flag(ram_addr, DIRTY_MEMORY_CODE)) {
-        ndi->locked = true;
-        tb_lock();
-        tb_invalidate_phys_page_fast(ram_addr, size);
+        ndi->pages = page_collection_lock(ram_addr, ram_addr + size);
+        tb_invalidate_phys_page_fast(ndi->pages, ram_addr, size);
     }
 }
 
 /* Called within RCU critical section. */
 void memory_notdirty_write_complete(NotDirtyInfo *ndi)
 {
-    if (ndi->locked) {
-        tb_unlock();
+    if (ndi->pages) {
+        page_collection_unlock(ndi->pages);
     }
 
     /* Set both VGA and migration bits for simplicity and to remove
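The watchpoint hunk below has to release mmap_lock on each of its two
exit paths, since both end in a longjmp out of the function. The
discipline in miniature (toy code; assumes the caller did setjmp(loop_)):

    #include <pthread.h>
    #include <setjmp.h>
    #include <stdbool.h>

    static pthread_mutex_t lock_ = PTHREAD_MUTEX_INITIALIZER;
    static jmp_buf loop_;

    static void hit_watchpoint(bool stop_before_access)
    {
        pthread_mutex_lock(&lock_);
        /* ... consult/retranslate under the lock ... */
        if (stop_before_access) {
            pthread_mutex_unlock(&lock_);   /* drop before the longjmp */
            longjmp(loop_, 1);              /* cpu_loop_exit() */
        }
        pthread_mutex_unlock(&lock_);       /* drop before the longjmp */
        longjmp(loop_, 2);                  /* cpu_loop_exit_noexc() */
    }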
@@ -2571,18 +2568,16 @@ static void check_watchpoint(int offset, int len, MemTxAttrs attrs, int flags)
             }
             cpu->watchpoint_hit = wp;
 
-            /* Both tb_lock and iothread_mutex will be reset when
-             * cpu_loop_exit or cpu_loop_exit_noexc longjmp
-             * back into the cpu_exec main loop.
-             */
-            tb_lock();
+            mmap_lock();
             tb_check_watchpoint(cpu);
             if (wp->flags & BP_STOP_BEFORE_ACCESS) {
                 cpu->exception_index = EXCP_DEBUG;
+                mmap_unlock();
                 cpu_loop_exit(cpu);
             } else {
                 /* Force execution of one insn next time.  */
                 cpu->cflags_next_tb = 1 | curr_cflags();
+                mmap_unlock();
                 cpu_loop_exit_noexc(cpu);
             }
         }
@@ -2999,9 +2994,9 @@ static void invalidate_and_set_dirty(MemoryRegion *mr, hwaddr addr,
     }
     if (dirty_log_mask & (1 << DIRTY_MEMORY_CODE)) {
         assert(tcg_enabled());
-        tb_lock();
+        mmap_lock();
         tb_invalidate_phys_range(addr, addr + length);
-        tb_unlock();
+        mmap_unlock();
         dirty_log_mask &= ~(1 << DIRTY_MEMORY_CODE);
     }
     cpu_physical_memory_set_dirty_range(addr, length, dirty_log_mask);
diff --git a/linux-user/main.c b/linux-user/main.c
index 8907a84..93fd6ef 100644
--- a/linux-user/main.c
+++ b/linux-user/main.c
@@ -131,7 +131,6 @@ void fork_start(void)
 {
     start_exclusive();
     mmap_fork_start();
-    qemu_mutex_lock(&tb_ctx.tb_lock);
     cpu_list_lock();
 }
 
@@ -147,14 +146,12 @@ void fork_end(int child)
                 QTAILQ_REMOVE(&cpus, cpu, node);
             }
         }
-        qemu_mutex_init(&tb_ctx.tb_lock);
         qemu_init_cpu_list();
         gdbserver_fork(thread_cpu);
         /* qemu_init_cpu_list() takes care of reinitializing the
          * exclusive state, so we don't need to end_exclusive() here.
          */
     } else {
-        qemu_mutex_unlock(&tb_ctx.tb_lock);
         cpu_list_unlock();
         end_exclusive();
     }
-- 
2.7.4