From nobody Mon Feb 9 19:10:44 2026
From: Richard Henderson
To: qemu-devel@nongnu.org
Date: Wed, 31 Oct 2018 12:21:10 +0000
Message-Id: <20181031122119.1669-2-richard.henderson@linaro.org>
X-Mailer: git-send-email 2.17.2
In-Reply-To: <20181031122119.1669-1-richard.henderson@linaro.org>
References: <20181031122119.1669-1-richard.henderson@linaro.org>
Subject: [Qemu-devel] [PULL 01/10] cputlb: Move tlb_lock to CPUTLBCommon
Cc: peter.maydell@linaro.org

This is the first of several moves to reduce the size of the
CPU_COMMON_TLB macro and improve some locality of reference.

Tested-by: Emilio G. Cota
Reviewed-by: Emilio G. Cota
Signed-off-by: Richard Henderson
---
 include/exec/cpu-defs.h | 17 ++++++++++++---
 accel/tcg/cputlb.c      | 48 ++++++++++++++++++++---------------------
 2 files changed, 38 insertions(+), 27 deletions(-)

diff --git a/include/exec/cpu-defs.h b/include/exec/cpu-defs.h
index 4ff62f32bf..9005923b4d 100644
--- a/include/exec/cpu-defs.h
+++ b/include/exec/cpu-defs.h
@@ -141,10 +141,21 @@ typedef struct CPUIOTLBEntry {
     MemTxAttrs attrs;
 } CPUIOTLBEntry;
 
+/*
+ * Data elements that are shared between all MMU modes.
+ */
+typedef struct CPUTLBCommon {
+    /* lock serializes updates to tlb_table and tlb_v_table */
+    QemuSpin lock;
+} CPUTLBCommon;
+
+/*
+ * The meaning of each of the MMU modes is defined in the target code.
+ * Note that NB_MMU_MODES is not yet defined; we can only reference it
+ * within preprocessor defines that will be expanded later.
+ */
 #define CPU_COMMON_TLB \
-    /* The meaning of the MMU modes is defined in the target code. */  \
-    /* tlb_lock serializes updates to tlb_table and tlb_v_table */     \
-    QemuSpin tlb_lock;                                                  \
+    CPUTLBCommon tlb_c;                                                 \
     CPUTLBEntry tlb_table[NB_MMU_MODES][CPU_TLB_SIZE];                  \
     CPUTLBEntry tlb_v_table[NB_MMU_MODES][CPU_VTLB_SIZE];               \
     CPUIOTLBEntry iotlb[NB_MMU_MODES][CPU_TLB_SIZE];                    \
diff --git a/accel/tcg/cputlb.c b/accel/tcg/cputlb.c
index af57aca5e4..d4e07056be 100644
--- a/accel/tcg/cputlb.c
+++ b/accel/tcg/cputlb.c
@@ -78,7 +78,7 @@ void tlb_init(CPUState *cpu)
 {
     CPUArchState *env = cpu->env_ptr;
 
-    qemu_spin_init(&env->tlb_lock);
+    qemu_spin_init(&env->tlb_c.lock);
 }
 
 /* flush_all_helper: run fn across all cpus
@@ -134,15 +134,15 @@ static void tlb_flush_nocheck(CPUState *cpu)
     tlb_debug("(count: %zu)\n", tlb_flush_count());
 
     /*
-     * tlb_table/tlb_v_table updates from any thread must hold tlb_lock.
+     * tlb_table/tlb_v_table updates from any thread must hold tlb_c.lock.
      * However, updates from the owner thread (as is the case here; see the
      * above assert_cpu_is_self) do not need atomic_set because all reads
      * that do not hold the lock are performed by the same owner thread.
*/ - qemu_spin_lock(&env->tlb_lock); + qemu_spin_lock(&env->tlb_c.lock); memset(env->tlb_table, -1, sizeof(env->tlb_table)); memset(env->tlb_v_table, -1, sizeof(env->tlb_v_table)); - qemu_spin_unlock(&env->tlb_lock); + qemu_spin_unlock(&env->tlb_c.lock); =20 cpu_tb_jmp_cache_clear(cpu); =20 @@ -195,7 +195,7 @@ static void tlb_flush_by_mmuidx_async_work(CPUState *cp= u, run_on_cpu_data data) =20 tlb_debug("start: mmu_idx:0x%04lx\n", mmu_idx_bitmask); =20 - qemu_spin_lock(&env->tlb_lock); + qemu_spin_lock(&env->tlb_c.lock); for (mmu_idx =3D 0; mmu_idx < NB_MMU_MODES; mmu_idx++) { =20 if (test_bit(mmu_idx, &mmu_idx_bitmask)) { @@ -205,7 +205,7 @@ static void tlb_flush_by_mmuidx_async_work(CPUState *cp= u, run_on_cpu_data data) memset(env->tlb_v_table[mmu_idx], -1, sizeof(env->tlb_v_table[= 0])); } } - qemu_spin_unlock(&env->tlb_lock); + qemu_spin_unlock(&env->tlb_c.lock); =20 cpu_tb_jmp_cache_clear(cpu); =20 @@ -262,7 +262,7 @@ static inline bool tlb_hit_page_anyprot(CPUTLBEntry *tl= b_entry, tlb_hit_page(tlb_entry->addr_code, page); } =20 -/* Called with tlb_lock held */ +/* Called with tlb_c.lock held */ static inline void tlb_flush_entry_locked(CPUTLBEntry *tlb_entry, target_ulong page) { @@ -271,7 +271,7 @@ static inline void tlb_flush_entry_locked(CPUTLBEntry *= tlb_entry, } } =20 -/* Called with tlb_lock held */ +/* Called with tlb_c.lock held */ static inline void tlb_flush_vtlb_page_locked(CPUArchState *env, int mmu_i= dx, target_ulong page) { @@ -304,12 +304,12 @@ static void tlb_flush_page_async_work(CPUState *cpu, = run_on_cpu_data data) } =20 addr &=3D TARGET_PAGE_MASK; - qemu_spin_lock(&env->tlb_lock); + qemu_spin_lock(&env->tlb_c.lock); for (mmu_idx =3D 0; mmu_idx < NB_MMU_MODES; mmu_idx++) { tlb_flush_entry_locked(tlb_entry(env, mmu_idx, addr), addr); tlb_flush_vtlb_page_locked(env, mmu_idx, addr); } - qemu_spin_unlock(&env->tlb_lock); + qemu_spin_unlock(&env->tlb_c.lock); =20 tb_flush_jmp_cache(cpu, addr); } @@ -345,14 +345,14 @@ static void tlb_flush_page_by_mmuidx_async_work(CPUSt= ate *cpu, tlb_debug("flush page addr:"TARGET_FMT_lx" mmu_idx:0x%lx\n", addr, mmu_idx_bitmap); =20 - qemu_spin_lock(&env->tlb_lock); + qemu_spin_lock(&env->tlb_c.lock); for (mmu_idx =3D 0; mmu_idx < NB_MMU_MODES; mmu_idx++) { if (test_bit(mmu_idx, &mmu_idx_bitmap)) { tlb_flush_entry_locked(tlb_entry(env, mmu_idx, addr), addr); tlb_flush_vtlb_page_locked(env, mmu_idx, addr); } } - qemu_spin_unlock(&env->tlb_lock); + qemu_spin_unlock(&env->tlb_c.lock); =20 tb_flush_jmp_cache(cpu, addr); } @@ -479,7 +479,7 @@ void tlb_unprotect_code(ram_addr_t ram_addr) * te->addr_write with atomic_set. We don't need to worry about this for * oversized guests as MTTCG is disabled for them. * - * Called with tlb_lock held. + * Called with tlb_c.lock held. */ static void tlb_reset_dirty_range_locked(CPUTLBEntry *tlb_entry, uintptr_t start, uintptr_t length) @@ -501,7 +501,7 @@ static void tlb_reset_dirty_range_locked(CPUTLBEntry *t= lb_entry, } =20 /* - * Called with tlb_lock held. + * Called with tlb_c.lock held. * Called only from the vCPU context, i.e. the TLB's owner thread. */ static inline void copy_tlb_helper_locked(CPUTLBEntry *d, const CPUTLBEntr= y *s) @@ -511,7 +511,7 @@ static inline void copy_tlb_helper_locked(CPUTLBEntry *= d, const CPUTLBEntry *s) =20 /* This is a cross vCPU call (i.e. another vCPU resetting the flags of * the target vCPU). - * We must take tlb_lock to avoid racing with another vCPU update. The only + * We must take tlb_c.lock to avoid racing with another vCPU update. 
The o= nly * thing actually updated is the target TLB entry ->addr_write flags. */ void tlb_reset_dirty(CPUState *cpu, ram_addr_t start1, ram_addr_t length) @@ -521,7 +521,7 @@ void tlb_reset_dirty(CPUState *cpu, ram_addr_t start1, = ram_addr_t length) int mmu_idx; =20 env =3D cpu->env_ptr; - qemu_spin_lock(&env->tlb_lock); + qemu_spin_lock(&env->tlb_c.lock); for (mmu_idx =3D 0; mmu_idx < NB_MMU_MODES; mmu_idx++) { unsigned int i; =20 @@ -535,10 +535,10 @@ void tlb_reset_dirty(CPUState *cpu, ram_addr_t start1= , ram_addr_t length) length); } } - qemu_spin_unlock(&env->tlb_lock); + qemu_spin_unlock(&env->tlb_c.lock); } =20 -/* Called with tlb_lock held */ +/* Called with tlb_c.lock held */ static inline void tlb_set_dirty1_locked(CPUTLBEntry *tlb_entry, target_ulong vaddr) { @@ -557,7 +557,7 @@ void tlb_set_dirty(CPUState *cpu, target_ulong vaddr) assert_cpu_is_self(cpu); =20 vaddr &=3D TARGET_PAGE_MASK; - qemu_spin_lock(&env->tlb_lock); + qemu_spin_lock(&env->tlb_c.lock); for (mmu_idx =3D 0; mmu_idx < NB_MMU_MODES; mmu_idx++) { tlb_set_dirty1_locked(tlb_entry(env, mmu_idx, vaddr), vaddr); } @@ -568,7 +568,7 @@ void tlb_set_dirty(CPUState *cpu, target_ulong vaddr) tlb_set_dirty1_locked(&env->tlb_v_table[mmu_idx][k], vaddr); } } - qemu_spin_unlock(&env->tlb_lock); + qemu_spin_unlock(&env->tlb_c.lock); } =20 /* Our TLB does not support large pages, so remember the area covered by @@ -669,7 +669,7 @@ void tlb_set_page_with_attrs(CPUState *cpu, target_ulon= g vaddr, * a longer critical section, but this is not a concern since the TLB = lock * is unlikely to be contended. */ - qemu_spin_lock(&env->tlb_lock); + qemu_spin_lock(&env->tlb_c.lock); =20 /* Make sure there's no cached translation for the new page. */ tlb_flush_vtlb_page_locked(env, mmu_idx, vaddr_page); @@ -736,7 +736,7 @@ void tlb_set_page_with_attrs(CPUState *cpu, target_ulon= g vaddr, } =20 copy_tlb_helper_locked(te, &tn); - qemu_spin_unlock(&env->tlb_lock); + qemu_spin_unlock(&env->tlb_c.lock); } =20 /* Add a new TLB entry, but without specifying the memory @@ -917,11 +917,11 @@ static bool victim_tlb_hit(CPUArchState *env, size_t = mmu_idx, size_t index, /* Found entry in victim tlb, swap tlb and iotlb. 
             */
            CPUTLBEntry tmptlb, *tlb = &env->tlb_table[mmu_idx][index];
 
-           qemu_spin_lock(&env->tlb_lock);
+           qemu_spin_lock(&env->tlb_c.lock);
            copy_tlb_helper_locked(&tmptlb, tlb);
            copy_tlb_helper_locked(tlb, vtlb);
            copy_tlb_helper_locked(vtlb, &tmptlb);
-           qemu_spin_unlock(&env->tlb_lock);
+           qemu_spin_unlock(&env->tlb_c.lock);
 
            CPUIOTLBEntry tmpio, *io = &env->iotlb[mmu_idx][index];
            CPUIOTLBEntry *vio = &env->iotlb_v[mmu_idx][vidx];
-- 
2.17.2

From nobody Mon Feb 9 19:10:44 2026
From: Richard Henderson
To: qemu-devel@nongnu.org
Date: Wed, 31 Oct 2018 12:21:11 +0000
Message-Id: <20181031122119.1669-3-richard.henderson@linaro.org>
X-Mailer: git-send-email 2.17.2
In-Reply-To: <20181031122119.1669-1-richard.henderson@linaro.org>
References: <20181031122119.1669-1-richard.henderson@linaro.org>
Subject: [Qemu-devel] [PULL 02/10] cputlb: Remove tcg_enabled hack from tlb_flush_nocheck
Cc: peter.maydell@linaro.org

The bugs this was working around were fixed with commits
  022d6378c7fd  target/unicore32: remove tlb_flush from uc32_init_fn
  6e11beecfde0  target/alpha: remove tlb_flush from alpha_cpu_initfn

Tested-by: Emilio G. Cota
Reviewed-by: Emilio G. Cota
Signed-off-by: Richard Henderson
---
 accel/tcg/cputlb.c | 7 -------
 1 file changed, 7 deletions(-)

diff --git a/accel/tcg/cputlb.c b/accel/tcg/cputlb.c
index d4e07056be..d080769c83 100644
--- a/accel/tcg/cputlb.c
+++ b/accel/tcg/cputlb.c
@@ -122,13 +122,6 @@ static void tlb_flush_nocheck(CPUState *cpu)
 {
     CPUArchState *env = cpu->env_ptr;
 
-    /* The QOM tests will trigger tlb_flushes without setting up TCG
-     * so we bug out here in that case.
-     */
-    if (!tcg_enabled()) {
-        return;
-    }
-
     assert_cpu_is_self(cpu);
     atomic_set(&env->tlb_flush_count, env->tlb_flush_count + 1);
     tlb_debug("(count: %zu)\n", tlb_flush_count());
-- 
2.17.2

From nobody Mon Feb 9 19:10:44 2026
From: Richard Henderson
To: qemu-devel@nongnu.org
Date: Wed, 31 Oct 2018 12:21:12 +0000
Message-Id: <20181031122119.1669-4-richard.henderson@linaro.org>
X-Mailer: git-send-email 2.17.2
In-Reply-To: <20181031122119.1669-1-richard.henderson@linaro.org>
References: <20181031122119.1669-1-richard.henderson@linaro.org>
Subject: [Qemu-devel] [PULL 03/10] cputlb: Move cpu->pending_tlb_flush to env->tlb_c.pending_flush
Cc: peter.maydell@linaro.org

Protect it with the tlb_lock instead of using atomics.  The move puts
it in or near the same cacheline as the lock; using the lock means we
don't need a second atomic operation in order to perform the update.
This makes it cheap to also update pending_flush in
tlb_flush_by_mmuidx_async_work.

Tested-by: Emilio G. Cota
Reviewed-by: Emilio G. Cota
Signed-off-by: Richard Henderson
---
 include/exec/cpu-defs.h |  8 +++++++-
 include/qom/cpu.h       |  6 ------
 accel/tcg/cputlb.c      | 35 +++++++++++++++++++++++------------
 3 files changed, 30 insertions(+), 19 deletions(-)

diff --git a/include/exec/cpu-defs.h b/include/exec/cpu-defs.h
index 9005923b4d..659c73d2a1 100644
--- a/include/exec/cpu-defs.h
+++ b/include/exec/cpu-defs.h
@@ -145,8 +145,14 @@ typedef struct CPUIOTLBEntry {
  * Data elements that are shared between all MMU modes.
  */
 typedef struct CPUTLBCommon {
-    /* lock serializes updates to tlb_table and tlb_v_table */
+    /* Serialize updates to tlb_table and tlb_v_table, and others as noted.
*/ QemuSpin lock; + /* + * Within pending_flush, for each bit N, there exists an outstanding + * cross-cpu flush for mmu_idx N. Further cross-cpu flushes to that + * mmu_idx may be discarded. Protected by tlb_c.lock. + */ + uint16_t pending_flush; } CPUTLBCommon; =20 /* diff --git a/include/qom/cpu.h b/include/qom/cpu.h index def0c64308..1396f53e5b 100644 --- a/include/qom/cpu.h +++ b/include/qom/cpu.h @@ -429,12 +429,6 @@ struct CPUState { =20 struct hax_vcpu_state *hax_vcpu; =20 - /* The pending_tlb_flush flag is set and cleared atomically to - * avoid potential races. The aim of the flag is to avoid - * unnecessary flushes. - */ - uint16_t pending_tlb_flush; - int hvf_fd; =20 /* track IOMMUs whose translations we've cached in the TCG TLB */ diff --git a/accel/tcg/cputlb.c b/accel/tcg/cputlb.c index d080769c83..abcd08a8a2 100644 --- a/accel/tcg/cputlb.c +++ b/accel/tcg/cputlb.c @@ -133,6 +133,7 @@ static void tlb_flush_nocheck(CPUState *cpu) * that do not hold the lock are performed by the same owner thread. */ qemu_spin_lock(&env->tlb_c.lock); + env->tlb_c.pending_flush =3D 0; memset(env->tlb_table, -1, sizeof(env->tlb_table)); memset(env->tlb_v_table, -1, sizeof(env->tlb_v_table)); qemu_spin_unlock(&env->tlb_c.lock); @@ -142,8 +143,6 @@ static void tlb_flush_nocheck(CPUState *cpu) env->vtlb_index =3D 0; env->tlb_flush_addr =3D -1; env->tlb_flush_mask =3D 0; - - atomic_mb_set(&cpu->pending_tlb_flush, 0); } =20 static void tlb_flush_global_async_work(CPUState *cpu, run_on_cpu_data dat= a) @@ -154,8 +153,15 @@ static void tlb_flush_global_async_work(CPUState *cpu,= run_on_cpu_data data) void tlb_flush(CPUState *cpu) { if (cpu->created && !qemu_cpu_is_self(cpu)) { - if (atomic_mb_read(&cpu->pending_tlb_flush) !=3D ALL_MMUIDX_BITS) { - atomic_mb_set(&cpu->pending_tlb_flush, ALL_MMUIDX_BITS); + CPUArchState *env =3D cpu->env_ptr; + uint16_t pending; + + qemu_spin_lock(&env->tlb_c.lock); + pending =3D env->tlb_c.pending_flush; + env->tlb_c.pending_flush =3D ALL_MMUIDX_BITS; + qemu_spin_unlock(&env->tlb_c.lock); + + if (pending !=3D ALL_MMUIDX_BITS) { async_run_on_cpu(cpu, tlb_flush_global_async_work, RUN_ON_CPU_NULL); } @@ -189,6 +195,8 @@ static void tlb_flush_by_mmuidx_async_work(CPUState *cp= u, run_on_cpu_data data) tlb_debug("start: mmu_idx:0x%04lx\n", mmu_idx_bitmask); =20 qemu_spin_lock(&env->tlb_c.lock); + env->tlb_c.pending_flush &=3D ~mmu_idx_bitmask; + for (mmu_idx =3D 0; mmu_idx < NB_MMU_MODES; mmu_idx++) { =20 if (test_bit(mmu_idx, &mmu_idx_bitmask)) { @@ -210,19 +218,22 @@ void tlb_flush_by_mmuidx(CPUState *cpu, uint16_t idxm= ap) tlb_debug("mmu_idx: 0x%" PRIx16 "\n", idxmap); =20 if (!qemu_cpu_is_self(cpu)) { - uint16_t pending_flushes =3D idxmap; - pending_flushes &=3D ~atomic_mb_read(&cpu->pending_tlb_flush); + CPUArchState *env =3D cpu->env_ptr; + uint16_t pending, to_clean; =20 - if (pending_flushes) { - tlb_debug("reduced mmu_idx: 0x%" PRIx16 "\n", pending_flushes); + qemu_spin_lock(&env->tlb_c.lock); + pending =3D env->tlb_c.pending_flush; + to_clean =3D idxmap & ~pending; + env->tlb_c.pending_flush =3D pending | idxmap; + qemu_spin_unlock(&env->tlb_c.lock); =20 - atomic_or(&cpu->pending_tlb_flush, pending_flushes); + if (to_clean) { + tlb_debug("reduced mmu_idx: 0x%" PRIx16 "\n", to_clean); async_run_on_cpu(cpu, tlb_flush_by_mmuidx_async_work, - RUN_ON_CPU_HOST_INT(pending_flushes)); + RUN_ON_CPU_HOST_INT(to_clean)); } } else { - tlb_flush_by_mmuidx_async_work(cpu, - RUN_ON_CPU_HOST_INT(idxmap)); + tlb_flush_by_mmuidx_async_work(cpu, RUN_ON_CPU_HOST_INT(idxmap)); } } =20 
-- 
2.17.2

From nobody Mon Feb 9 19:10:44 2026
From: Richard Henderson
To: qemu-devel@nongnu.org
Date: Wed, 31 Oct 2018 12:21:13 +0000
Message-Id: <20181031122119.1669-5-richard.henderson@linaro.org>
X-Mailer: git-send-email 2.17.2
In-Reply-To: <20181031122119.1669-1-richard.henderson@linaro.org>
References: <20181031122119.1669-1-richard.henderson@linaro.org>
Subject: [Qemu-devel] [PULL 04/10] cputlb: Split large page tracking per mmu_idx
Cc: peter.maydell@linaro.org

The set of large pages in the kernel is probably not the same as the
set of large pages in the application.  Forcing one range to cover
both will flush more often than necessary.

This allows tlb_flush_page_async_work to flush just the one mmu_idx
implicated, which in turn allows us to remove
tlb_check_page_and_flush_by_mmuidx_async_work.

Tested-by: Emilio G. Cota
Reviewed-by: Emilio G. Cota
Signed-off-by: Richard Henderson
---
 include/exec/cpu-defs.h |  14 +++-
 accel/tcg/cputlb.c      | 138 ++++++++++++++++++----------------------
 2 files changed, 73 insertions(+), 79 deletions(-)

diff --git a/include/exec/cpu-defs.h b/include/exec/cpu-defs.h
index 659c73d2a1..df8ae18d9d 100644
--- a/include/exec/cpu-defs.h
+++ b/include/exec/cpu-defs.h
@@ -141,6 +141,17 @@ typedef struct CPUIOTLBEntry {
     MemTxAttrs attrs;
 } CPUIOTLBEntry;
 
+typedef struct CPUTLBDesc {
+    /*
+     * Describe a region covering all of the large pages allocated
+     * into the tlb.  When any page within this region is flushed,
+     * we must flush the entire tlb.  The region is matched if
+     * (addr & large_page_mask) == large_page_addr.
+     */
+    target_ulong large_page_addr;
+    target_ulong large_page_mask;
+} CPUTLBDesc;
+
 /*
  * Data elements that are shared between all MMU modes.
*/ @@ -162,13 +173,12 @@ typedef struct CPUTLBCommon { */ #define CPU_COMMON_TLB \ CPUTLBCommon tlb_c; \ + CPUTLBDesc tlb_d[NB_MMU_MODES]; \ CPUTLBEntry tlb_table[NB_MMU_MODES][CPU_TLB_SIZE]; \ CPUTLBEntry tlb_v_table[NB_MMU_MODES][CPU_VTLB_SIZE]; \ CPUIOTLBEntry iotlb[NB_MMU_MODES][CPU_TLB_SIZE]; \ CPUIOTLBEntry iotlb_v[NB_MMU_MODES][CPU_VTLB_SIZE]; \ size_t tlb_flush_count; \ - target_ulong tlb_flush_addr; \ - target_ulong tlb_flush_mask; \ target_ulong vtlb_index; \ =20 #else diff --git a/accel/tcg/cputlb.c b/accel/tcg/cputlb.c index abcd08a8a2..8060ec99d7 100644 --- a/accel/tcg/cputlb.c +++ b/accel/tcg/cputlb.c @@ -113,6 +113,14 @@ size_t tlb_flush_count(void) return count; } =20 +static void tlb_flush_one_mmuidx_locked(CPUArchState *env, int mmu_idx) +{ + memset(env->tlb_table[mmu_idx], -1, sizeof(env->tlb_table[0])); + memset(env->tlb_v_table[mmu_idx], -1, sizeof(env->tlb_v_table[0])); + env->tlb_d[mmu_idx].large_page_addr =3D -1; + env->tlb_d[mmu_idx].large_page_mask =3D -1; +} + /* This is OK because CPU architectures generally permit an * implementation to drop entries from the TLB at any time, so * flushing more entries than required is only an efficiency issue, @@ -121,6 +129,7 @@ size_t tlb_flush_count(void) static void tlb_flush_nocheck(CPUState *cpu) { CPUArchState *env =3D cpu->env_ptr; + int mmu_idx; =20 assert_cpu_is_self(cpu); atomic_set(&env->tlb_flush_count, env->tlb_flush_count + 1); @@ -134,15 +143,14 @@ static void tlb_flush_nocheck(CPUState *cpu) */ qemu_spin_lock(&env->tlb_c.lock); env->tlb_c.pending_flush =3D 0; - memset(env->tlb_table, -1, sizeof(env->tlb_table)); - memset(env->tlb_v_table, -1, sizeof(env->tlb_v_table)); + for (mmu_idx =3D 0; mmu_idx < NB_MMU_MODES; mmu_idx++) { + tlb_flush_one_mmuidx_locked(env, mmu_idx); + } qemu_spin_unlock(&env->tlb_c.lock); =20 cpu_tb_jmp_cache_clear(cpu); =20 env->vtlb_index =3D 0; - env->tlb_flush_addr =3D -1; - env->tlb_flush_mask =3D 0; } =20 static void tlb_flush_global_async_work(CPUState *cpu, run_on_cpu_data dat= a) @@ -192,25 +200,19 @@ static void tlb_flush_by_mmuidx_async_work(CPUState *= cpu, run_on_cpu_data data) =20 assert_cpu_is_self(cpu); =20 - tlb_debug("start: mmu_idx:0x%04lx\n", mmu_idx_bitmask); + tlb_debug("mmu_idx:0x%04lx\n", mmu_idx_bitmask); =20 qemu_spin_lock(&env->tlb_c.lock); env->tlb_c.pending_flush &=3D ~mmu_idx_bitmask; =20 for (mmu_idx =3D 0; mmu_idx < NB_MMU_MODES; mmu_idx++) { - if (test_bit(mmu_idx, &mmu_idx_bitmask)) { - tlb_debug("%d\n", mmu_idx); - - memset(env->tlb_table[mmu_idx], -1, sizeof(env->tlb_table[0])); - memset(env->tlb_v_table[mmu_idx], -1, sizeof(env->tlb_v_table[= 0])); + tlb_flush_one_mmuidx_locked(env, mmu_idx); } } qemu_spin_unlock(&env->tlb_c.lock); =20 cpu_tb_jmp_cache_clear(cpu); - - tlb_debug("done\n"); } =20 void tlb_flush_by_mmuidx(CPUState *cpu, uint16_t idxmap) @@ -287,6 +289,24 @@ static inline void tlb_flush_vtlb_page_locked(CPUArchS= tate *env, int mmu_idx, } } =20 +static void tlb_flush_page_locked(CPUArchState *env, int midx, + target_ulong page) +{ + target_ulong lp_addr =3D env->tlb_d[midx].large_page_addr; + target_ulong lp_mask =3D env->tlb_d[midx].large_page_mask; + + /* Check if we need to flush due to large pages. 
*/ + if ((page & lp_mask) =3D=3D lp_addr) { + tlb_debug("forcing full flush midx %d (" + TARGET_FMT_lx "/" TARGET_FMT_lx ")\n", + midx, lp_addr, lp_mask); + tlb_flush_one_mmuidx_locked(env, midx); + } else { + tlb_flush_entry_locked(tlb_entry(env, midx, page), page); + tlb_flush_vtlb_page_locked(env, midx, page); + } +} + static void tlb_flush_page_async_work(CPUState *cpu, run_on_cpu_data data) { CPUArchState *env =3D cpu->env_ptr; @@ -295,23 +315,12 @@ static void tlb_flush_page_async_work(CPUState *cpu, = run_on_cpu_data data) =20 assert_cpu_is_self(cpu); =20 - tlb_debug("page :" TARGET_FMT_lx "\n", addr); - - /* Check if we need to flush due to large pages. */ - if ((addr & env->tlb_flush_mask) =3D=3D env->tlb_flush_addr) { - tlb_debug("forcing full flush (" - TARGET_FMT_lx "/" TARGET_FMT_lx ")\n", - env->tlb_flush_addr, env->tlb_flush_mask); - - tlb_flush(cpu); - return; - } + tlb_debug("page addr:" TARGET_FMT_lx "\n", addr); =20 addr &=3D TARGET_PAGE_MASK; qemu_spin_lock(&env->tlb_c.lock); for (mmu_idx =3D 0; mmu_idx < NB_MMU_MODES; mmu_idx++) { - tlb_flush_entry_locked(tlb_entry(env, mmu_idx, addr), addr); - tlb_flush_vtlb_page_locked(env, mmu_idx, addr); + tlb_flush_page_locked(env, mmu_idx, addr); } qemu_spin_unlock(&env->tlb_c.lock); =20 @@ -346,14 +355,13 @@ static void tlb_flush_page_by_mmuidx_async_work(CPUSt= ate *cpu, =20 assert_cpu_is_self(cpu); =20 - tlb_debug("flush page addr:"TARGET_FMT_lx" mmu_idx:0x%lx\n", + tlb_debug("page addr:" TARGET_FMT_lx " mmu_map:0x%lx\n", addr, mmu_idx_bitmap); =20 qemu_spin_lock(&env->tlb_c.lock); for (mmu_idx =3D 0; mmu_idx < NB_MMU_MODES; mmu_idx++) { if (test_bit(mmu_idx, &mmu_idx_bitmap)) { - tlb_flush_entry_locked(tlb_entry(env, mmu_idx, addr), addr); - tlb_flush_vtlb_page_locked(env, mmu_idx, addr); + tlb_flush_page_locked(env, mmu_idx, addr); } } qemu_spin_unlock(&env->tlb_c.lock); @@ -361,29 +369,6 @@ static void tlb_flush_page_by_mmuidx_async_work(CPUSta= te *cpu, tb_flush_jmp_cache(cpu, addr); } =20 -static void tlb_check_page_and_flush_by_mmuidx_async_work(CPUState *cpu, - run_on_cpu_data = data) -{ - CPUArchState *env =3D cpu->env_ptr; - target_ulong addr_and_mmuidx =3D (target_ulong) data.target_ptr; - target_ulong addr =3D addr_and_mmuidx & TARGET_PAGE_MASK; - unsigned long mmu_idx_bitmap =3D addr_and_mmuidx & ALL_MMUIDX_BITS; - - tlb_debug("addr:"TARGET_FMT_lx" mmu_idx: %04lx\n", addr, mmu_idx_bitma= p); - - /* Check if we need to flush due to large pages. 
*/ - if ((addr & env->tlb_flush_mask) =3D=3D env->tlb_flush_addr) { - tlb_debug("forced full flush (" - TARGET_FMT_lx "/" TARGET_FMT_lx ")\n", - env->tlb_flush_addr, env->tlb_flush_mask); - - tlb_flush_by_mmuidx_async_work(cpu, - RUN_ON_CPU_HOST_INT(mmu_idx_bitmap)= ); - } else { - tlb_flush_page_by_mmuidx_async_work(cpu, data); - } -} - void tlb_flush_page_by_mmuidx(CPUState *cpu, target_ulong addr, uint16_t i= dxmap) { target_ulong addr_and_mmu_idx; @@ -395,10 +380,10 @@ void tlb_flush_page_by_mmuidx(CPUState *cpu, target_u= long addr, uint16_t idxmap) addr_and_mmu_idx |=3D idxmap; =20 if (!qemu_cpu_is_self(cpu)) { - async_run_on_cpu(cpu, tlb_check_page_and_flush_by_mmuidx_async_wor= k, + async_run_on_cpu(cpu, tlb_flush_page_by_mmuidx_async_work, RUN_ON_CPU_TARGET_PTR(addr_and_mmu_idx)); } else { - tlb_check_page_and_flush_by_mmuidx_async_work( + tlb_flush_page_by_mmuidx_async_work( cpu, RUN_ON_CPU_TARGET_PTR(addr_and_mmu_idx)); } } @@ -406,7 +391,7 @@ void tlb_flush_page_by_mmuidx(CPUState *cpu, target_ulo= ng addr, uint16_t idxmap) void tlb_flush_page_by_mmuidx_all_cpus(CPUState *src_cpu, target_ulong add= r, uint16_t idxmap) { - const run_on_cpu_func fn =3D tlb_check_page_and_flush_by_mmuidx_async_= work; + const run_on_cpu_func fn =3D tlb_flush_page_by_mmuidx_async_work; target_ulong addr_and_mmu_idx; =20 tlb_debug("addr: "TARGET_FMT_lx" mmu_idx:%"PRIx16"\n", addr, idxmap); @@ -420,10 +405,10 @@ void tlb_flush_page_by_mmuidx_all_cpus(CPUState *src_= cpu, target_ulong addr, } =20 void tlb_flush_page_by_mmuidx_all_cpus_synced(CPUState *src_cpu, - target_ulong a= ddr, - uint16_t idxma= p) + target_ulong addr, + uint16_t idxmap) { - const run_on_cpu_func fn =3D tlb_check_page_and_flush_by_mmuidx_async_= work; + const run_on_cpu_func fn =3D tlb_flush_page_by_mmuidx_async_work; target_ulong addr_and_mmu_idx; =20 tlb_debug("addr: "TARGET_FMT_lx" mmu_idx:%"PRIx16"\n", addr, idxmap); @@ -577,25 +562,26 @@ void tlb_set_dirty(CPUState *cpu, target_ulong vaddr) =20 /* Our TLB does not support large pages, so remember the area covered by large pages and trigger a full TLB flush if these are invalidated. */ -static void tlb_add_large_page(CPUArchState *env, target_ulong vaddr, - target_ulong size) +static void tlb_add_large_page(CPUArchState *env, int mmu_idx, + target_ulong vaddr, target_ulong size) { - target_ulong mask =3D ~(size - 1); + target_ulong lp_addr =3D env->tlb_d[mmu_idx].large_page_addr; + target_ulong lp_mask =3D ~(size - 1); =20 - if (env->tlb_flush_addr =3D=3D (target_ulong)-1) { - env->tlb_flush_addr =3D vaddr & mask; - env->tlb_flush_mask =3D mask; - return; + if (lp_addr =3D=3D (target_ulong)-1) { + /* No previous large page. */ + lp_addr =3D vaddr; + } else { + /* Extend the existing region to include the new page. + This is a compromise between unnecessary flushes and + the cost of maintaining a full variable size TLB. */ + lp_mask &=3D env->tlb_d[mmu_idx].large_page_mask; + while (((lp_addr ^ vaddr) & lp_mask) !=3D 0) { + lp_mask <<=3D 1; + } } - /* Extend the existing region to include the new page. - This is a compromise between unnecessary flushes and the cost - of maintaining a full variable size TLB. */ - mask &=3D env->tlb_flush_mask; - while (((env->tlb_flush_addr ^ vaddr) & mask) !=3D 0) { - mask <<=3D 1; - } - env->tlb_flush_addr &=3D mask; - env->tlb_flush_mask =3D mask; + env->tlb_d[mmu_idx].large_page_addr =3D lp_addr & lp_mask; + env->tlb_d[mmu_idx].large_page_mask =3D lp_mask; } =20 /* Add a new TLB entry. 
   At most one entry for a given virtual address
@@ -622,12 +608,10 @@ void tlb_set_page_with_attrs(CPUState *cpu, target_ulong vaddr,
 
     assert_cpu_is_self(cpu);
 
-    if (size < TARGET_PAGE_SIZE) {
+    if (size <= TARGET_PAGE_SIZE) {
         sz = TARGET_PAGE_SIZE;
     } else {
-        if (size > TARGET_PAGE_SIZE) {
-            tlb_add_large_page(env, vaddr, size);
-        }
+        tlb_add_large_page(env, mmu_idx, vaddr, size);
         sz = size;
     }
     vaddr_page = vaddr & TARGET_PAGE_MASK;
-- 
2.17.2

From nobody Mon Feb 9 19:10:44 2026
From: Richard Henderson
To: qemu-devel@nongnu.org
Date: Wed, 31 Oct 2018 12:21:14 +0000
Message-Id: <20181031122119.1669-6-richard.henderson@linaro.org>
X-Mailer: git-send-email 2.17.2
In-Reply-To: <20181031122119.1669-1-richard.henderson@linaro.org>
References: <20181031122119.1669-1-richard.henderson@linaro.org>
Subject: [Qemu-devel] [PULL 05/10] cputlb: Move env->vtlb_index to env->tlb_d.vindex
Cc: peter.maydell@linaro.org

The rest of the tlb victim cache is per-tlb; the next-use index
should be as well.

Tested-by: Emilio G. Cota
Reviewed-by: Emilio G. Cota
Signed-off-by: Richard Henderson
---
 include/exec/cpu-defs.h | 5 +++--
 accel/tcg/cputlb.c      | 5 ++---
 2 files changed, 5 insertions(+), 5 deletions(-)

diff --git a/include/exec/cpu-defs.h b/include/exec/cpu-defs.h
index df8ae18d9d..181c0dbfa4 100644
--- a/include/exec/cpu-defs.h
+++ b/include/exec/cpu-defs.h
@@ -150,6 +150,8 @@ typedef struct CPUTLBDesc {
      */
     target_ulong large_page_addr;
     target_ulong large_page_mask;
+    /* The next index to use in the tlb victim table.
      */
+    size_t vindex;
 } CPUTLBDesc;
 
 /*
@@ -178,8 +180,7 @@ typedef struct CPUTLBCommon {
     CPUTLBEntry tlb_v_table[NB_MMU_MODES][CPU_VTLB_SIZE];               \
     CPUIOTLBEntry iotlb[NB_MMU_MODES][CPU_TLB_SIZE];                    \
     CPUIOTLBEntry iotlb_v[NB_MMU_MODES][CPU_VTLB_SIZE];                 \
-    size_t tlb_flush_count;                                             \
-    target_ulong vtlb_index;                                            \
+    size_t tlb_flush_count;
 
 #else
 
diff --git a/accel/tcg/cputlb.c b/accel/tcg/cputlb.c
index 8060ec99d7..2cd3886fd6 100644
--- a/accel/tcg/cputlb.c
+++ b/accel/tcg/cputlb.c
@@ -119,6 +119,7 @@ static void tlb_flush_one_mmuidx_locked(CPUArchState *env, int mmu_idx)
     memset(env->tlb_v_table[mmu_idx], -1, sizeof(env->tlb_v_table[0]));
     env->tlb_d[mmu_idx].large_page_addr = -1;
     env->tlb_d[mmu_idx].large_page_mask = -1;
+    env->tlb_d[mmu_idx].vindex = 0;
 }
 
 /* This is OK because CPU architectures generally permit an
@@ -149,8 +150,6 @@ static void tlb_flush_nocheck(CPUState *cpu)
     qemu_spin_unlock(&env->tlb_c.lock);
 
     cpu_tb_jmp_cache_clear(cpu);
-
-    env->vtlb_index = 0;
 }
 
 static void tlb_flush_global_async_work(CPUState *cpu, run_on_cpu_data data)
@@ -667,7 +666,7 @@ void tlb_set_page_with_attrs(CPUState *cpu, target_ulong vaddr,
      * different page; otherwise just overwrite the stale data.
      */
     if (!tlb_hit_page_anyprot(te, vaddr_page)) {
-        unsigned vidx = env->vtlb_index++ % CPU_VTLB_SIZE;
+        unsigned vidx = env->tlb_d[mmu_idx].vindex++ % CPU_VTLB_SIZE;
         CPUTLBEntry *tv = &env->tlb_v_table[mmu_idx][vidx];
 
         /* Evict the old entry into the victim tlb. */
-- 
2.17.2

From nobody Mon Feb 9 19:10:44 2026
From: Richard Henderson
To: qemu-devel@nongnu.org
Date: Wed, 31 Oct 2018 12:21:15 +0000
Message-Id: <20181031122119.1669-7-richard.henderson@linaro.org>
X-Mailer: git-send-email 2.17.2
In-Reply-To: <20181031122119.1669-1-richard.henderson@linaro.org>
References: <20181031122119.1669-1-richard.henderson@linaro.org>
Subject: [Qemu-devel] [PULL 06/10] cputlb: Merge tlb_flush_nocheck into tlb_flush_by_mmuidx_async_work
Cc: peter.maydell@linaro.org

The difference between the two sets of APIs is now minuscule.

This allows tlb_flush, tlb_flush_all_cpus, and tlb_flush_all_cpus_synced
to be merged with their corresponding by_mmuidx functions as well.  For
accounting, consider mmu_idx_bitmask = ALL_MMUIDX_BITS to be a full flush.

Tested-by: Emilio G. Cota
Reviewed-by: Emilio G. Cota
Signed-off-by: Richard Henderson
---
 accel/tcg/cputlb.c | 93 +++++++++++-----------------------------------
 1 file changed, 21 insertions(+), 72 deletions(-)

diff --git a/accel/tcg/cputlb.c b/accel/tcg/cputlb.c
index 2cd3886fd6..a3a417e81a 100644
--- a/accel/tcg/cputlb.c
+++ b/accel/tcg/cputlb.c
@@ -122,75 +122,6 @@ static void tlb_flush_one_mmuidx_locked(CPUArchState *env, int mmu_idx)
     env->tlb_d[mmu_idx].vindex = 0;
 }
 
-/* This is OK because CPU architectures generally permit an
- * implementation to drop entries from the TLB at any time, so
- * flushing more entries than required is only an efficiency issue,
- * not a correctness issue.
- */ -static void tlb_flush_nocheck(CPUState *cpu) -{ - CPUArchState *env =3D cpu->env_ptr; - int mmu_idx; - - assert_cpu_is_self(cpu); - atomic_set(&env->tlb_flush_count, env->tlb_flush_count + 1); - tlb_debug("(count: %zu)\n", tlb_flush_count()); - - /* - * tlb_table/tlb_v_table updates from any thread must hold tlb_c.lock. - * However, updates from the owner thread (as is the case here; see the - * above assert_cpu_is_self) do not need atomic_set because all reads - * that do not hold the lock are performed by the same owner thread. - */ - qemu_spin_lock(&env->tlb_c.lock); - env->tlb_c.pending_flush =3D 0; - for (mmu_idx =3D 0; mmu_idx < NB_MMU_MODES; mmu_idx++) { - tlb_flush_one_mmuidx_locked(env, mmu_idx); - } - qemu_spin_unlock(&env->tlb_c.lock); - - cpu_tb_jmp_cache_clear(cpu); -} - -static void tlb_flush_global_async_work(CPUState *cpu, run_on_cpu_data dat= a) -{ - tlb_flush_nocheck(cpu); -} - -void tlb_flush(CPUState *cpu) -{ - if (cpu->created && !qemu_cpu_is_self(cpu)) { - CPUArchState *env =3D cpu->env_ptr; - uint16_t pending; - - qemu_spin_lock(&env->tlb_c.lock); - pending =3D env->tlb_c.pending_flush; - env->tlb_c.pending_flush =3D ALL_MMUIDX_BITS; - qemu_spin_unlock(&env->tlb_c.lock); - - if (pending !=3D ALL_MMUIDX_BITS) { - async_run_on_cpu(cpu, tlb_flush_global_async_work, - RUN_ON_CPU_NULL); - } - } else { - tlb_flush_nocheck(cpu); - } -} - -void tlb_flush_all_cpus(CPUState *src_cpu) -{ - const run_on_cpu_func fn =3D tlb_flush_global_async_work; - flush_all_helper(src_cpu, fn, RUN_ON_CPU_NULL); - fn(src_cpu, RUN_ON_CPU_NULL); -} - -void tlb_flush_all_cpus_synced(CPUState *src_cpu) -{ - const run_on_cpu_func fn =3D tlb_flush_global_async_work; - flush_all_helper(src_cpu, fn, RUN_ON_CPU_NULL); - async_safe_run_on_cpu(src_cpu, fn, RUN_ON_CPU_NULL); -} - static void tlb_flush_by_mmuidx_async_work(CPUState *cpu, run_on_cpu_data = data) { CPUArchState *env =3D cpu->env_ptr; @@ -212,13 +143,17 @@ static void tlb_flush_by_mmuidx_async_work(CPUState *= cpu, run_on_cpu_data data) qemu_spin_unlock(&env->tlb_c.lock); =20 cpu_tb_jmp_cache_clear(cpu); + + if (mmu_idx_bitmask =3D=3D ALL_MMUIDX_BITS) { + atomic_set(&env->tlb_flush_count, env->tlb_flush_count + 1); + } } =20 void tlb_flush_by_mmuidx(CPUState *cpu, uint16_t idxmap) { tlb_debug("mmu_idx: 0x%" PRIx16 "\n", idxmap); =20 - if (!qemu_cpu_is_self(cpu)) { + if (cpu->created && !qemu_cpu_is_self(cpu)) { CPUArchState *env =3D cpu->env_ptr; uint16_t pending, to_clean; =20 @@ -238,6 +173,11 @@ void tlb_flush_by_mmuidx(CPUState *cpu, uint16_t idxma= p) } } =20 +void tlb_flush(CPUState *cpu) +{ + tlb_flush_by_mmuidx(cpu, ALL_MMUIDX_BITS); +} + void tlb_flush_by_mmuidx_all_cpus(CPUState *src_cpu, uint16_t idxmap) { const run_on_cpu_func fn =3D tlb_flush_by_mmuidx_async_work; @@ -248,8 +188,12 @@ void tlb_flush_by_mmuidx_all_cpus(CPUState *src_cpu, u= int16_t idxmap) fn(src_cpu, RUN_ON_CPU_HOST_INT(idxmap)); } =20 -void tlb_flush_by_mmuidx_all_cpus_synced(CPUState *src_cpu, - uint16_t idxmap) +void tlb_flush_all_cpus(CPUState *src_cpu) +{ + tlb_flush_by_mmuidx_all_cpus(src_cpu, ALL_MMUIDX_BITS); +} + +void tlb_flush_by_mmuidx_all_cpus_synced(CPUState *src_cpu, uint16_t idxma= p) { const run_on_cpu_func fn =3D tlb_flush_by_mmuidx_async_work; =20 @@ -259,6 +203,11 @@ void tlb_flush_by_mmuidx_all_cpus_synced(CPUState *src= _cpu, async_safe_run_on_cpu(src_cpu, fn, RUN_ON_CPU_HOST_INT(idxmap)); } =20 +void tlb_flush_all_cpus_synced(CPUState *src_cpu) +{ + tlb_flush_by_mmuidx_all_cpus_synced(src_cpu, ALL_MMUIDX_BITS); +} + static inline bool 
tlb_hit_page_anyprot(CPUTLBEntry *tlb_entry, target_ulong page)
 {
-- 
2.17.2

From nobody Mon Feb 9 19:10:44 2026
From: Richard Henderson
To: qemu-devel@nongnu.org
Date: Wed, 31 Oct 2018 12:21:16 +0000
Message-Id: <20181031122119.1669-8-richard.henderson@linaro.org>
X-Mailer: git-send-email 2.17.2
In-Reply-To: <20181031122119.1669-1-richard.henderson@linaro.org>
References: <20181031122119.1669-1-richard.henderson@linaro.org>
From nobody Mon Feb 9 19:10:44 2026
From: Richard Henderson
To: qemu-devel@nongnu.org
Cc: peter.maydell@linaro.org
Subject: [Qemu-devel] [PULL 07/10] cputlb: Merge tlb_flush_page into tlb_flush_page_by_mmuidx
Date: Wed, 31 Oct 2018 12:21:16 +0000
Message-Id: <20181031122119.1669-8-richard.henderson@linaro.org>
In-Reply-To: <20181031122119.1669-1-richard.henderson@linaro.org>
References: <20181031122119.1669-1-richard.henderson@linaro.org>

The difference between the two sets of APIs is now minuscule.

Tested-by: Emilio G. Cota
Reviewed-by: Emilio G. Cota
Signed-off-by: Richard Henderson
---
 accel/tcg/cputlb.c | 58 ++++++++++------------------------------------
 1 file changed, 12 insertions(+), 46 deletions(-)

diff --git a/accel/tcg/cputlb.c b/accel/tcg/cputlb.c
index a3a417e81a..1f7764de94 100644
--- a/accel/tcg/cputlb.c
+++ b/accel/tcg/cputlb.c
@@ -255,38 +255,6 @@ static void tlb_flush_page_locked(CPUArchState *env, int midx,
     }
 }
 
-static void tlb_flush_page_async_work(CPUState *cpu, run_on_cpu_data data)
-{
-    CPUArchState *env = cpu->env_ptr;
-    target_ulong addr = (target_ulong) data.target_ptr;
-    int mmu_idx;
-
-    assert_cpu_is_self(cpu);
-
-    tlb_debug("page addr:" TARGET_FMT_lx "\n", addr);
-
-    addr &= TARGET_PAGE_MASK;
-    qemu_spin_lock(&env->tlb_c.lock);
-    for (mmu_idx = 0; mmu_idx < NB_MMU_MODES; mmu_idx++) {
-        tlb_flush_page_locked(env, mmu_idx, addr);
-    }
-    qemu_spin_unlock(&env->tlb_c.lock);
-
-    tb_flush_jmp_cache(cpu, addr);
-}
-
-void tlb_flush_page(CPUState *cpu, target_ulong addr)
-{
-    tlb_debug("page :" TARGET_FMT_lx "\n", addr);
-
-    if (!qemu_cpu_is_self(cpu)) {
-        async_run_on_cpu(cpu, tlb_flush_page_async_work,
-                         RUN_ON_CPU_TARGET_PTR(addr));
-    } else {
-        tlb_flush_page_async_work(cpu, RUN_ON_CPU_TARGET_PTR(addr));
-    }
-}
-
 /* As we are going to hijack the bottom bits of the page address for a
  * mmuidx bit mask we need to fail to build if we can't do that
  */
@@ -336,6 +304,11 @@ void tlb_flush_page_by_mmuidx(CPUState *cpu, target_ulong addr, uint16_t idxmap)
     }
 }
 
+void tlb_flush_page(CPUState *cpu, target_ulong addr)
+{
+    tlb_flush_page_by_mmuidx(cpu, addr, ALL_MMUIDX_BITS);
+}
+
 void tlb_flush_page_by_mmuidx_all_cpus(CPUState *src_cpu, target_ulong addr,
                                        uint16_t idxmap)
 {
@@ -352,6 +325,11 @@ void tlb_flush_page_by_mmuidx_all_cpus(CPUState *src_cpu, target_ulong addr,
     fn(src_cpu, RUN_ON_CPU_TARGET_PTR(addr_and_mmu_idx));
 }
 
+void tlb_flush_page_all_cpus(CPUState *src, target_ulong addr)
+{
+    tlb_flush_page_by_mmuidx_all_cpus(src, addr, ALL_MMUIDX_BITS);
+}
+
 void tlb_flush_page_by_mmuidx_all_cpus_synced(CPUState *src_cpu,
                                               target_ulong addr,
                                               uint16_t idxmap)
@@ -369,21 +347,9 @@ void tlb_flush_page_by_mmuidx_all_cpus_synced(CPUState *src_cpu,
     async_safe_run_on_cpu(src_cpu, fn, RUN_ON_CPU_TARGET_PTR(addr_and_mmu_idx));
 }
 
-void tlb_flush_page_all_cpus(CPUState *src, target_ulong addr)
+void tlb_flush_page_all_cpus_synced(CPUState *src, target_ulong addr)
 {
-    const run_on_cpu_func fn = tlb_flush_page_async_work;
-
-    flush_all_helper(src, fn, RUN_ON_CPU_TARGET_PTR(addr));
-    fn(src, RUN_ON_CPU_TARGET_PTR(addr));
-}
-
-void tlb_flush_page_all_cpus_synced(CPUState *src,
-                                    target_ulong addr)
-{
-    const run_on_cpu_func fn = tlb_flush_page_async_work;
-
-    flush_all_helper(src, fn, RUN_ON_CPU_TARGET_PTR(addr));
-    async_safe_run_on_cpu(src, fn, RUN_ON_CPU_TARGET_PTR(addr));
+    tlb_flush_page_by_mmuidx_all_cpus_synced(src, addr, ALL_MMUIDX_BITS);
 }
 
 /* update the TLBs so that writes to code in the virtual page 'addr'
-- 
2.17.2
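The cross-CPU page-flush paths that remain pack the page address and the mmu_idx mask into a single run_on_cpu_data by hijacking the low bits of the page-aligned address, as the build-time comment kept above notes. Below is a standalone sketch of that encoding under invented names; EXAMPLE_PAGE_MASK stands in for TARGET_PAGE_MASK, the address and mask values are made up, and none of this is code from the patch.

#include <stdint.h>
#include <stdio.h>

/* Stand-ins for QEMU's target_ulong / TARGET_PAGE_MASK in this sketch. */
typedef uint64_t example_target_ulong;
#define EXAMPLE_PAGE_BITS 12
#define EXAMPLE_PAGE_MASK (~((example_target_ulong)(1 << EXAMPLE_PAGE_BITS) - 1))

int main(void)
{
    example_target_ulong addr = 0x40001234;   /* page-unaligned guest address */
    uint16_t idxmap = 0x5;                    /* hypothetical mmu_idx 0 and 2 */

    /* Encode: page-align the address, then stash the mask in the low bits. */
    example_target_ulong packed = (addr & EXAMPLE_PAGE_MASK) | idxmap;

    /* Decode on the receiving vCPU. */
    example_target_ulong page = packed & EXAMPLE_PAGE_MASK;
    uint16_t map = packed & ~EXAMPLE_PAGE_MASK;

    printf("page=0x%llx idxmap=0x%x\n", (unsigned long long)page, map);
    return 0;
}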
From nobody Mon Feb 9 19:10:44 2026
From: Richard Henderson
To: qemu-devel@nongnu.org
Date: Wed, 31 Oct 2018 12:21:17 +0000
Message-Id: <20181031122119.1669-9-richard.henderson@linaro.org>
In-Reply-To: <20181031122119.1669-1-richard.henderson@linaro.org>
References: <20181031122119.1669-1-richard.henderson@linaro.org>
Subject: [Qemu-devel] [PULL 08/10] cputlb: Count "partial" and "elided" tlb flushes
Cc: peter.maydell@linaro.org

Our only statistic so far was "full" tlb flushes, where all mmu_idx
are flushed at the same time.

Now count "partial" tlb flushes where sets of mmu_idx are flushed,
but the set is not maximal.  Account one per mmu_idx flushed, as
that is the unit of work performed.

We don't actually count elided flushes yet, but go ahead and change
the interface presented to the monitor all at once.

Tested-by: Emilio G. Cota
Reviewed-by: Emilio G. Cota
Signed-off-by: Richard Henderson
---
 include/exec/cpu-defs.h   | 12 ++++++++++--
 include/exec/cputlb.h     |  2 +-
 accel/tcg/cputlb.c        | 18 +++++++++++++-----
 accel/tcg/translate-all.c |  8 ++++++--
 4 files changed, 30 insertions(+), 10 deletions(-)

diff --git a/include/exec/cpu-defs.h b/include/exec/cpu-defs.h
index 181c0dbfa4..c7b501d627 100644
--- a/include/exec/cpu-defs.h
+++ b/include/exec/cpu-defs.h
@@ -166,6 +166,15 @@ typedef struct CPUTLBCommon {
      * mmu_idx may be discarded.  Protected by tlb_c.lock.
      */
     uint16_t pending_flush;
+
+    /*
+     * Statistics.  These are not lock protected, but are read and
+     * written atomically.  This allows the monitor to print a snapshot
+     * of the stats without interfering with the cpu.
+     */
+    size_t full_flush_count;
+    size_t part_flush_count;
+    size_t elide_flush_count;
 } CPUTLBCommon;
 
 /*
@@ -179,8 +188,7 @@ typedef struct CPUTLBCommon {
     CPUTLBEntry tlb_table[NB_MMU_MODES][CPU_TLB_SIZE];            \
     CPUTLBEntry tlb_v_table[NB_MMU_MODES][CPU_VTLB_SIZE];         \
     CPUIOTLBEntry iotlb[NB_MMU_MODES][CPU_TLB_SIZE];              \
-    CPUIOTLBEntry iotlb_v[NB_MMU_MODES][CPU_VTLB_SIZE];           \
-    size_t tlb_flush_count;
+    CPUIOTLBEntry iotlb_v[NB_MMU_MODES][CPU_VTLB_SIZE];
 
 #else
 
diff --git a/include/exec/cputlb.h b/include/exec/cputlb.h
index c91db211bc..5373188be3 100644
--- a/include/exec/cputlb.h
+++ b/include/exec/cputlb.h
@@ -23,6 +23,6 @@
 /* cputlb.c */
 void tlb_protect_code(ram_addr_t ram_addr);
 void tlb_unprotect_code(ram_addr_t ram_addr);
-size_t tlb_flush_count(void);
+void tlb_flush_counts(size_t *full, size_t *part, size_t *elide);
 #endif
 #endif
diff --git a/accel/tcg/cputlb.c b/accel/tcg/cputlb.c
index 1f7764de94..e60628c350 100644
--- a/accel/tcg/cputlb.c
+++ b/accel/tcg/cputlb.c
@@ -100,17 +100,21 @@ static void flush_all_helper(CPUState *src, run_on_cpu_func fn,
     }
 }
 
-size_t tlb_flush_count(void)
+void tlb_flush_counts(size_t *pfull, size_t *ppart, size_t *pelide)
 {
     CPUState *cpu;
-    size_t count = 0;
+    size_t full = 0, part = 0, elide = 0;
 
     CPU_FOREACH(cpu) {
         CPUArchState *env = cpu->env_ptr;
 
-        count += atomic_read(&env->tlb_flush_count);
+        full += atomic_read(&env->tlb_c.full_flush_count);
+        part += atomic_read(&env->tlb_c.part_flush_count);
+        elide += atomic_read(&env->tlb_c.elide_flush_count);
     }
-    return count;
+    *pfull = full;
+    *ppart = part;
+    *pelide = elide;
 }
 
 static void tlb_flush_one_mmuidx_locked(CPUArchState *env, int mmu_idx)
@@ -145,7 +149,11 @@ static void tlb_flush_by_mmuidx_async_work(CPUState *cpu, run_on_cpu_data data)
     cpu_tb_jmp_cache_clear(cpu);
 
     if (mmu_idx_bitmask == ALL_MMUIDX_BITS) {
-        atomic_set(&env->tlb_flush_count, env->tlb_flush_count + 1);
+        atomic_set(&env->tlb_c.full_flush_count,
+                   env->tlb_c.full_flush_count + 1);
+    } else {
+        atomic_set(&env->tlb_c.part_flush_count,
+                   env->tlb_c.part_flush_count + ctpop16(mmu_idx_bitmask));
     }
 }
 
diff --git a/accel/tcg/translate-all.c b/accel/tcg/translate-all.c
index 356dcd0948..639f0b2728 100644
--- a/accel/tcg/translate-all.c
+++ b/accel/tcg/translate-all.c
@@ -2290,7 +2290,7 @@ void dump_exec_info(FILE *f, fprintf_function cpu_fprintf)
 {
     struct tb_tree_stats tst = {};
     struct qht_stats hst;
-    size_t nb_tbs;
+    size_t nb_tbs, flush_full, flush_part, flush_elide;
 
     tcg_tb_foreach(tb_tree_stats_iter, &tst);
     nb_tbs = tst.nb_tbs;
@@ -2326,7 +2326,11 @@ void dump_exec_info(FILE *f, fprintf_function cpu_fprintf)
     cpu_fprintf(f, "TB flush count %u\n",
                 atomic_read(&tb_ctx.tb_flush_count));
     cpu_fprintf(f, "TB invalidate count %zu\n", tcg_tb_phys_invalidate_count());
-    cpu_fprintf(f, "TLB flush count %zu\n", tlb_flush_count());
+
+    tlb_flush_counts(&flush_full, &flush_part, &flush_elide);
+    cpu_fprintf(f, "TLB full flushes %zu\n", flush_full);
+    cpu_fprintf(f, "TLB partial flushes %zu\n", flush_part);
+    cpu_fprintf(f, "TLB elided flushes %zu\n", flush_elide);
     tcg_dump_info(f, cpu_fprintf);
 }
 
-- 
2.17.2
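For context, a sketch of how a monitor-style consumer might use the new statistics interface. Only the tlb_flush_counts() prototype comes from the patch; the surrounding function, its name, and the include choices are assumptions made for illustration.

/* Hypothetical caller; only tlb_flush_counts() itself is from the patch. */
#include <stdio.h>
#include "qemu/osdep.h"
#include "exec/cputlb.h"

static void example_dump_tlb_stats(FILE *f)
{
    size_t full, part, elide;

    /* One atomic-read snapshot per vCPU, summed across all of them. */
    tlb_flush_counts(&full, &part, &elide);

    fprintf(f, "TLB full flushes    %zu\n", full);
    fprintf(f, "TLB partial flushes %zu\n", part);
    fprintf(f, "TLB elided flushes  %zu\n", elide);
}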
From nobody Mon Feb 9 19:10:44 2026
From: Richard Henderson
To: qemu-devel@nongnu.org
Cc: peter.maydell@linaro.org
Subject: [Qemu-devel] [PULL 09/10] cputlb: Filter flushes on already clean tlbs
Date: Wed, 31 Oct 2018 12:21:18 +0000
Message-Id: <20181031122119.1669-10-richard.henderson@linaro.org>
In-Reply-To: <20181031122119.1669-1-richard.henderson@linaro.org>
References: <20181031122119.1669-1-richard.henderson@linaro.org>

Especially for guests with large numbers of tlbs, like ARM or PPC,
we may well not use all of them in between flush operations.
Remember which tlbs have been used since the last flush, and avoid
any useless flushing.

Tested-by: Emilio G. Cota
Reviewed-by: Emilio G. Cota
Signed-off-by: Richard Henderson
---
 include/exec/cpu-defs.h |  7 ++++++-
 accel/tcg/cputlb.c      | 35 +++++++++++++++++++++++++----------
 2 files changed, 31 insertions(+), 11 deletions(-)

diff --git a/include/exec/cpu-defs.h b/include/exec/cpu-defs.h
index c7b501d627..ca0fea8b27 100644
--- a/include/exec/cpu-defs.h
+++ b/include/exec/cpu-defs.h
@@ -166,7 +166,12 @@ typedef struct CPUTLBCommon {
      * mmu_idx may be discarded.  Protected by tlb_c.lock.
      */
     uint16_t pending_flush;
-
+    /*
+     * Within dirty, for each bit N, modifications have been made to
+     * mmu_idx N since the last time that mmu_idx was flushed.
+     * Protected by tlb_c.lock.
+     */
+    uint16_t dirty;
     /*
      * Statistics.  These are not lock protected, but are read and
      * written atomically.  This allows the monitor to print a snapshot
diff --git a/accel/tcg/cputlb.c b/accel/tcg/cputlb.c
index e60628c350..f6c37bc4db 100644
--- a/accel/tcg/cputlb.c
+++ b/accel/tcg/cputlb.c
@@ -79,6 +79,9 @@ void tlb_init(CPUState *cpu)
     CPUArchState *env = cpu->env_ptr;
 
     qemu_spin_init(&env->tlb_c.lock);
+
+    /* Ensure that cpu_reset performs a full flush.  */
+    env->tlb_c.dirty = ALL_MMUIDX_BITS;
 }
 
 /* flush_all_helper: run fn across all cpus
@@ -129,31 +132,40 @@ static void tlb_flush_one_mmuidx_locked(CPUArchState *env, int mmu_idx)
 static void tlb_flush_by_mmuidx_async_work(CPUState *cpu, run_on_cpu_data data)
 {
     CPUArchState *env = cpu->env_ptr;
-    unsigned long mmu_idx_bitmask = data.host_int;
-    int mmu_idx;
+    uint16_t asked = data.host_int;
+    uint16_t all_dirty, work, to_clean;
 
     assert_cpu_is_self(cpu);
 
-    tlb_debug("mmu_idx:0x%04lx\n", mmu_idx_bitmask);
+    tlb_debug("mmu_idx:0x%04" PRIx16 "\n", asked);
 
     qemu_spin_lock(&env->tlb_c.lock);
-    env->tlb_c.pending_flush &= ~mmu_idx_bitmask;
 
-    for (mmu_idx = 0; mmu_idx < NB_MMU_MODES; mmu_idx++) {
-        if (test_bit(mmu_idx, &mmu_idx_bitmask)) {
-            tlb_flush_one_mmuidx_locked(env, mmu_idx);
-        }
+    all_dirty = env->tlb_c.dirty;
+    to_clean = asked & all_dirty;
+    all_dirty &= ~to_clean;
+    env->tlb_c.dirty = all_dirty;
+
+    for (work = to_clean; work != 0; work &= work - 1) {
+        int mmu_idx = ctz32(work);
+        tlb_flush_one_mmuidx_locked(env, mmu_idx);
     }
+
     qemu_spin_unlock(&env->tlb_c.lock);
 
     cpu_tb_jmp_cache_clear(cpu);
 
-    if (mmu_idx_bitmask == ALL_MMUIDX_BITS) {
+    if (to_clean == ALL_MMUIDX_BITS) {
         atomic_set(&env->tlb_c.full_flush_count,
                    env->tlb_c.full_flush_count + 1);
     } else {
         atomic_set(&env->tlb_c.part_flush_count,
-                   env->tlb_c.part_flush_count + ctpop16(mmu_idx_bitmask));
+                   env->tlb_c.part_flush_count + ctpop16(to_clean));
+        if (to_clean != asked) {
+            atomic_set(&env->tlb_c.elide_flush_count,
+                       env->tlb_c.elide_flush_count +
+                       ctpop16(asked & ~to_clean));
+        }
     }
 }
 
@@ -581,6 +593,9 @@ void tlb_set_page_with_attrs(CPUState *cpu, target_ulong vaddr,
      */
     qemu_spin_lock(&env->tlb_c.lock);
 
+    /* Note that the tlb is no longer clean.  */
+    env->tlb_c.dirty |= 1 << mmu_idx;
+
     /* Make sure there's no cached translation for the new page.
      */
     tlb_flush_vtlb_page_locked(env, mmu_idx, vaddr_page);
 
-- 
2.17.2
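The reworked flush loop above visits only the mmu_idx values that are both requested and still dirty, peeling one set bit per iteration with the work &= work - 1 idiom. Below is a standalone model of that filtering, not code from the patch: the mask values are invented and compiler builtins stand in for QEMU's ctz32() and ctpop16().

#include <stdint.h>
#include <stdio.h>

int main(void)
{
    uint16_t dirty = 0x0029;            /* hypothetical: mmu_idx 0, 3 and 5 touched */
    uint16_t asked = 0x000f;            /* hypothetical request: flush mmu_idx 0..3 */
    uint16_t to_clean = asked & dirty;  /* only 0 and 3 actually need flushing */
    uint16_t elided = asked & ~to_clean;

    for (uint16_t work = to_clean; work != 0; work &= work - 1) {
        int mmu_idx = __builtin_ctz(work);        /* stand-in for ctz32() */
        printf("flush mmu_idx %d\n", mmu_idx);
    }
    printf("elided %d flushes\n", __builtin_popcount(elided));  /* stand-in for ctpop16() */
    return 0;
}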
From nobody Mon Feb 9 19:10:44 2026
From: Richard Henderson
To: qemu-devel@nongnu.org
Cc: peter.maydell@linaro.org
Subject: [Qemu-devel] [PULL 10/10] cputlb: Remove tlb_c.pending_flushes
Date: Wed, 31 Oct 2018 12:21:19 +0000
Message-Id: <20181031122119.1669-11-richard.henderson@linaro.org>
In-Reply-To: <20181031122119.1669-1-richard.henderson@linaro.org>
References: <20181031122119.1669-1-richard.henderson@linaro.org>

This is essentially redundant with tlb_c.dirty.

Tested-by: Emilio G. Cota
Reviewed-by: Emilio G. Cota
Signed-off-by: Richard Henderson
---
 include/exec/cpu-defs.h |  6 ------
 accel/tcg/cputlb.c      | 16 ++--------------
 2 files changed, 2 insertions(+), 20 deletions(-)

diff --git a/include/exec/cpu-defs.h b/include/exec/cpu-defs.h
index ca0fea8b27..6a60f94a41 100644
--- a/include/exec/cpu-defs.h
+++ b/include/exec/cpu-defs.h
@@ -160,12 +160,6 @@ typedef struct CPUTLBDesc {
 typedef struct CPUTLBCommon {
     /* Serialize updates to tlb_table and tlb_v_table, and others as noted.  */
     QemuSpin lock;
-    /*
-     * Within pending_flush, for each bit N, there exists an outstanding
-     * cross-cpu flush for mmu_idx N.  Further cross-cpu flushes to that
-     * mmu_idx may be discarded.  Protected by tlb_c.lock.
-     */
-    uint16_t pending_flush;
     /*
      * Within dirty, for each bit N, modifications have been made to
      * mmu_idx N since the last time that mmu_idx was flushed.
diff --git a/accel/tcg/cputlb.c b/accel/tcg/cputlb.c
index f6c37bc4db..af6bd8ccf9 100644
--- a/accel/tcg/cputlb.c
+++ b/accel/tcg/cputlb.c
@@ -174,20 +174,8 @@ void tlb_flush_by_mmuidx(CPUState *cpu, uint16_t idxmap)
     tlb_debug("mmu_idx: 0x%" PRIx16 "\n", idxmap);
 
     if (cpu->created && !qemu_cpu_is_self(cpu)) {
-        CPUArchState *env = cpu->env_ptr;
-        uint16_t pending, to_clean;
-
-        qemu_spin_lock(&env->tlb_c.lock);
-        pending = env->tlb_c.pending_flush;
-        to_clean = idxmap & ~pending;
-        env->tlb_c.pending_flush = pending | idxmap;
-        qemu_spin_unlock(&env->tlb_c.lock);
-
-        if (to_clean) {
-            tlb_debug("reduced mmu_idx: 0x%" PRIx16 "\n", to_clean);
-            async_run_on_cpu(cpu, tlb_flush_by_mmuidx_async_work,
-                             RUN_ON_CPU_HOST_INT(to_clean));
-        }
+        async_run_on_cpu(cpu, tlb_flush_by_mmuidx_async_work,
+                         RUN_ON_CPU_HOST_INT(idxmap));
     } else {
         tlb_flush_by_mmuidx_async_work(cpu, RUN_ON_CPU_HOST_INT(idxmap));
     }
-- 
2.17.2
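A toy model, not taken from the patch, of why the requester-side pending_flush bookkeeping can go: with the per-vCPU dirty mask from the previous patch, a duplicate cross-CPU request simply finds nothing left to clean and is accounted as elided, so coalescing requests before queuing them buys very little. All names and values below are invented, and the accounting is simplified relative to the real code.

#include <stdint.h>
#include <stdio.h>

/* Toy per-vCPU state: just a dirty mask and an elide counter. */
static uint16_t dirty_mask = 0x3;   /* pretend mmu_idx 0 and 1 were used */
static unsigned elide_count;

static void example_async_flush(uint16_t asked)
{
    uint16_t to_clean = asked & dirty_mask;

    dirty_mask &= ~to_clean;
    elide_count += __builtin_popcount(asked & ~to_clean);
    printf("asked 0x%x: flushed 0x%x, elided so far %u\n",
           asked, to_clean, elide_count);
}

int main(void)
{
    /* Two identical requests arrive back to back; without pending_flush
     * bookkeeping the second one still costs almost nothing, it only
     * bumps the elide count on the target vCPU.
     */
    example_async_flush(0x3);
    example_async_flush(0x3);
    return 0;
}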