From: Richard Henderson
To: qemu-devel@nongnu.org
Date: Wed, 31 Oct 2018 12:21:15 +0000
Message-Id: <20181031122119.1669-7-richard.henderson@linaro.org>
In-Reply-To: <20181031122119.1669-1-richard.henderson@linaro.org>
References: <20181031122119.1669-1-richard.henderson@linaro.org>
Subject: [Qemu-devel] [PULL 06/10] cputlb: Merge tlb_flush_nocheck into tlb_flush_by_mmuidx_async_work
Cc: peter.maydell@linaro.org

The difference between the two sets of APIs is now minuscule.  This
allows tlb_flush, tlb_flush_all_cpus, and tlb_flush_all_cpus_synced
to be merged with their corresponding by_mmuidx functions as well.
For accounting, consider mmu_idx_bitmask = ALL_MMUIDX_BITS to be a
full flush.

Tested-by: Emilio G. Cota
Reviewed-by: Emilio G. Cota
Signed-off-by: Richard Henderson
---
 accel/tcg/cputlb.c | 93 +++++++++++-----------------------------------
 1 file changed, 21 insertions(+), 72 deletions(-)

diff --git a/accel/tcg/cputlb.c b/accel/tcg/cputlb.c
index 2cd3886fd6..a3a417e81a 100644
--- a/accel/tcg/cputlb.c
+++ b/accel/tcg/cputlb.c
@@ -122,75 +122,6 @@ static void tlb_flush_one_mmuidx_locked(CPUArchState *env, int mmu_idx)
     env->tlb_d[mmu_idx].vindex = 0;
 }
 
-/* This is OK because CPU architectures generally permit an
- * implementation to drop entries from the TLB at any time, so
- * flushing more entries than required is only an efficiency issue,
- * not a correctness issue.
- */
-static void tlb_flush_nocheck(CPUState *cpu)
-{
-    CPUArchState *env = cpu->env_ptr;
-    int mmu_idx;
-
-    assert_cpu_is_self(cpu);
-    atomic_set(&env->tlb_flush_count, env->tlb_flush_count + 1);
-    tlb_debug("(count: %zu)\n", tlb_flush_count());
-
-    /*
-     * tlb_table/tlb_v_table updates from any thread must hold tlb_c.lock.
-     * However, updates from the owner thread (as is the case here; see the
-     * above assert_cpu_is_self) do not need atomic_set because all reads
-     * that do not hold the lock are performed by the same owner thread.
-     */
-    qemu_spin_lock(&env->tlb_c.lock);
-    env->tlb_c.pending_flush = 0;
-    for (mmu_idx = 0; mmu_idx < NB_MMU_MODES; mmu_idx++) {
-        tlb_flush_one_mmuidx_locked(env, mmu_idx);
-    }
-    qemu_spin_unlock(&env->tlb_c.lock);
-
-    cpu_tb_jmp_cache_clear(cpu);
-}
-
-static void tlb_flush_global_async_work(CPUState *cpu, run_on_cpu_data data)
-{
-    tlb_flush_nocheck(cpu);
-}
-
-void tlb_flush(CPUState *cpu)
-{
-    if (cpu->created && !qemu_cpu_is_self(cpu)) {
-        CPUArchState *env = cpu->env_ptr;
-        uint16_t pending;
-
-        qemu_spin_lock(&env->tlb_c.lock);
-        pending = env->tlb_c.pending_flush;
-        env->tlb_c.pending_flush = ALL_MMUIDX_BITS;
-        qemu_spin_unlock(&env->tlb_c.lock);
-
-        if (pending != ALL_MMUIDX_BITS) {
-            async_run_on_cpu(cpu, tlb_flush_global_async_work,
-                             RUN_ON_CPU_NULL);
-        }
-    } else {
-        tlb_flush_nocheck(cpu);
-    }
-}
-
-void tlb_flush_all_cpus(CPUState *src_cpu)
-{
-    const run_on_cpu_func fn = tlb_flush_global_async_work;
-    flush_all_helper(src_cpu, fn, RUN_ON_CPU_NULL);
-    fn(src_cpu, RUN_ON_CPU_NULL);
-}
-
-void tlb_flush_all_cpus_synced(CPUState *src_cpu)
-{
-    const run_on_cpu_func fn = tlb_flush_global_async_work;
-    flush_all_helper(src_cpu, fn, RUN_ON_CPU_NULL);
-    async_safe_run_on_cpu(src_cpu, fn, RUN_ON_CPU_NULL);
-}
-
 static void tlb_flush_by_mmuidx_async_work(CPUState *cpu, run_on_cpu_data data)
 {
     CPUArchState *env = cpu->env_ptr;
@@ -212,13 +143,17 @@ static void tlb_flush_by_mmuidx_async_work(CPUState *cpu, run_on_cpu_data data)
     qemu_spin_unlock(&env->tlb_c.lock);
 
     cpu_tb_jmp_cache_clear(cpu);
+
+    if (mmu_idx_bitmask == ALL_MMUIDX_BITS) {
+        atomic_set(&env->tlb_flush_count, env->tlb_flush_count + 1);
+    }
 }
 
 void tlb_flush_by_mmuidx(CPUState *cpu, uint16_t idxmap)
 {
     tlb_debug("mmu_idx: 0x%" PRIx16 "\n", idxmap);
 
-    if (!qemu_cpu_is_self(cpu)) {
+    if (cpu->created && !qemu_cpu_is_self(cpu)) {
         CPUArchState *env = cpu->env_ptr;
         uint16_t pending, to_clean;
 
@@ -238,6 +173,11 @@ void tlb_flush_by_mmuidx(CPUState *cpu, uint16_t idxmap)
     }
 }
 
+void tlb_flush(CPUState *cpu)
+{
+    tlb_flush_by_mmuidx(cpu, ALL_MMUIDX_BITS);
+}
+
 void tlb_flush_by_mmuidx_all_cpus(CPUState *src_cpu, uint16_t idxmap)
 {
     const run_on_cpu_func fn = tlb_flush_by_mmuidx_async_work;
@@ -248,8 +188,12 @@ void tlb_flush_by_mmuidx_all_cpus(CPUState *src_cpu, uint16_t idxmap)
     fn(src_cpu, RUN_ON_CPU_HOST_INT(idxmap));
 }
 
-void tlb_flush_by_mmuidx_all_cpus_synced(CPUState *src_cpu,
-                                         uint16_t idxmap)
+void tlb_flush_all_cpus(CPUState *src_cpu)
+{
+    tlb_flush_by_mmuidx_all_cpus(src_cpu, ALL_MMUIDX_BITS);
+}
+
+void tlb_flush_by_mmuidx_all_cpus_synced(CPUState *src_cpu, uint16_t idxmap)
 {
     const run_on_cpu_func fn = tlb_flush_by_mmuidx_async_work;
 
@@ -259,6 +203,11 @@ void tlb_flush_by_mmuidx_all_cpus_synced(CPUState *src_cpu,
     async_safe_run_on_cpu(src_cpu, fn, RUN_ON_CPU_HOST_INT(idxmap));
 }
 
+void tlb_flush_all_cpus_synced(CPUState *src_cpu)
+{
+    tlb_flush_by_mmuidx_all_cpus_synced(src_cpu, ALL_MMUIDX_BITS);
+}
+
 static inline bool tlb_hit_page_anyprot(CPUTLBEntry *tlb_entry,
                                         target_ulong page)
 {
-- 
2.17.2
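
[Editorial note, not part of the patch] The net effect of the diff above is
that the legacy full-flush entry points become one-line wrappers over the
by_mmuidx variants with an all-ones mask, and tlb_flush_count is bumped only
when the mask equals ALL_MMUIDX_BITS.  The following minimal sketch models
that wrapper pattern outside of QEMU; ToyCPU and the toy_* helpers are
invented stand-ins for illustration, while NB_MMU_MODES, ALL_MMUIDX_BITS,
and the accounting rule mirror the patch.

/*
 * Illustration only -- a stripped-down model of the wrapper pattern:
 * the full-flush entry point delegates to the by-mmuidx variant with an
 * all-ones mask, and the worker counts a "full flush" only when the mask
 * covers every MMU index.  ToyCPU and toy_* are not QEMU APIs.
 */
#include <stdint.h>
#include <stdio.h>

#define NB_MMU_MODES     4
#define ALL_MMUIDX_BITS  ((1u << NB_MMU_MODES) - 1)

typedef struct ToyCPU {
    uint64_t tlb_flush_count;           /* bumped only on full flushes */
} ToyCPU;

static void toy_flush_one_mmuidx(ToyCPU *cpu, int mmu_idx)
{
    printf("flushing mmu_idx %d\n", mmu_idx);
}

/* Models tlb_flush_by_mmuidx_async_work after the patch. */
static void toy_flush_by_mmuidx(ToyCPU *cpu, uint16_t idxmap)
{
    for (int mmu_idx = 0; mmu_idx < NB_MMU_MODES; mmu_idx++) {
        if (idxmap & (1u << mmu_idx)) {
            toy_flush_one_mmuidx(cpu, mmu_idx);
        }
    }
    if (idxmap == ALL_MMUIDX_BITS) {
        cpu->tlb_flush_count++;         /* accounting: full flush only */
    }
}

/* Models the new one-line tlb_flush wrapper. */
static void toy_flush(ToyCPU *cpu)
{
    toy_flush_by_mmuidx(cpu, ALL_MMUIDX_BITS);
}

int main(void)
{
    ToyCPU cpu = { 0 };

    toy_flush_by_mmuidx(&cpu, 0x3);     /* partial flush: not counted */
    toy_flush(&cpu);                    /* full flush: counted once */
    printf("full flushes: %llu\n", (unsigned long long)cpu.tlb_flush_count);
    return 0;
}

Keeping tlb_flush() and friends as thin wrappers means existing callers need
no change, and counting only the all-bits mask as a full flush keeps partial
per-mmuidx flushes from inflating tlb_flush_count.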