From: Richard Henderson
To: qemu-devel@nongnu.org
Date: Mon, 2 Jul 2018 09:05:45 -0700
Message-Id: <20180702160546.31969-6-richard.henderson@linaro.org>
In-Reply-To: <20180702160546.31969-1-richard.henderson@linaro.org>
References: <20180702160546.31969-1-richard.henderson@linaro.org>
Subject: [Qemu-devel] [PULL 5/6] accel/tcg: Avoid caching overwritten tlb entries
Cc: peter.maydell@linaro.org

When installing a TLB entry, remove any cached version of the
same page in the VTLB.  If the existing TLB entry matches, do not
copy into the VTLB, but overwrite it.

Reviewed-by: Peter Maydell
Signed-off-by: Richard Henderson
---
 accel/tcg/cputlb.c | 63 ++++++++++++++++++++++++++--------------------
 1 file changed, 36 insertions(+), 27 deletions(-)

diff --git a/accel/tcg/cputlb.c b/accel/tcg/cputlb.c
index cc90a5fe92..20c147d655 100644
--- a/accel/tcg/cputlb.c
+++ b/accel/tcg/cputlb.c
@@ -235,17 +235,30 @@ void tlb_flush_by_mmuidx_all_cpus_synced(CPUState *src_cpu,
     async_safe_run_on_cpu(src_cpu, fn, RUN_ON_CPU_HOST_INT(idxmap));
 }
 
-
-
-static inline void tlb_flush_entry(CPUTLBEntry *tlb_entry, target_ulong addr)
+static inline bool tlb_hit_page_anyprot(CPUTLBEntry *tlb_entry,
+                                        target_ulong page)
 {
-    if (tlb_hit_page(tlb_entry->addr_read, addr) ||
-        tlb_hit_page(tlb_entry->addr_write, addr) ||
-        tlb_hit_page(tlb_entry->addr_code, addr)) {
+    return tlb_hit_page(tlb_entry->addr_read, page) ||
+           tlb_hit_page(tlb_entry->addr_write, page) ||
+           tlb_hit_page(tlb_entry->addr_code, page);
+}
+
+static inline void tlb_flush_entry(CPUTLBEntry *tlb_entry, target_ulong page)
+{
+    if (tlb_hit_page_anyprot(tlb_entry, page)) {
         memset(tlb_entry, -1, sizeof(*tlb_entry));
     }
 }
 
+static inline void tlb_flush_vtlb_page(CPUArchState *env, int mmu_idx,
+                                       target_ulong page)
+{
+    int k;
+    for (k = 0; k < CPU_VTLB_SIZE; k++) {
+        tlb_flush_entry(&env->tlb_v_table[mmu_idx][k], page);
+    }
+}
+
 static void tlb_flush_page_async_work(CPUState *cpu, run_on_cpu_data data)
 {
     CPUArchState *env = cpu->env_ptr;
@@ -271,14 +284,7 @@ static void tlb_flush_page_async_work(CPUState *cpu, run_on_cpu_data data)
     i = (addr >> TARGET_PAGE_BITS) & (CPU_TLB_SIZE - 1);
     for (mmu_idx = 0; mmu_idx < NB_MMU_MODES; mmu_idx++) {
         tlb_flush_entry(&env->tlb_table[mmu_idx][i], addr);
-    }
-
-    /* check whether there are entries that need to be flushed in the vtlb */
-    for (mmu_idx = 0; mmu_idx < NB_MMU_MODES; mmu_idx++) {
-        int k;
-        for (k = 0; k < CPU_VTLB_SIZE; k++) {
-            tlb_flush_entry(&env->tlb_v_table[mmu_idx][k], addr);
-        }
+        tlb_flush_vtlb_page(env, mmu_idx, addr);
     }
 
     tb_flush_jmp_cache(cpu, addr);
@@ -310,7 +316,6 @@ static void tlb_flush_page_by_mmuidx_async_work(CPUState *cpu,
     unsigned long mmu_idx_bitmap = addr_and_mmuidx & ALL_MMUIDX_BITS;
     int page = (addr >> TARGET_PAGE_BITS) & (CPU_TLB_SIZE - 1);
     int mmu_idx;
-    int i;
 
     assert_cpu_is_self(cpu);
 
@@ -320,11 +325,7 @@ static void tlb_flush_page_by_mmuidx_async_work(CPUState *cpu,
     for (mmu_idx = 0; mmu_idx < NB_MMU_MODES; mmu_idx++) {
         if (test_bit(mmu_idx, &mmu_idx_bitmap)) {
             tlb_flush_entry(&env->tlb_table[mmu_idx][page], addr);
-
-            /* check whether there are vltb entries that need to be flushed */
-            for (i = 0; i < CPU_VTLB_SIZE; i++) {
-                tlb_flush_entry(&env->tlb_v_table[mmu_idx][i], addr);
-            }
+            tlb_flush_vtlb_page(env, mmu_idx, addr);
         }
     }
 
@@ -609,10 +610,9 @@ void tlb_set_page_with_attrs(CPUState *cpu, target_ulong vaddr,
     target_ulong address;
     target_ulong code_address;
     uintptr_t addend;
-    CPUTLBEntry *te, *tv, tn;
+    CPUTLBEntry *te, tn;
     hwaddr iotlb, xlat, sz, paddr_page;
     target_ulong vaddr_page;
-    unsigned vidx = env->vtlb_index++ % CPU_VTLB_SIZE;
     int asidx = cpu_asidx_from_attrs(cpu, attrs);
 
     assert_cpu_is_self(cpu);
@@ -654,19 +654,28 @@ void tlb_set_page_with_attrs(CPUState *cpu, target_ulong vaddr,
         addend = (uintptr_t)memory_region_get_ram_ptr(section->mr) + xlat;
     }
 
+    /* Make sure there's no cached translation for the new page.  */
+    tlb_flush_vtlb_page(env, mmu_idx, vaddr_page);
+
     code_address = address;
     iotlb = memory_region_section_get_iotlb(cpu, section, vaddr_page,
                                             paddr_page, xlat, prot, &address);
 
     index = (vaddr_page >> TARGET_PAGE_BITS) & (CPU_TLB_SIZE - 1);
     te = &env->tlb_table[mmu_idx][index];
-    /* do not discard the translation in te, evict it into a victim tlb */
-    tv = &env->tlb_v_table[mmu_idx][vidx];
 
-    /* addr_write can race with tlb_reset_dirty_range */
-    copy_tlb_helper(tv, te, true);
+    /*
+     * Only evict the old entry to the victim tlb if it's for a
+     * different page; otherwise just overwrite the stale data.
+     */
+    if (!tlb_hit_page_anyprot(te, vaddr_page)) {
+        unsigned vidx = env->vtlb_index++ % CPU_VTLB_SIZE;
+        CPUTLBEntry *tv = &env->tlb_v_table[mmu_idx][vidx];
 
-    env->iotlb_v[mmu_idx][vidx] = env->iotlb[mmu_idx][index];
+        /* Evict the old entry into the victim tlb.  */
+        copy_tlb_helper(tv, te, true);
+        env->iotlb_v[mmu_idx][vidx] = env->iotlb[mmu_idx][index];
+    }
 
     /* refill the tlb */
     /*
-- 
2.17.1