From: Richard Henderson <richard.henderson@linaro.org>
To: qemu-devel@nongnu.org
Cc: peter.maydell@linaro.org, qemu-arm@nongnu.org
Subject: [PATCH v2 1/2] accel/tcg: Add tlb_flush_page_bits_by_mmuidx*
Date: Fri, 16 Oct 2020 14:07:53 -0700
Message-Id: <20201016210754.818257-2-richard.henderson@linaro.org>
In-Reply-To: <20201016210754.818257-1-richard.henderson@linaro.org>
References: <20201016210754.818257-1-richard.henderson@linaro.org>

On ARM, the Top Byte Ignore feature means that only 56 bits of the
virtual address are significant.  We are required to give the entire
64-bit address to FAR_ELx on a fault, which means that we do not
"clean" the top byte early in TCG.

This new interface allows us to flush all 256 possible aliases of a
given page, which tlb_flush_page* currently misses.

Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
---
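
A note for reviewers unfamiliar with TBI: when the top byte is ignored,
64-bit addresses that differ only in bits [63:56] all name the same
page, so a flush keyed on a full 64-bit compare can leave up to 255
stale aliases behind.  Below is a minimal standalone sketch of the
masked compare that the new tlb_hit_page_mask_anyprot() performs; the
MAKE_64BIT_MASK definition mirrors the one in include/qemu/bitops.h,
and the addresses are illustrative only:

#include <stdint.h>
#include <stdio.h>

/* Same definition as QEMU's include/qemu/bitops.h. */
#define MAKE_64BIT_MASK(shift, length) \
    (((~0ULL) >> (64 - (length))) << (shift))

int main(void)
{
    uint64_t page  = 0x00007fffdead1000ull;   /* page as stored by TCG */
    uint64_t alias = page | (0xa5ull << 56);  /* same page, tagged top byte */
    uint64_t mask  = MAKE_64BIT_MASK(0, 56);  /* bits == 56 under TBI */

    /* A full 64-bit compare, as in tlb_flush_page, misses the alias. */
    printf("full compare:   %s\n", page == alias ? "hit" : "miss");

    /* Comparing only the low @bits, as this patch does, hits it. */
    printf("masked compare: %s\n",
           (page & mask) == (alias & mask) ? "hit" : "miss");
    return 0;
}

A target would then flush every alias with a single call such as
tlb_flush_page_bits_by_mmuidx(cpu, addr, idxmap, 56).  A second sketch
after the patch illustrates how the flush parameters are packed for
the cross-CPU path.
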
 include/exec/exec-all.h |  36 ++++++
 accel/tcg/cputlb.c      | 275 ++++++++++++++++++++++++++++++++++++++--
 2 files changed, 302 insertions(+), 9 deletions(-)

diff --git a/include/exec/exec-all.h b/include/exec/exec-all.h
index 66f9b4cca6..4707ac140c 100644
--- a/include/exec/exec-all.h
+++ b/include/exec/exec-all.h
@@ -251,6 +251,25 @@ void tlb_flush_by_mmuidx_all_cpus(CPUState *cpu, uint16_t idxmap);
  * depend on when the guests translation ends the TB.
  */
 void tlb_flush_by_mmuidx_all_cpus_synced(CPUState *cpu, uint16_t idxmap);
+
+/**
+ * tlb_flush_page_bits_by_mmuidx
+ * @cpu: CPU whose TLB should be flushed
+ * @addr: virtual address of page to be flushed
+ * @idxmap: bitmap of mmu indexes to flush
+ * @bits: number of significant bits in address
+ *
+ * Similar to tlb_flush_page_mask, but with a bitmap of indexes.
+ */
+void tlb_flush_page_bits_by_mmuidx(CPUState *cpu, target_ulong addr,
+                                   uint16_t idxmap, unsigned bits);
+
+/* Similarly, with broadcast and syncing. */
+void tlb_flush_page_bits_by_mmuidx_all_cpus(CPUState *cpu, target_ulong addr,
+                                            uint16_t idxmap, unsigned bits);
+void tlb_flush_page_bits_by_mmuidx_all_cpus_synced
+    (CPUState *cpu, target_ulong addr, uint16_t idxmap, unsigned bits);
+
 /**
  * tlb_set_page_with_attrs:
  * @cpu: CPU to add this TLB entry for
@@ -337,6 +356,23 @@ static inline void tlb_flush_by_mmuidx_all_cpus_synced(CPUState *cpu,
                                                        uint16_t idxmap)
 {
 }
+static inline void tlb_flush_page_bits_by_mmuidx(CPUState *cpu,
+                                                 target_ulong addr,
+                                                 uint16_t idxmap,
+                                                 unsigned bits)
+{
+}
+static inline void tlb_flush_page_bits_by_mmuidx_all_cpus(CPUState *cpu,
+                                                          target_ulong addr,
+                                                          uint16_t idxmap,
+                                                          unsigned bits)
+{
+}
+static inline void
+tlb_flush_page_bits_by_mmuidx_all_cpus_synced(CPUState *cpu, target_ulong addr,
+                                              uint16_t idxmap, unsigned bits)
+{
+}
 #endif
 /**
  * probe_access:
diff --git a/accel/tcg/cputlb.c b/accel/tcg/cputlb.c
index 2bbbb3ab29..42ab79c1a5 100644
--- a/accel/tcg/cputlb.c
+++ b/accel/tcg/cputlb.c
@@ -409,12 +409,21 @@ void tlb_flush_all_cpus_synced(CPUState *src_cpu)
     tlb_flush_by_mmuidx_all_cpus_synced(src_cpu, ALL_MMUIDX_BITS);
 }
 
+static bool tlb_hit_page_mask_anyprot(CPUTLBEntry *tlb_entry,
+                                      target_ulong page, target_ulong mask)
+{
+    page &= mask;
+    mask &= TARGET_PAGE_MASK | TLB_INVALID_MASK;
+
+    return (page == (tlb_entry->addr_read & mask) ||
+            page == (tlb_addr_write(tlb_entry) & mask) ||
+            page == (tlb_entry->addr_code & mask));
+}
+
 static inline bool tlb_hit_page_anyprot(CPUTLBEntry *tlb_entry,
                                         target_ulong page)
 {
-    return tlb_hit_page(tlb_entry->addr_read, page) ||
-           tlb_hit_page(tlb_addr_write(tlb_entry), page) ||
-           tlb_hit_page(tlb_entry->addr_code, page);
+    return tlb_hit_page_mask_anyprot(tlb_entry, page, -1);
 }
 
 /**
@@ -427,31 +436,45 @@ static inline bool tlb_entry_is_empty(const CPUTLBEntry *te)
 }
 
 /* Called with tlb_c.lock held */
-static inline bool tlb_flush_entry_locked(CPUTLBEntry *tlb_entry,
-                                          target_ulong page)
+static bool tlb_flush_entry_mask_locked(CPUTLBEntry *tlb_entry,
+                                        target_ulong page,
+                                        target_ulong mask)
 {
-    if (tlb_hit_page_anyprot(tlb_entry, page)) {
+    if (tlb_hit_page_mask_anyprot(tlb_entry, page, mask)) {
         memset(tlb_entry, -1, sizeof(*tlb_entry));
         return true;
     }
     return false;
 }
 
+static inline bool tlb_flush_entry_locked(CPUTLBEntry *tlb_entry,
+                                          target_ulong page)
+{
+    return tlb_flush_entry_mask_locked(tlb_entry, page, -1);
+}
+
 /* Called with tlb_c.lock held */
-static inline void tlb_flush_vtlb_page_locked(CPUArchState *env, int mmu_idx,
-                                              target_ulong page)
+static void tlb_flush_vtlb_page_mask_locked(CPUArchState *env, int mmu_idx,
+                                            target_ulong page,
+                                            target_ulong mask)
 {
     CPUTLBDesc *d = &env_tlb(env)->d[mmu_idx];
     int k;
 
     assert_cpu_is_self(env_cpu(env));
     for (k = 0; k < CPU_VTLB_SIZE; k++) {
-        if (tlb_flush_entry_locked(&d->vtable[k], page)) {
+        if (tlb_flush_entry_mask_locked(&d->vtable[k], page, mask)) {
             tlb_n_used_entries_dec(env, mmu_idx);
         }
     }
 }
 
+static inline void tlb_flush_vtlb_page_locked(CPUArchState *env, int mmu_idx,
+                                              target_ulong page)
+{
+    tlb_flush_vtlb_page_mask_locked(env, mmu_idx, page, -1);
+}
+
 static void tlb_flush_page_locked(CPUArchState *env, int midx,
                                   target_ulong page)
 {
@@ -666,6 +689,240 @@ void tlb_flush_page_all_cpus_synced(CPUState *src, target_ulong addr)
     tlb_flush_page_by_mmuidx_all_cpus_synced(src, addr, ALL_MMUIDX_BITS);
 }
 
+static void tlb_flush_page_bits_locked(CPUArchState *env, int midx,
+                                       target_ulong page, unsigned bits)
+{
+    CPUTLBDesc *d = &env_tlb(env)->d[midx];
+    CPUTLBDescFast *f = &env_tlb(env)->f[midx];
+    target_ulong mask = MAKE_64BIT_MASK(0, bits);
+
+    /*
+     * If @bits is smaller than the tlb size, there may be multiple entries
+     * within the TLB; otherwise all addresses that match under @mask hit
+     * the same TLB entry.
+     *
+     * TODO: Perhaps allow bits to be a few bits less than the size.
+     * For now, just flush the entire TLB.
+     */
+    if (mask < f->mask) {
+        tlb_debug("forcing full flush midx %d ("
+                  TARGET_FMT_lx "/" TARGET_FMT_lx ")\n",
+                  midx, page, mask);
+        tlb_flush_one_mmuidx_locked(env, midx, get_clock_realtime());
+        return;
+    }
+
+    /* Check if we need to flush due to large pages. */
+    if ((page & d->large_page_mask) == d->large_page_addr) {
+        tlb_debug("forcing full flush midx %d ("
+                  TARGET_FMT_lx "/" TARGET_FMT_lx ")\n",
+                  midx, d->large_page_addr, d->large_page_mask);
+        tlb_flush_one_mmuidx_locked(env, midx, get_clock_realtime());
+        return;
+    }
+
+    if (tlb_flush_entry_mask_locked(tlb_entry(env, midx, page), page, mask)) {
+        tlb_n_used_entries_dec(env, midx);
+    }
+    tlb_flush_vtlb_page_mask_locked(env, midx, page, mask);
+}
+
+typedef struct {
+    target_ulong addr;
+    uint16_t idxmap;
+    uint16_t bits;
+} TLBFlushPageBitsByMMUIdxData;
+
+static void
+tlb_flush_page_bits_by_mmuidx_async_0(CPUState *cpu,
+                                      TLBFlushPageBitsByMMUIdxData d)
+{
+    CPUArchState *env = cpu->env_ptr;
+    int mmu_idx;
+
+    assert_cpu_is_self(cpu);
+
+    tlb_debug("page addr:" TARGET_FMT_lx "/%u mmu_map:0x%x\n",
+              d.addr, d.bits, d.idxmap);
+
+    qemu_spin_lock(&env_tlb(env)->c.lock);
+    for (mmu_idx = 0; mmu_idx < NB_MMU_MODES; mmu_idx++) {
+        if ((d.idxmap >> mmu_idx) & 1) {
+            tlb_flush_page_bits_locked(env, mmu_idx, d.addr, d.bits);
+        }
+    }
+    qemu_spin_unlock(&env_tlb(env)->c.lock);
+
+    tb_flush_jmp_cache(cpu, d.addr);
+}
+
+static bool encode_pbm_to_runon(run_on_cpu_data *out,
+                                TLBFlushPageBitsByMMUIdxData d)
+{
+    /* We need 6 bits to hold @bits up to 63. */
+    if (d.idxmap <= MAKE_64BIT_MASK(0, TARGET_PAGE_BITS - 6)) {
+        *out = RUN_ON_CPU_TARGET_PTR(d.addr | (d.idxmap << 6) | d.bits);
+        return true;
+    }
+    return false;
+}
+
+static TLBFlushPageBitsByMMUIdxData
+decode_runon_to_pbm(run_on_cpu_data data)
+{
+    target_ulong addr_map_bits = (target_ulong) data.target_ptr;
+    return (TLBFlushPageBitsByMMUIdxData){
+        .addr = addr_map_bits & TARGET_PAGE_MASK,
+        .idxmap = (addr_map_bits & ~TARGET_PAGE_MASK) >> 6,
+        .bits = addr_map_bits & 0x3f
+    };
+}
+
+static void tlb_flush_page_bits_by_mmuidx_async_1(CPUState *cpu,
+                                                  run_on_cpu_data runon)
+{
+    tlb_flush_page_bits_by_mmuidx_async_0(cpu, decode_runon_to_pbm(runon));
+}
+
+static void tlb_flush_page_bits_by_mmuidx_async_2(CPUState *cpu,
+                                                  run_on_cpu_data data)
+{
+    TLBFlushPageBitsByMMUIdxData *d = data.host_ptr;
+    tlb_flush_page_bits_by_mmuidx_async_0(cpu, *d);
+    g_free(d);
+}
+
+void tlb_flush_page_bits_by_mmuidx(CPUState *cpu, target_ulong addr,
+                                   uint16_t idxmap, unsigned bits)
+{
+    TLBFlushPageBitsByMMUIdxData d;
+    run_on_cpu_data runon;
+
+    /* If all bits are significant, this devolves to tlb_flush_page. */
+    if (bits >= TARGET_LONG_BITS) {
+        tlb_flush_page_by_mmuidx(cpu, addr, idxmap);
+        return;
+    }
+    /* If no page bits are significant, this devolves to tlb_flush. */
+    if (bits < TARGET_PAGE_BITS) {
+        tlb_flush_by_mmuidx(cpu, idxmap);
+        return;
+    }
+
+    /* This should already be page aligned */
+    d.addr = addr & TARGET_PAGE_MASK;
+    d.idxmap = idxmap;
+    d.bits = bits;
+
+    if (qemu_cpu_is_self(cpu)) {
+        tlb_flush_page_bits_by_mmuidx_async_0(cpu, d);
+    } else if (encode_pbm_to_runon(&runon, d)) {
+        async_run_on_cpu(cpu, tlb_flush_page_bits_by_mmuidx_async_1, runon);
+    } else {
+        TLBFlushPageBitsByMMUIdxData *p
+            = g_new(TLBFlushPageBitsByMMUIdxData, 1);
+
+        /* Otherwise allocate a structure, freed by the worker. */
+        *p = d;
+        async_run_on_cpu(cpu, tlb_flush_page_bits_by_mmuidx_async_2,
+                         RUN_ON_CPU_HOST_PTR(p));
+    }
+}
+
+void tlb_flush_page_bits_by_mmuidx_all_cpus(CPUState *src_cpu,
+                                            target_ulong addr,
+                                            uint16_t idxmap,
+                                            unsigned bits)
+{
+    TLBFlushPageBitsByMMUIdxData d;
+    run_on_cpu_data runon;
+
+    /* If all bits are significant, this devolves to tlb_flush_page. */
+    if (bits >= TARGET_LONG_BITS) {
+        tlb_flush_page_by_mmuidx_all_cpus(src_cpu, addr, idxmap);
+        return;
+    }
+    /* If no page bits are significant, this devolves to tlb_flush. */
+    if (bits < TARGET_PAGE_BITS) {
+        tlb_flush_by_mmuidx_all_cpus(src_cpu, idxmap);
+        return;
+    }
+
+    /* This should already be page aligned */
+    d.addr = addr & TARGET_PAGE_MASK;
+    d.idxmap = idxmap;
+    d.bits = bits;
+
+    if (encode_pbm_to_runon(&runon, d)) {
+        flush_all_helper(src_cpu, tlb_flush_page_bits_by_mmuidx_async_1, runon);
+    } else {
+        CPUState *dst_cpu;
+        TLBFlushPageBitsByMMUIdxData *p;
+
+        /* Allocate a separate data block for each destination cpu. */
+        CPU_FOREACH(dst_cpu) {
+            if (dst_cpu != src_cpu) {
+                p = g_new(TLBFlushPageBitsByMMUIdxData, 1);
+                *p = d;
+                async_run_on_cpu(dst_cpu,
+                                 tlb_flush_page_bits_by_mmuidx_async_2,
+                                 RUN_ON_CPU_HOST_PTR(p));
+            }
+        }
+    }
+
+    tlb_flush_page_bits_by_mmuidx_async_0(src_cpu, d);
+}
+
+void tlb_flush_page_bits_by_mmuidx_all_cpus_synced(CPUState *src_cpu,
+                                                   target_ulong addr,
+                                                   uint16_t idxmap,
+                                                   unsigned bits)
+{
+    TLBFlushPageBitsByMMUIdxData d;
+    run_on_cpu_data runon;
+
+    /* If all bits are significant, this devolves to tlb_flush_page. */
+    if (bits >= TARGET_LONG_BITS) {
+        tlb_flush_page_by_mmuidx_all_cpus_synced(src_cpu, addr, idxmap);
+        return;
+    }
+    /* If no page bits are significant, this devolves to tlb_flush. */
+    if (bits < TARGET_PAGE_BITS) {
+        tlb_flush_by_mmuidx_all_cpus_synced(src_cpu, idxmap);
+        return;
+    }
+
+    /* This should already be page aligned */
+    d.addr = addr & TARGET_PAGE_MASK;
+    d.idxmap = idxmap;
+    d.bits = bits;
+
+    if (encode_pbm_to_runon(&runon, d)) {
+        flush_all_helper(src_cpu, tlb_flush_page_bits_by_mmuidx_async_1, runon);
+        async_safe_run_on_cpu(src_cpu, tlb_flush_page_bits_by_mmuidx_async_1,
+                              runon);
+    } else {
+        CPUState *dst_cpu;
+        TLBFlushPageBitsByMMUIdxData *p;
+
+        /* Allocate a separate data block for each destination cpu. */
+        CPU_FOREACH(dst_cpu) {
+            if (dst_cpu != src_cpu) {
+                p = g_new(TLBFlushPageBitsByMMUIdxData, 1);
+                *p = d;
+                async_run_on_cpu(dst_cpu, tlb_flush_page_bits_by_mmuidx_async_2,
+                                 RUN_ON_CPU_HOST_PTR(p));
+            }
+        }
+
+        p = g_new(TLBFlushPageBitsByMMUIdxData, 1);
+        *p = d;
+        async_safe_run_on_cpu(src_cpu, tlb_flush_page_bits_by_mmuidx_async_2,
+                              RUN_ON_CPU_HOST_PTR(p));
+    }
+}
+
 /* update the TLBs so that writes to code in the virtual page 'addr'
    can be detected */
 void tlb_protect_code(ram_addr_t ram_addr)
-- 
2.25.1
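
Postscript on encode_pbm_to_runon()/decode_runon_to_pbm(): because
@addr is page aligned, its low TARGET_PAGE_BITS bits are free, so the
page address, @idxmap and @bits can share a single target_ulong, with
@bits in bits [5:0] and @idxmap above it, whenever idxmap fits in
TARGET_PAGE_BITS - 6 bits.  A standalone sketch of that round-trip,
assuming TARGET_PAGE_BITS == 12 and illustrative field values:

#include <assert.h>
#include <stdint.h>

enum { TARGET_PAGE_BITS = 12 };      /* assumption for this sketch */
#define TARGET_PAGE_MASK (~((1ull << TARGET_PAGE_BITS) - 1))

int main(void)
{
    uint64_t addr   = 0x00007fffdead1000ull;  /* already page aligned */
    uint16_t idxmap = 0x5;                    /* fits in 12 - 6 = 6 bits */
    unsigned bits   = 56;                     /* 6 bits hold values up to 63 */

    /* Encode: the three fields occupy disjoint bit ranges. */
    uint64_t packed = addr | ((uint64_t)idxmap << 6) | bits;

    /* Decode: every field comes back unchanged. */
    assert((packed & TARGET_PAGE_MASK) == addr);
    assert(((packed & ~TARGET_PAGE_MASK) >> 6) == idxmap);
    assert((packed & 0x3f) == bits);
    return 0;
}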