From: Roger Pau Monne <roger.pau@citrix.com>
To: xen-devel@lists.xenproject.org
Subject: [PATCH v11 1/3] x86/tlb: introduce a flush HVM ASIDs flag
Date: Thu, 23 Apr 2020 16:56:09 +0200
Message-ID: <20200423145611.55378-2-roger.pau@citrix.com>
In-Reply-To: <20200423145611.55378-1-roger.pau@citrix.com>
References: <20200423145611.55378-1-roger.pau@citrix.com>
Cc: Wei Liu, Andrew Cooper, Tim Deegan, George Dunlap, Jan Beulich, Roger Pau Monne

Introduce a specific flag to request an HVM guest linear TLB flush, which
is an ASID/VPID tickle that forces a guest linear to guest physical TLB
flush for all HVM guests.

This was previously done unconditionally in each pre_flush call, but it is
not always required: HVM guests not using shadow don't require linear TLB
flushes, as Xen doesn't modify the page tables the guest runs on in that
case (i.e. when using HAP). Note that the shadow paging code already takes
care of issuing the necessary flushes when the shadow page tables are
modified.

In order to keep the previous behavior, modify all shadow code TLB flushes
to also flush the guest linear to physical TLB if the guest is HVM. I
haven't looked at each specific shadow code TLB flush in order to figure
out whether it actually requires a guest TLB flush or not, so there might
be room for improvement in that regard.

Also perform ASID/VPID flushes when modifying the p2m tables, as that is a
requirement on AMD hardware. Finally, keep the flush in switch_cr3_cr4, as
it's not clear whether callers could rely on switch_cr3_cr4 also performing
a guest linear TLB flush. A follow-up patch can remove the ASID/VPID tickle
from switch_cr3_cr4 if it turns out not to be necessary.

Signed-off-by: Roger Pau Monné
Acked-by: Tim Deegan
Reviewed-by: Jan Beulich
---
Changes since v10:
 - Reword commit message.
 - Split flags generation in guest_flush_tlb_mask to a separate helper.
 - Move sh_flush_local into multi.c and make it use guest_flush_tlb_flags.
 - Flush ASID always when running HVM domains in shadow mode.

Changes since v9:
 - Introduce and use guest_flush_tlb_mask and sh_flush_local.
 - Add a local domain variable to p2m_pt_change_entry_type_global.

Changes since v8:
 - Don't flush host TLB on HAP changes.
 - Introduce a helper for shadow changes that only flushes ASIDs/VPIDs when
   the guest is HVM.
 - Introduce a helper for HAP that only flushes ASIDs/VPIDs.

Changes since v7:
 - Do not perform an ASID flush in filtered_flush_tlb_mask: the requested
   flush is related to the page need_tlbflush field and not to p2m changes
   (applies to both callers).

Changes since v6:
 - Add ASID/VPID flushes when modifying the p2m.
 - Keep the ASID/VPID flush in switch_cr3_cr4.

Changes since v5:
 - Rename FLUSH_GUESTS_TLB to FLUSH_HVM_ASID_CORE.
 - Clarify commit message.
 - Define FLUSH_HVM_ASID_CORE to 0 when !CONFIG_HVM.
---
 xen/arch/x86/flushtlb.c          | 23 +++++++++++++++++++++--
 xen/arch/x86/mm/hap/hap.c        |  8 ++++----
 xen/arch/x86/mm/hap/nested_hap.c |  2 +-
 xen/arch/x86/mm/p2m-pt.c         |  5 +++--
 xen/arch/x86/mm/paging.c         |  2 +-
 xen/arch/x86/mm/shadow/common.c  | 18 +++++++++---------
 xen/arch/x86/mm/shadow/hvm.c     |  2 +-
 xen/arch/x86/mm/shadow/multi.c   | 22 ++++++++++++++--------
 xen/include/asm-x86/flushtlb.h   |  9 +++++++++
 9 files changed, 63 insertions(+), 28 deletions(-)

diff --git a/xen/arch/x86/flushtlb.c b/xen/arch/x86/flushtlb.c
index 03f92c23dc..0c40b5d273 100644
--- a/xen/arch/x86/flushtlb.c
+++ b/xen/arch/x86/flushtlb.c
@@ -7,6 +7,7 @@
  * Copyright (c) 2003-2006, K A Fraser
  */

+#include
 #include
 #include
 #include
@@ -59,8 +60,6 @@ static u32 pre_flush(void)
         raise_softirq(NEW_TLBFLUSH_CLOCK_PERIOD_SOFTIRQ);

  skip_clocktick:
-    hvm_flush_guest_tlbs();
-
     return t2;
 }

@@ -118,6 +117,7 @@ void switch_cr3_cr4(unsigned long cr3, unsigned long cr4)
     local_irq_save(flags);

     t = pre_flush();
+    hvm_flush_guest_tlbs();

     old_cr4 = read_cr4();
     ASSERT(!(old_cr4 & X86_CR4_PCIDE) || !(old_cr4 & X86_CR4_PGE));
@@ -221,6 +221,9 @@ unsigned int flush_area_local(const void *va, unsigned int flags)
             do_tlb_flush();
     }

+    if ( flags & FLUSH_HVM_ASID_CORE )
+        hvm_flush_guest_tlbs();
+
     if ( flags & FLUSH_CACHE )
     {
         const struct cpuinfo_x86 *c = &current_cpu_data;
@@ -254,3 +257,19 @@ unsigned int flush_area_local(const void *va, unsigned int flags)

     return flags;
 }
+
+unsigned int guest_flush_tlb_flags(const struct domain *d)
+{
+    bool shadow = paging_mode_shadow(d);
+    bool asid = is_hvm_domain(d) && (cpu_has_svm || shadow);
+
+    return (shadow ? FLUSH_TLB : 0) | (asid ? FLUSH_HVM_ASID_CORE : 0);
+}
+
+void guest_flush_tlb_mask(const struct domain *d, const cpumask_t *mask)
+{
+    unsigned int flags = guest_flush_tlb_flags(d);
+
+    if ( flags )
+        flush_mask(mask, flags);
+}
diff --git a/xen/arch/x86/mm/hap/hap.c b/xen/arch/x86/mm/hap/hap.c
index 11829e7aad..580d1c2164 100644
--- a/xen/arch/x86/mm/hap/hap.c
+++ b/xen/arch/x86/mm/hap/hap.c
@@ -118,7 +118,7 @@ int hap_track_dirty_vram(struct domain *d,
             p2m_change_type_range(d, begin_pfn, begin_pfn + nr,
                                   p2m_ram_rw, p2m_ram_logdirty);

-            flush_tlb_mask(d->dirty_cpumask);
+            guest_flush_tlb_mask(d, d->dirty_cpumask);

             memset(dirty_bitmap, 0xff, size); /* consider all pages dirty */
         }
@@ -205,7 +205,7 @@ static int hap_enable_log_dirty(struct domain *d, bool_t log_global)
          * to be read-only, or via hardware-assisted log-dirty.
          */
         p2m_change_entry_type_global(d, p2m_ram_rw, p2m_ram_logdirty);
-        flush_tlb_mask(d->dirty_cpumask);
+        guest_flush_tlb_mask(d, d->dirty_cpumask);
     }
     return 0;
 }
@@ -234,7 +234,7 @@ static void hap_clean_dirty_bitmap(struct domain *d)
      * be read-only, or via hardware-assisted log-dirty.
      */
     p2m_change_entry_type_global(d, p2m_ram_rw, p2m_ram_logdirty);
-    flush_tlb_mask(d->dirty_cpumask);
+    guest_flush_tlb_mask(d, d->dirty_cpumask);
 }

 /************************************************/
@@ -812,7 +812,7 @@ hap_write_p2m_entry(struct p2m_domain *p2m, unsigned long gfn, l1_pgentry_t *p,

     safe_write_pte(p, new);
     if ( old_flags & _PAGE_PRESENT )
-        flush_tlb_mask(d->dirty_cpumask);
+        guest_flush_tlb_mask(d, d->dirty_cpumask);

     paging_unlock(d);

diff --git a/xen/arch/x86/mm/hap/nested_hap.c b/xen/arch/x86/mm/hap/nested_hap.c
index abe5958a52..f92ddc5206 100644
--- a/xen/arch/x86/mm/hap/nested_hap.c
+++ b/xen/arch/x86/mm/hap/nested_hap.c
@@ -84,7 +84,7 @@ nestedp2m_write_p2m_entry(struct p2m_domain *p2m, unsigned long gfn,
     safe_write_pte(p, new);

     if (old_flags & _PAGE_PRESENT)
-        flush_tlb_mask(p2m->dirty_cpumask);
+        guest_flush_tlb_mask(d, p2m->dirty_cpumask);

     paging_unlock(d);

diff --git a/xen/arch/x86/mm/p2m-pt.c b/xen/arch/x86/mm/p2m-pt.c
index eb66077496..5c0501794e 100644
--- a/xen/arch/x86/mm/p2m-pt.c
+++ b/xen/arch/x86/mm/p2m-pt.c
@@ -866,11 +866,12 @@ static void p2m_pt_change_entry_type_global(struct p2m_domain *p2m,
     l1_pgentry_t *tab;
     unsigned long gfn = 0;
     unsigned int i, changed;
+    const struct domain *d = p2m->domain;

     if ( pagetable_get_pfn(p2m_get_pagetable(p2m)) == 0 )
         return;

-    ASSERT(hap_enabled(p2m->domain));
+    ASSERT(hap_enabled(d));

     tab = map_domain_page(pagetable_get_mfn(p2m_get_pagetable(p2m)));
     for ( changed = i = 0; i < (1 << PAGETABLE_ORDER); ++i )
@@ -896,7 +897,7 @@ static void p2m_pt_change_entry_type_global(struct p2m_domain *p2m,
     unmap_domain_page(tab);

     if ( changed )
-        flush_tlb_mask(p2m->domain->dirty_cpumask);
+        guest_flush_tlb_mask(d, d->dirty_cpumask);
 }

 static int p2m_pt_change_entry_type_range(struct p2m_domain *p2m,
diff --git a/xen/arch/x86/mm/paging.c b/xen/arch/x86/mm/paging.c
index f5ff5d67a0..7c265fb5f3 100644
--- a/xen/arch/x86/mm/paging.c
+++ b/xen/arch/x86/mm/paging.c
@@ -613,7 +613,7 @@ void paging_log_dirty_range(struct domain *d,

     p2m_unlock(p2m);

-    flush_tlb_mask(d->dirty_cpumask);
+    guest_flush_tlb_mask(d, d->dirty_cpumask);
 }

 /*
diff --git a/xen/arch/x86/mm/shadow/common.c b/xen/arch/x86/mm/shadow/common.c
index 3746dd6fb0..7ed8e7b71b 100644
--- a/xen/arch/x86/mm/shadow/common.c
+++ b/xen/arch/x86/mm/shadow/common.c
@@ -368,7 +368,7 @@ static int oos_remove_write_access(struct vcpu *v, mfn_t gmfn,
     }

     if ( ftlb )
-        flush_tlb_mask(d->dirty_cpumask);
+        guest_flush_tlb_mask(d, d->dirty_cpumask);

     return 0;
 }
@@ -946,7 +946,7 @@ static void _shadow_prealloc(struct domain *d, unsigned int pages)
                 /* See if that freed up enough space */
                 if ( d->arch.paging.shadow.free_pages >= pages )
                 {
-                    flush_tlb_mask(d->dirty_cpumask);
+                    guest_flush_tlb_mask(d, d->dirty_cpumask);
                     return;
                 }
             }
@@ -1000,7 +1000,7 @@ static void shadow_blow_tables(struct domain *d)
                                pagetable_get_mfn(v->arch.shadow_table[i]), 0);

     /* Make sure everyone sees the unshadowings */
-    flush_tlb_mask(d->dirty_cpumask);
+    guest_flush_tlb_mask(d, d->dirty_cpumask);
 }

 void shadow_blow_tables_per_domain(struct domain *d)
@@ -1103,7 +1103,7 @@ mfn_t shadow_alloc(struct domain *d,
         if ( unlikely(!cpumask_empty(&mask)) )
         {
             perfc_incr(shadow_alloc_tlbflush);
-            flush_tlb_mask(&mask);
+            guest_flush_tlb_mask(d, &mask);
         }
         /* Now safe to clear the page for reuse */
         clear_domain_page(page_to_mfn(sp));
@@ -2296,7 +2296,7 @@ void sh_remove_shadows(struct domain *d, mfn_t gmfn, int fast, int all)

     /* Need to flush TLBs now, so that linear maps are safe next time we
      * take a fault. */
-    flush_tlb_mask(d->dirty_cpumask);
+    guest_flush_tlb_mask(d, d->dirty_cpumask);

     paging_unlock(d);
 }
@@ -3013,7 +3013,7 @@ static void sh_unshadow_for_p2m_change(struct domain *d, unsigned long gfn,
         {
             sh_remove_all_shadows_and_parents(d, mfn);
             if ( sh_remove_all_mappings(d, mfn, _gfn(gfn)) )
-                flush_tlb_mask(d->dirty_cpumask);
+                guest_flush_tlb_mask(d, d->dirty_cpumask);
         }
     }

@@ -3053,7 +3053,7 @@ static void sh_unshadow_for_p2m_change(struct domain *d, unsigned long gfn,
             }
             omfn = mfn_add(omfn, 1);
         }
-        flush_tlb_mask(&flushmask);
+        guest_flush_tlb_mask(d, &flushmask);

         if ( npte )
             unmap_domain_page(npte);
@@ -3340,7 +3340,7 @@ int shadow_track_dirty_vram(struct domain *d,
         }
     }
     if ( flush_tlb )
-        flush_tlb_mask(d->dirty_cpumask);
+        guest_flush_tlb_mask(d, d->dirty_cpumask);
     goto out;

 out_sl1ma:
@@ -3410,7 +3410,7 @@ bool shadow_flush_tlb(bool (*flush_vcpu)(void *ctxt, struct vcpu *v),
     }

     /* Flush TLBs on all CPUs with dirty vcpu state. */
-    flush_tlb_mask(mask);
+    guest_flush_tlb_mask(d, mask);

     /* Done. */
     for_each_vcpu ( d, v )
diff --git a/xen/arch/x86/mm/shadow/hvm.c b/xen/arch/x86/mm/shadow/hvm.c
index 1e6024c71f..608360daec 100644
--- a/xen/arch/x86/mm/shadow/hvm.c
+++ b/xen/arch/x86/mm/shadow/hvm.c
@@ -591,7 +591,7 @@ static void validate_guest_pt_write(struct vcpu *v, mfn_t gmfn,

     if ( rc & SHADOW_SET_FLUSH )
         /* Need to flush TLBs to pick up shadow PT changes */
-        flush_tlb_mask(d->dirty_cpumask);
+        guest_flush_tlb_mask(d, d->dirty_cpumask);

     if ( rc & SHADOW_SET_ERROR )
     {
diff --git a/xen/arch/x86/mm/shadow/multi.c b/xen/arch/x86/mm/shadow/multi.c
index 5368adf474..7d16d1c1a9 100644
--- a/xen/arch/x86/mm/shadow/multi.c
+++ b/xen/arch/x86/mm/shadow/multi.c
@@ -85,6 +85,12 @@ const char *const fetch_type_names[] = {
 };
 #endif

+/* Helper to perform a local TLB flush. */
+static void sh_flush_local(const struct domain *d)
+{
+    flush_local(guest_flush_tlb_flags(d));
+}
+
 /**************************************************************************/
 /* Hash table mapping from guest pagetables to shadows
  *
@@ -3075,7 +3081,7 @@ static int sh_page_fault(struct vcpu *v,
             perfc_incr(shadow_rm_write_flush_tlb);
             smp_wmb();
             atomic_inc(&d->arch.paging.shadow.gtable_dirty_version);
-            flush_tlb_mask(d->dirty_cpumask);
+            guest_flush_tlb_mask(d, d->dirty_cpumask);
         }

 #if (SHADOW_OPTIMIZATIONS & SHOPT_OUT_OF_SYNC)
@@ -3584,7 +3590,7 @@ static bool sh_invlpg(struct vcpu *v, unsigned long linear)
         if ( mfn_to_page(sl1mfn)->u.sh.type == SH_type_fl1_shadow )
         {
-            flush_tlb_local();
+            sh_flush_local(v->domain);
             return false;
         }

@@ -3798,7 +3804,7 @@ sh_update_linear_entries(struct vcpu *v)
      * linear pagetable to read a top-level shadow page table entry.  But,
      * without this change, it would fetch the wrong value due to a stale TLB.
      */
-    flush_tlb_local();
+    sh_flush_local(d);
 }


@@ -3998,7 +4004,7 @@ sh_update_cr3(struct vcpu *v, int do_locking, bool noflush)
      * (old) shadow linear maps in the writeable mapping heuristics. */
 #if GUEST_PAGING_LEVELS == 2
     if ( sh_remove_write_access(d, gmfn, 2, 0) != 0 )
-        flush_tlb_mask(d->dirty_cpumask);
+        guest_flush_tlb_mask(d, d->dirty_cpumask);
     sh_set_toplevel_shadow(v, 0, gmfn, SH_type_l2_shadow);
 #elif GUEST_PAGING_LEVELS == 3
     /* PAE guests have four shadow_table entries, based on the
@@ -4022,7 +4028,7 @@ sh_update_cr3(struct vcpu *v, int do_locking, bool noflush)
             }
         }
         if ( flush )
-            flush_tlb_mask(d->dirty_cpumask);
+            guest_flush_tlb_mask(d, d->dirty_cpumask);
         /* Now install the new shadows. */
         for ( i = 0; i < 4; i++ )
         {
@@ -4043,7 +4049,7 @@ sh_update_cr3(struct vcpu *v, int do_locking, bool noflush)
         }
 #elif GUEST_PAGING_LEVELS == 4
     if ( sh_remove_write_access(d, gmfn, 4, 0) != 0 )
-        flush_tlb_mask(d->dirty_cpumask);
+        guest_flush_tlb_mask(d, d->dirty_cpumask);
     sh_set_toplevel_shadow(v, 0, gmfn, SH_type_l4_shadow);
     if ( !shadow_mode_external(d) && !is_pv_32bit_domain(d) )
     {
@@ -4494,7 +4500,7 @@ static void sh_pagetable_dying(paddr_t gpa)
         }
     }
     if ( flush )
-        flush_tlb_mask(d->dirty_cpumask);
+        guest_flush_tlb_mask(d, d->dirty_cpumask);

     /* Remember that we've seen the guest use this interface, so we
      * can rely on it using it in future, instead of guessing at
@@ -4531,7 +4537,7 @@ static void sh_pagetable_dying(paddr_t gpa)
         mfn_to_page(gmfn)->pagetable_dying = true;
         shadow_unhook_mappings(d, smfn, 1/* user pages only */);
         /* Now flush the TLB: we removed toplevel mappings. */
-        flush_tlb_mask(d->dirty_cpumask);
+        guest_flush_tlb_mask(d, d->dirty_cpumask);
     }

     /* Remember that we've seen the guest use this interface, so we
diff --git a/xen/include/asm-x86/flushtlb.h b/xen/include/asm-x86/flushtlb.h
index 2cfe4e6e97..798049b6ad 100644
--- a/xen/include/asm-x86/flushtlb.h
+++ b/xen/include/asm-x86/flushtlb.h
@@ -105,6 +105,12 @@ void switch_cr3_cr4(unsigned long cr3, unsigned long cr4);
 #define FLUSH_VCPU_STATE 0x1000
  /* Flush the per-cpu root page table */
 #define FLUSH_ROOT_PGTBL 0x2000
+#if CONFIG_HVM
+ /* Flush all HVM guests linear TLB (using ASID/VPID) */
+#define FLUSH_HVM_ASID_CORE 0x4000
+#else
+#define FLUSH_HVM_ASID_CORE 0
+#endif

  /* Flush local TLBs/caches. */
 unsigned int flush_area_local(const void *va, unsigned int flags);
@@ -159,4 +165,7 @@ static inline int clean_dcache_va_range(const void *p, unsigned long size)
     return clean_and_invalidate_dcache_va_range(p, size);
 }

+unsigned int guest_flush_tlb_flags(const struct domain *d);
+void guest_flush_tlb_mask(const struct domain *d, const cpumask_t *mask);
+
 #endif /* __FLUSHTLB_H__ */
-- 
2.26.0
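[Editorial note, not part of the original posting] The policy of this patch is concentrated in the new guest_flush_tlb_flags() helper: shadow domains need a host TLB flush, and HVM domains additionally need an ASID/VPID tickle when they either use shadow paging or run on AMD hardware. The standalone sketch below models that selection so the flags produced for the common domain configurations can be compiled and inspected in isolation; the struct, the flag values and the function names are the editor's stand-ins, not Xen code.

#include <stdbool.h>
#include <stdio.h>

/* Stand-in flag values, not the real ones from xen/include/asm-x86/flushtlb.h. */
#define FLUSH_TLB            0x0001
#define FLUSH_HVM_ASID_CORE  0x4000

/* Simplified stand-in for struct domain plus the host properties involved. */
struct dom_model {
    bool hvm;      /* is_hvm_domain(d)      */
    bool shadow;   /* paging_mode_shadow(d) */
    bool svm;      /* cpu_has_svm (AMD)     */
};

/* Mirrors the decision made by guest_flush_tlb_flags() in the patch above. */
static unsigned int model_flush_flags(const struct dom_model *d)
{
    bool shadow = d->shadow;
    bool asid = d->hvm && (d->svm || shadow);

    return (shadow ? FLUSH_TLB : 0) | (asid ? FLUSH_HVM_ASID_CORE : 0);
}

int main(void)
{
    const struct dom_model cases[] = {
        { .hvm = false, .shadow = false, .svm = false }, /* plain PV         */
        { .hvm = true,  .shadow = false, .svm = false }, /* HVM + HAP on VMX */
        { .hvm = true,  .shadow = false, .svm = true  }, /* HVM + HAP on SVM */
        { .hvm = true,  .shadow = true,  .svm = false }, /* HVM + shadow     */
    };

    for ( unsigned int i = 0; i < sizeof(cases) / sizeof(cases[0]); i++ )
        printf("case %u -> flags %#x\n", i, model_flush_flags(&cases[i]));

    return 0;
}

Under this model, plain PV and HVM-with-HAP-on-Intel domains produce no flags at all, which is exactly the per-flush work that removing hvm_flush_guest_tlbs() from pre_flush() is meant to save.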
From: Roger Pau Monne <roger.pau@citrix.com>
To: xen-devel@lists.xenproject.org
Subject: [PATCH v11 2/3] x86/tlb: allow disabling the TLB clock
Date: Thu, 23 Apr 2020 16:56:10 +0200
Message-ID: <20200423145611.55378-3-roger.pau@citrix.com>
In-Reply-To: <20200423145611.55378-1-roger.pau@citrix.com>
References: <20200423145611.55378-1-roger.pau@citrix.com>
Cc: Andrew Cooper, Wei Liu, Jan Beulich, Roger Pau Monne

The TLB clock is helpful when running Xen on bare metal, because when doing
a TLB flush each CPU is IPI'ed and can keep a timestamp of its last flush.

This is not the case, however, when Xen runs virtualized and the underlying
hypervisor provides mechanisms to assist in performing TLB flushes: Xen
itself, for example, offers an HVMOP_flush_tlbs hypercall that performs a
TLB flush without having to IPI each CPU. When using such mechanisms it's
no longer possible to keep a timestamp of the flushes on each CPU, as they
are performed by the underlying hypervisor.

Offer a boolean in order to signal Xen that the timestamped TLB shouldn't
be used. This avoids keeping the timestamps of the flushes, and also forces
NEED_FLUSH to always return true.

No functional change intended, as this change doesn't introduce any user
that disables the timestamped TLB.
Signed-off-by: Roger Pau Monné
Reviewed-by: Wei Liu
Acked-by: Jan Beulich
---
 xen/arch/x86/flushtlb.c        | 19 +++++++++++++------
 xen/include/asm-x86/flushtlb.h | 17 ++++++++++++++++-
 2 files changed, 29 insertions(+), 7 deletions(-)

diff --git a/xen/arch/x86/flushtlb.c b/xen/arch/x86/flushtlb.c
index 0c40b5d273..25798df50f 100644
--- a/xen/arch/x86/flushtlb.c
+++ b/xen/arch/x86/flushtlb.c
@@ -33,6 +33,9 @@ u32 tlbflush_clock = 1U;
 DEFINE_PER_CPU(u32, tlbflush_time);

+/* Signals whether the TLB flush clock is in use. */
+bool __read_mostly tlb_clk_enabled = true;
+
 /*
  * pre_flush(): Increment the virtual TLB-flush clock. Returns new clock value.
  *
@@ -83,12 +86,13 @@ static void post_flush(u32 t)
 static void do_tlb_flush(void)
 {
     unsigned long flags, cr4;
-    u32 t;
+    u32 t = 0;

     /* This non-reentrant function is sometimes called in interrupt context. */
     local_irq_save(flags);

-    t = pre_flush();
+    if ( tlb_clk_enabled )
+        t = pre_flush();

     if ( use_invpcid )
         invpcid_flush_all();
@@ -100,7 +104,8 @@ static void do_tlb_flush(void)
     else
         write_cr3(read_cr3());

-    post_flush(t);
+    if ( tlb_clk_enabled )
+        post_flush(t);

     local_irq_restore(flags);
 }
@@ -108,7 +113,7 @@ static void do_tlb_flush(void)
 void switch_cr3_cr4(unsigned long cr3, unsigned long cr4)
 {
     unsigned long flags, old_cr4;
-    u32 t;
+    u32 t = 0;

     /* Throughout this function we make this assumption: */
     ASSERT(!(cr4 & X86_CR4_PCIDE) || !(cr4 & X86_CR4_PGE));
@@ -116,7 +121,8 @@ void switch_cr3_cr4(unsigned long cr3, unsigned long cr4)
     /* This non-reentrant function is sometimes called in interrupt context. */
     local_irq_save(flags);

-    t = pre_flush();
+    if ( tlb_clk_enabled )
+        t = pre_flush();
     hvm_flush_guest_tlbs();

     old_cr4 = read_cr4();
@@ -169,7 +175,8 @@ void switch_cr3_cr4(unsigned long cr3, unsigned long cr4)
     if ( cr4 & X86_CR4_PCIDE )
         invpcid_flush_all_nonglobals();

-    post_flush(t);
+    if ( tlb_clk_enabled )
+        post_flush(t);

     local_irq_restore(flags);
 }
diff --git a/xen/include/asm-x86/flushtlb.h b/xen/include/asm-x86/flushtlb.h
index 798049b6ad..8639427cce 100644
--- a/xen/include/asm-x86/flushtlb.h
+++ b/xen/include/asm-x86/flushtlb.h
@@ -21,10 +21,21 @@ extern u32 tlbflush_clock;
 /* Time at which each CPU's TLB was last flushed. */
 DECLARE_PER_CPU(u32, tlbflush_time);

-#define tlbflush_current_time() tlbflush_clock
+/* TLB clock is in use. */
+extern bool tlb_clk_enabled;
+
+static inline uint32_t tlbflush_current_time(void)
+{
+    /* Returning 0 from tlbflush_current_time will always force a flush. */
+    return tlb_clk_enabled ? tlbflush_clock : 0;
+}

 static inline void page_set_tlbflush_timestamp(struct page_info *page)
 {
+    /* Avoid the write if the TLB clock is disabled. */
+    if ( !tlb_clk_enabled )
+        return;
+
     /*
      * Prevent storing a stale time stamp, which could happen if an update
      * to tlbflush_clock plus a subsequent flush IPI happen between the
@@ -67,6 +78,10 @@ static inline void tlbflush_filter(cpumask_t *mask, uint32_t page_timestamp)
 {
     unsigned int cpu;

+    /* Short-circuit: there's no need to iterate if the clock is disabled. */
+    if ( !tlb_clk_enabled )
+        return;
+
     for_each_cpu ( cpu, mask )
         if ( !NEED_FLUSH(per_cpu(tlbflush_time, cpu), page_timestamp) )
             __cpumask_clear_cpu(cpu, mask);
-- 
2.26.0
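[Editorial note, not part of the original posting] Disabling the clock is safe because both new early exits fail conservative: tlbflush_current_time() returning 0 and tlbflush_filter() returning early both mean "assume every CPU may still need the flush". The sketch below is a simplified standalone model of that filtering step; it uses a plain timestamp comparison instead of the real NEED_FLUSH() macro (which also copes with clock wraparound), and all names and sizes are the editor's stand-ins.

#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

#define NR_CPUS 4

static bool tlb_clk_enabled = true;
static uint32_t tlbflush_time[NR_CPUS];    /* last flush timestamp per CPU */

/*
 * Simplified model of tlbflush_filter(): clear CPUs that provably flushed
 * after the page's timestamp.  With the clock disabled nothing can be
 * proven, so every CPU stays in the mask and gets flushed.
 */
static void model_filter(bool mask[NR_CPUS], uint32_t page_timestamp)
{
    if ( !tlb_clk_enabled )
        return;

    for ( unsigned int cpu = 0; cpu < NR_CPUS; cpu++ )
        if ( mask[cpu] && tlbflush_time[cpu] > page_timestamp )
            mask[cpu] = false;
}

int main(void)
{
    bool mask[NR_CPUS] = { true, true, true, true };

    tlbflush_time[0] = 5;   /* stale: must be flushed      */
    tlbflush_time[1] = 20;  /* fresh: can be filtered out  */
    tlbflush_time[2] = 11;
    tlbflush_time[3] = 3;

    model_filter(mask, 10);

    for ( unsigned int cpu = 0; cpu < NR_CPUS; cpu++ )
        printf("cpu%u needs flush: %s\n", cpu, mask[cpu] ? "yes" : "no");

    return 0;
}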
From: Roger Pau Monne <roger.pau@citrix.com>
To: xen-devel@lists.xenproject.org
Subject: [PATCH v11 3/3] x86/tlb: use Xen L0 assisted TLB flush when available
Date: Thu, 23 Apr 2020 16:56:11 +0200
Message-ID: <20200423145611.55378-4-roger.pau@citrix.com>
In-Reply-To: <20200423145611.55378-1-roger.pau@citrix.com>
References: <20200423145611.55378-1-roger.pau@citrix.com>
Cc: Andrew Cooper, Wei Liu, Jan Beulich, Roger Pau Monne

Use Xen's L0 HVMOP_flush_tlbs hypercall in order to perform flushes. This
greatly increases the performance of TLB flushes when running with a high
number of vCPUs as a Xen guest, and is especially important when running in
shim mode.

The following figures are from a PV guest running `make -j32 xen` in shim
mode with 32 vCPUs and HAP.

Using x2APIC and ALLBUT shorthand:
real    4m35.973s
user    4m35.110s
sys     36m24.117s

Using L0 assisted flush:
real    1m2.596s
user    4m34.818s
sys     5m16.374s

The implementation adds a new hook to hypervisor_ops so other enlightenments
can also provide such an assisted flush just by filling the hook. Note that
the Xen implementation completely ignores the dirty CPU mask and the linear
address passed in, and always performs a global TLB flush on all vCPUs. This
is a limitation of the hypercall provided by Xen.
Also note that local TLB flushes are not performed using the assisted TLB
flush, only remote ones.

Signed-off-by: Roger Pau Monné
Reviewed-by: Wei Liu
Reviewed-by: Jan Beulich
---
Changes since v5:
 - Clarify commit message.
 - Test for assisted flush at setup, do this for all hypervisors.
 - Return EOPNOTSUPP if assisted flush is not available.

Changes since v4:
 - Adjust order calculation.

Changes since v3:
 - Use an alternative call for the flush hook.

Changes since v1:
 - Add a L0 assisted hook to hypervisor ops.
---
 xen/arch/x86/guest/hypervisor.c        | 14 ++++++++++++++
 xen/arch/x86/guest/xen/xen.c           |  6 ++++++
 xen/arch/x86/smp.c                     |  7 +++++++
 xen/include/asm-x86/guest/hypervisor.h | 17 +++++++++++++++++
 4 files changed, 44 insertions(+)

diff --git a/xen/arch/x86/guest/hypervisor.c b/xen/arch/x86/guest/hypervisor.c
index 647cdb1367..e46de42ded 100644
--- a/xen/arch/x86/guest/hypervisor.c
+++ b/xen/arch/x86/guest/hypervisor.c
@@ -18,6 +18,7 @@
  *
  * Copyright (c) 2019 Microsoft.
  */
+#include
 #include
 #include

@@ -51,6 +52,10 @@ void __init hypervisor_setup(void)
 {
     if ( ops.setup )
         ops.setup();
+
+    /* Check if assisted flush is available and disable the TLB clock if so. */
+    if ( !hypervisor_flush_tlb(cpumask_of(smp_processor_id()), NULL, 0) )
+        tlb_clk_enabled = false;
 }

 int hypervisor_ap_setup(void)
@@ -73,6 +78,15 @@ void __init hypervisor_e820_fixup(struct e820map *e820)
         ops.e820_fixup(e820);
 }

+int hypervisor_flush_tlb(const cpumask_t *mask, const void *va,
+                         unsigned int order)
+{
+    if ( ops.flush_tlb )
+        return alternative_call(ops.flush_tlb, mask, va, order);
+
+    return -EOPNOTSUPP;
+}
+
 /*
  * Local variables:
  * mode: C
diff --git a/xen/arch/x86/guest/xen/xen.c b/xen/arch/x86/guest/xen/xen.c
index e74fd1e995..3bc01c8723 100644
--- a/xen/arch/x86/guest/xen/xen.c
+++ b/xen/arch/x86/guest/xen/xen.c
@@ -324,12 +324,18 @@ static void __init e820_fixup(struct e820map *e820)
         pv_shim_fixup_e820(e820);
 }

+static int flush_tlb(const cpumask_t *mask, const void *va, unsigned int order)
+{
+    return xen_hypercall_hvm_op(HVMOP_flush_tlbs, NULL);
+}
+
 static const struct hypervisor_ops __initconstrel ops = {
     .name = "Xen",
     .setup = setup,
     .ap_setup = ap_setup,
     .resume = resume,
     .e820_fixup = e820_fixup,
+    .flush_tlb = flush_tlb,
 };

 const struct hypervisor_ops *__init xg_probe(void)
diff --git a/xen/arch/x86/smp.c b/xen/arch/x86/smp.c
index bcead5d01b..1d9fec65de 100644
--- a/xen/arch/x86/smp.c
+++ b/xen/arch/x86/smp.c
@@ -15,6 +15,7 @@
 #include
 #include
 #include
+#include
 #include
 #include
 #include
@@ -268,6 +269,12 @@ void flush_area_mask(const cpumask_t *mask, const void *va, unsigned int flags)
     if ( (flags & ~FLUSH_ORDER_MASK) &&
          !cpumask_subset(mask, cpumask_of(cpu)) )
     {
+        if ( cpu_has_hypervisor &&
+             !(flags & ~(FLUSH_TLB | FLUSH_TLB_GLOBAL | FLUSH_VA_VALID |
+                         FLUSH_ORDER_MASK)) &&
+             !hypervisor_flush_tlb(mask, va, (flags - 1) & FLUSH_ORDER_MASK) )
+            return;
+
         spin_lock(&flush_lock);
         cpumask_and(&flush_cpumask, mask, &cpu_online_map);
         cpumask_clear_cpu(cpu, &flush_cpumask);
diff --git a/xen/include/asm-x86/guest/hypervisor.h b/xen/include/asm-x86/guest/hypervisor.h
index ade10e74ea..77a1d21824 100644
--- a/xen/include/asm-x86/guest/hypervisor.h
+++ b/xen/include/asm-x86/guest/hypervisor.h
@@ -19,6 +19,8 @@
 #ifndef __X86_HYPERVISOR_H__
 #define __X86_HYPERVISOR_H__

+#include
+
 #include

 struct hypervisor_ops {
@@ -32,6 +34,8 @@ struct hypervisor_ops {
     void (*resume)(void);
     /* Fix up e820 map */
     void (*e820_fixup)(struct e820map *e820);
+    /* L0 assisted TLB flush */
+    int (*flush_tlb)(const cpumask_t *mask, const void *va, unsigned int order);
 };

 #ifdef CONFIG_GUEST
@@ -41,6 +45,14 @@ void hypervisor_setup(void);
 int hypervisor_ap_setup(void);
 void hypervisor_resume(void);
 void hypervisor_e820_fixup(struct e820map *e820);
+/*
+ * L0 assisted TLB flush.
+ * mask: cpumask of the dirty vCPUs that should be flushed.
+ * va: linear address to flush, or NULL for global flushes.
+ * order: order of the linear address pointed by va.
+ */
+int hypervisor_flush_tlb(const cpumask_t *mask, const void *va,
+                         unsigned int order);

 #else

@@ -52,6 +64,11 @@ static inline void hypervisor_setup(void) { ASSERT_UNREACHABLE(); }
 static inline int hypervisor_ap_setup(void) { return 0; }
 static inline void hypervisor_resume(void) { ASSERT_UNREACHABLE(); }
 static inline void hypervisor_e820_fixup(struct e820map *e820) {}
+static inline int hypervisor_flush_tlb(const cpumask_t *mask, const void *va,
+                                       unsigned int order)
+{
+    return -EOPNOTSUPP;
+}

 #endif /* CONFIG_GUEST */

-- 
2.26.0
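[Editorial note, not part of the original posting] The condition added to flush_area_mask() is what keeps the assisted path safe: HVMOP_flush_tlbs can only replace pure TLB flush requests, so anything that also asks for a cache flush or a per-CPU state flush must still take the IPI path, as must Xen running on bare metal. The standalone sketch below models just that eligibility check; the flag values and names are the editor's stand-ins rather than the real definitions from flushtlb.h.

#include <stdbool.h>
#include <stdio.h>

/* Stand-in flag values; the low bits encode the flush order. */
#define FLUSH_ORDER_MASK  0x00ff
#define FLUSH_TLB         0x0100
#define FLUSH_TLB_GLOBAL  0x0200
#define FLUSH_VA_VALID    0x0400
#define FLUSH_CACHE       0x0800   /* examples of flags that force the */
#define FLUSH_VCPU_STATE  0x1000   /* fallback to the IPI-based path   */

/* Mirrors the eligibility test added to flush_area_mask() in the patch. */
static bool can_use_assisted_flush(bool cpu_has_hypervisor, unsigned int flags)
{
    return cpu_has_hypervisor &&
           !(flags & ~(FLUSH_TLB | FLUSH_TLB_GLOBAL | FLUSH_VA_VALID |
                       FLUSH_ORDER_MASK));
}

int main(void)
{
    /* Plain TLB flush of a single page: eligible. */
    printf("%d\n", can_use_assisted_flush(true, FLUSH_TLB | FLUSH_VA_VALID | 1));
    /* Cache flush requested as well: fall back to IPIs. */
    printf("%d\n", can_use_assisted_flush(true, FLUSH_TLB | FLUSH_CACHE | 1));
    /* Bare metal: no hypervisor to assist. */
    printf("%d\n", can_use_assisted_flush(false, FLUSH_TLB | 1));

    return 0;
}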