From: Roger Pau Monne <roger.pau@citrix.com>
To: xen-devel@lists.xenproject.org
Cc: Andrew Cooper, Wei Liu, Jan Beulich, Roger Pau Monne
Date: Mon, 10 Feb 2020 18:28:29 +0100
Message-ID: <20200210172829.43604-8-roger.pau@citrix.com>
In-Reply-To: <20200210172829.43604-1-roger.pau@citrix.com>
References: <20200210172829.43604-1-roger.pau@citrix.com>
X-Mailer: git-send-email 2.25.0
Subject: [Xen-devel] [PATCH v4 7/7] x86/tlb: use Xen L0 assisted TLB flush when available

Use Xen's L0 HVMOP_flush_tlbs hypercall in order to perform flushes.
This greatly increases the performance of TLB flushes when running
with a high number of vCPUs as a Xen guest, and is especially
important when running in shim mode.

The following figures are from a PV guest running `make -j32 xen` in
shim mode with 32 vCPUs and HAP.

Using x2APIC and ALLBUT shorthand:
real	4m35.973s
user	4m35.110s
sys	36m24.117s

Using L0 assisted flush:
real	1m2.596s
user	4m34.818s
sys	5m16.374s

The implementation adds a new hook to hypervisor_ops so that other
enlightenments can also provide such an assisted flush simply by
filling in the hook. Note that the Xen implementation completely
ignores the dirty CPU mask and the linear address passed in, and
always performs a global TLB flush on all vCPUs.

Signed-off-by: Roger Pau Monné <roger.pau@citrix.com>
Reviewed-by: Wei Liu
---
Changes since v3:
 - Use an alternative call for the flush hook.

Changes since v1:
 - Add a L0 assisted hook to hypervisor ops.
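As an aside (not part of the commit message or of the diff below), the
point above about other enlightenments only needing to fill in the new
hook can be illustrated with a minimal sketch of a hypothetical non-Xen
backend; the backend name and the other_flush_hypercall() wrapper are
made up for illustration only:

    /* Illustrative sketch only, not part of this patch. */
    static int other_flush_tlb(const cpumask_t *mask, const void *va,
                               unsigned int order)
    {
        /* Hypothetical wrapper around that hypervisor's assisted flush. */
        return other_flush_hypercall(mask, va, order);
    }

    static const struct hypervisor_ops __initdata other_ops = {
        .name      = "Other",
        .flush_tlb = other_flush_tlb, /* reached via hypervisor_flush_tlb() */
    };

No further changes to common code would be needed, since
hypervisor_flush_tlb() dispatches through the hook with
alternative_call().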
---
 xen/arch/x86/guest/hypervisor.c        | 10 ++++++++++
 xen/arch/x86/guest/xen/xen.c           |  6 ++++++
 xen/arch/x86/smp.c                     | 11 +++++++++++
 xen/include/asm-x86/guest/hypervisor.h | 17 +++++++++++++++++
 4 files changed, 44 insertions(+)

diff --git a/xen/arch/x86/guest/hypervisor.c b/xen/arch/x86/guest/hypervisor.c
index 647cdb1367..47e938e287 100644
--- a/xen/arch/x86/guest/hypervisor.c
+++ b/xen/arch/x86/guest/hypervisor.c
@@ -18,6 +18,7 @@
  *
  * Copyright (c) 2019 Microsoft.
  */
+#include
 #include
 #include
 
@@ -73,6 +74,15 @@ void __init hypervisor_e820_fixup(struct e820map *e820)
         ops.e820_fixup(e820);
 }
 
+int hypervisor_flush_tlb(const cpumask_t *mask, const void *va,
+                         unsigned int order)
+{
+    if ( ops.flush_tlb )
+        return alternative_call(ops.flush_tlb, mask, va, order);
+
+    return -ENOSYS;
+}
+
 /*
  * Local variables:
  * mode: C
diff --git a/xen/arch/x86/guest/xen/xen.c b/xen/arch/x86/guest/xen/xen.c
index f151b07548..5d3427a713 100644
--- a/xen/arch/x86/guest/xen/xen.c
+++ b/xen/arch/x86/guest/xen/xen.c
@@ -324,12 +324,18 @@ static void __init e820_fixup(struct e820map *e820)
         pv_shim_fixup_e820(e820);
 }
 
+static int flush_tlb(const cpumask_t *mask, const void *va, unsigned int order)
+{
+    return xen_hypercall_hvm_op(HVMOP_flush_tlbs, NULL);
+}
+
 static const struct hypervisor_ops __initdata ops = {
     .name = "Xen",
     .setup = setup,
     .ap_setup = ap_setup,
     .resume = resume,
     .e820_fixup = e820_fixup,
+    .flush_tlb = flush_tlb,
 };
 
 const struct hypervisor_ops *__init xg_probe(void)
diff --git a/xen/arch/x86/smp.c b/xen/arch/x86/smp.c
index 65eb7cbda8..9bc925616a 100644
--- a/xen/arch/x86/smp.c
+++ b/xen/arch/x86/smp.c
@@ -15,6 +15,7 @@
 #include
 #include
 #include
+#include
 #include
 #include
 #include
@@ -256,6 +257,16 @@ void flush_area_mask(const cpumask_t *mask, const void *va, unsigned int flags)
     if ( (flags & ~FLUSH_ORDER_MASK) &&
          !cpumask_subset(mask, cpumask_of(cpu)) )
     {
+        if ( cpu_has_hypervisor &&
+             !(flags & ~(FLUSH_TLB | FLUSH_TLB_GLOBAL | FLUSH_VA_VALID |
+                         FLUSH_ORDER_MASK)) &&
+             !hypervisor_flush_tlb(mask, va, flags & FLUSH_ORDER_MASK) )
+        {
+            if ( tlb_clk_enabled )
+                tlb_clk_enabled = false;
+            return;
+        }
+
         spin_lock(&flush_lock);
         cpumask_and(&flush_cpumask, mask, &cpu_online_map);
         cpumask_clear_cpu(cpu, &flush_cpumask);
diff --git a/xen/include/asm-x86/guest/hypervisor.h b/xen/include/asm-x86/guest/hypervisor.h
index ade10e74ea..432e57c2a0 100644
--- a/xen/include/asm-x86/guest/hypervisor.h
+++ b/xen/include/asm-x86/guest/hypervisor.h
@@ -19,6 +19,8 @@
 #ifndef __X86_HYPERVISOR_H__
 #define __X86_HYPERVISOR_H__
 
+#include
+
 #include
 
 struct hypervisor_ops {
@@ -32,6 +34,8 @@ struct hypervisor_ops {
     void (*resume)(void);
     /* Fix up e820 map */
     void (*e820_fixup)(struct e820map *e820);
+    /* L0 assisted TLB flush */
+    int (*flush_tlb)(const cpumask_t *mask, const void *va, unsigned int order);
 };
 
 #ifdef CONFIG_GUEST
@@ -41,6 +45,14 @@ void hypervisor_setup(void);
 int hypervisor_ap_setup(void);
 void hypervisor_resume(void);
 void hypervisor_e820_fixup(struct e820map *e820);
+/*
+ * L0 assisted TLB flush.
+ * mask: cpumask of the dirty vCPUs that should be flushed.
+ * va: linear address to flush, or NULL for global flushes.
+ * order: order of the linear address pointed to by va.
+ */
+int hypervisor_flush_tlb(const cpumask_t *mask, const void *va,
+                         unsigned int order);
 
 #else
 
@@ -52,6 +64,11 @@ static inline void hypervisor_setup(void) { ASSERT_UNREACHABLE(); }
 static inline int hypervisor_ap_setup(void) { return 0; }
 static inline void hypervisor_resume(void) { ASSERT_UNREACHABLE(); }
 static inline void hypervisor_e820_fixup(struct e820map *e820) {}
+static inline int hypervisor_flush_tlb(const cpumask_t *mask, const void *va,
+                                       unsigned int order)
+{
+    return -ENOSYS;
+}
 
 #endif /* CONFIG_GUEST */
 
-- 
2.25.0
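For readers outside the Xen tree, the caller-side pattern added to
flush_area_mask() above can be boiled down to the following standalone
sketch. All type and flag definitions here are simplified stand-ins
for the real Xen ones, and the stub hypervisor_flush_tlb() merely
mimics a successful HVMOP_flush_tlbs call:

    /* Standalone illustration with simplified stand-ins, not Xen code. */
    #include <stdio.h>

    typedef unsigned long cpumask_t;      /* stand-in for Xen's cpumask_t */

    #define FLUSH_ORDER_MASK  0xffu       /* low bits carry the order */
    #define FLUSH_TLB         (1u << 8)
    #define FLUSH_TLB_GLOBAL  (1u << 9)
    #define FLUSH_VA_VALID    (1u << 10)
    #define FLUSH_CACHE       (1u << 11)  /* example of a non-TLB flag */

    /* Stub: pretend the L0 assisted flush is available and succeeds. */
    static int hypervisor_flush_tlb(const cpumask_t *mask, const void *va,
                                    unsigned int order)
    {
        (void)mask; (void)va; (void)order; /* Xen's L0 ignores these anyway */
        return 0;
    }

    static void flush_area_mask(const cpumask_t *mask, const void *va,
                                unsigned int flags)
    {
        /* Only pure TLB flushes may be offloaded to the hypervisor. */
        if ( !(flags & ~(FLUSH_TLB | FLUSH_TLB_GLOBAL | FLUSH_VA_VALID |
                         FLUSH_ORDER_MASK)) &&
             !hypervisor_flush_tlb(mask, va, flags & FLUSH_ORDER_MASK) )
        {
            printf("assisted flush\n");
            return;
        }

        /* Anything else falls back to the IPI-based flush (elided here). */
        printf("IPI-based flush\n");
    }

    int main(void)
    {
        cpumask_t mask = ~0ul;

        flush_area_mask(&mask, NULL, FLUSH_TLB | FLUSH_TLB_GLOBAL); /* offloaded */
        flush_area_mask(&mask, NULL, FLUSH_CACHE | FLUSH_TLB);      /* fallback  */
        return 0;
    }

The fallback path also covers the case where hypervisor_flush_tlb()
returns -ENOSYS because no backend fills the hook, which matches the
behaviour of the !CONFIG_GUEST stub in hypervisor.h.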