From: Roger Pau Monne <roger.pau@citrix.com>
To: xen-devel@lists.xenproject.org
Date: Mon, 27 Jan 2020 19:11:11 +0100
Message-ID: <20200127181115.82709-4-roger.pau@citrix.com>
In-Reply-To: <20200127181115.82709-1-roger.pau@citrix.com>
References: <20200127181115.82709-1-roger.pau@citrix.com>
Subject: [Xen-devel] [PATCH v3 3/7] x86/paging: add TLB flush hooks
Cc: Wei Liu, George Dunlap, Andrew Cooper, Tim Deegan, Roger Pau Monne

Add shadow and HAP implementation-specific helpers to perform guest TLB
flushes. Note that the code for both is exactly the same at the moment,
and is copied from hvm_flush_vcpu_tlb. This will be changed by further
patches that add implementation-specific optimizations to each helper.

No functional change intended.

Signed-off-by: Roger Pau Monné <roger.pau@citrix.com>
Reviewed-by: Wei Liu
---
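As context for review, the dispatch introduced above can be modelled outside
the Xen tree with the minimal, self-contained sketch below. All structures and
model_* / flush_if_online names in it are simplified stand-ins, not the real
Xen definitions; only the shape of the flush_vcpu callback and the shadow/HAP
dispatch mirrors what the patch does.

/*
 * Minimal stand-alone model of the dispatch introduced by this patch.
 * Types and model_* helpers are illustrative stand-ins, not Xen code.
 */
#include <stdbool.h>
#include <stddef.h>
#include <stdio.h>

struct vcpu { int id; bool online; };
struct domain { bool shadow_mode; struct vcpu vcpus[4]; size_t nr_vcpus; };

/* Stand-in for shadow_flush_tlb(): flush the vCPUs selected by the callback. */
static bool model_shadow_flush_tlb(struct domain *d,
                                   bool (*flush_vcpu)(void *ctxt, struct vcpu *v),
                                   void *ctxt)
{
    for ( size_t i = 0; i < d->nr_vcpus; i++ )
        if ( flush_vcpu(ctxt, &d->vcpus[i]) )
            printf("shadow: flush TLB of vcpu %d\n", d->vcpus[i].id);
    return true;
}

/* Stand-in for hap_flush_tlb(): identical for now, as the patch notes. */
static bool model_hap_flush_tlb(struct domain *d,
                                bool (*flush_vcpu)(void *ctxt, struct vcpu *v),
                                void *ctxt)
{
    for ( size_t i = 0; i < d->nr_vcpus; i++ )
        if ( flush_vcpu(ctxt, &d->vcpus[i]) )
            printf("hap: flush TLB of vcpu %d\n", d->vcpus[i].id);
    return true;
}

/* Stand-in for hvm_flush_vcpu_tlb(): forward to the paging-mode helper. */
static bool model_flush_vcpu_tlb(struct domain *d,
                                 bool (*flush_vcpu)(void *ctxt, struct vcpu *v),
                                 void *ctxt)
{
    return d->shadow_mode ? model_shadow_flush_tlb(d, flush_vcpu, ctxt)
                          : model_hap_flush_tlb(d, flush_vcpu, ctxt);
}

/* Example callback: flush every online vCPU (in the spirit of always_flush). */
static bool flush_if_online(void *ctxt, struct vcpu *v)
{
    (void)ctxt;
    return v->online;
}

int main(void)
{
    struct domain d = {
        .shadow_mode = false,
        .vcpus = { { 0, true }, { 1, false }, { 2, true } },
        .nr_vcpus = 3,
    };

    return model_flush_vcpu_tlb(&d, flush_if_online, NULL) ? 0 : 1;
}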
 xen/arch/x86/hvm/hvm.c          | 51 ++----------------------
 xen/arch/x86/mm/hap/hap.c       | 54 ++++++++++++++++++++++++++++++++
 xen/arch/x86/mm/shadow/common.c | 55 +++++++++++++++++++++++++++++++++
 xen/arch/x86/mm/shadow/multi.c  |  1 -
 xen/include/asm-x86/hap.h       |  3 ++
 xen/include/asm-x86/shadow.h    | 12 +++++++
 6 files changed, 127 insertions(+), 49 deletions(-)

diff --git a/xen/arch/x86/hvm/hvm.c b/xen/arch/x86/hvm/hvm.c
index 0b93609a82..96c419f0ef 100644
--- a/xen/arch/x86/hvm/hvm.c
+++ b/xen/arch/x86/hvm/hvm.c
@@ -3986,55 +3986,10 @@ static void hvm_s3_resume(struct domain *d)
 bool hvm_flush_vcpu_tlb(bool (*flush_vcpu)(void *ctxt, struct vcpu *v),
                         void *ctxt)
 {
-    static DEFINE_PER_CPU(cpumask_t, flush_cpumask);
-    cpumask_t *mask = &this_cpu(flush_cpumask);
-    struct domain *d = current->domain;
-    struct vcpu *v;
-
-    /* Avoid deadlock if more than one vcpu tries this at the same time. */
-    if ( !spin_trylock(&d->hypercall_deadlock_mutex) )
-        return false;
-
-    /* Pause all other vcpus. */
-    for_each_vcpu ( d, v )
-        if ( v != current && flush_vcpu(ctxt, v) )
-            vcpu_pause_nosync(v);
-
-    /* Now that all VCPUs are signalled to deschedule, we wait... */
-    for_each_vcpu ( d, v )
-        if ( v != current && flush_vcpu(ctxt, v) )
-            while ( !vcpu_runnable(v) && v->is_running )
-                cpu_relax();
-
-    /* All other vcpus are paused, safe to unlock now. */
-    spin_unlock(&d->hypercall_deadlock_mutex);
-
-    cpumask_clear(mask);
-
-    /* Flush paging-mode soft state (e.g., va->gfn cache; PAE PDPE cache). */
-    for_each_vcpu ( d, v )
-    {
-        unsigned int cpu;
-
-        if ( !flush_vcpu(ctxt, v) )
-            continue;
-
-        paging_update_cr3(v, false);
+    struct domain *currd = current->domain;
 
-        cpu = read_atomic(&v->dirty_cpu);
-        if ( is_vcpu_dirty_cpu(cpu) )
-            __cpumask_set_cpu(cpu, mask);
-    }
-
-    /* Flush TLBs on all CPUs with dirty vcpu state. */
-    flush_tlb_mask(mask);
-
-    /* Done. */
-    for_each_vcpu ( d, v )
-        if ( v != current && flush_vcpu(ctxt, v) )
-            vcpu_unpause(v);
-
-    return true;
+    return shadow_mode_enabled(currd) ? shadow_flush_tlb(flush_vcpu, ctxt)
+                                      : hap_flush_tlb(flush_vcpu, ctxt);
 }
 
 static bool always_flush(void *ctxt, struct vcpu *v)
diff --git a/xen/arch/x86/mm/hap/hap.c b/xen/arch/x86/mm/hap/hap.c
index 3d93f3451c..6894c1aa38 100644
--- a/xen/arch/x86/mm/hap/hap.c
+++ b/xen/arch/x86/mm/hap/hap.c
@@ -669,6 +669,60 @@ static void hap_update_cr3(struct vcpu *v, int do_locking, bool noflush)
     hvm_update_guest_cr3(v, noflush);
 }
 
+bool hap_flush_tlb(bool (*flush_vcpu)(void *ctxt, struct vcpu *v),
+                   void *ctxt)
+{
+    static DEFINE_PER_CPU(cpumask_t, flush_cpumask);
+    cpumask_t *mask = &this_cpu(flush_cpumask);
+    struct domain *d = current->domain;
+    struct vcpu *v;
+
+    /* Avoid deadlock if more than one vcpu tries this at the same time. */
+    if ( !spin_trylock(&d->hypercall_deadlock_mutex) )
+        return false;
+
+    /* Pause all other vcpus. */
+    for_each_vcpu ( d, v )
+        if ( v != current && flush_vcpu(ctxt, v) )
+            vcpu_pause_nosync(v);
+
+    /* Now that all VCPUs are signalled to deschedule, we wait... */
+    for_each_vcpu ( d, v )
+        if ( v != current && flush_vcpu(ctxt, v) )
+            while ( !vcpu_runnable(v) && v->is_running )
+                cpu_relax();
+
+    /* All other vcpus are paused, safe to unlock now. */
+    spin_unlock(&d->hypercall_deadlock_mutex);
+
+    cpumask_clear(mask);
+
+    /* Flush paging-mode soft state (e.g., va->gfn cache; PAE PDPE cache). */
+    for_each_vcpu ( d, v )
+    {
+        unsigned int cpu;
+
+        if ( !flush_vcpu(ctxt, v) )
+            continue;
+
+        paging_update_cr3(v, false);
+
+        cpu = read_atomic(&v->dirty_cpu);
+        if ( is_vcpu_dirty_cpu(cpu) )
+            __cpumask_set_cpu(cpu, mask);
+    }
+
+    /* Flush TLBs on all CPUs with dirty vcpu state. */
+    flush_tlb_mask(mask);
+
+    /* Done. */
+    for_each_vcpu ( d, v )
+        if ( v != current && flush_vcpu(ctxt, v) )
+            vcpu_unpause(v);
+
+    return true;
+}
+
 const struct paging_mode *
 hap_paging_get_mode(struct vcpu *v)
 {
diff --git a/xen/arch/x86/mm/shadow/common.c b/xen/arch/x86/mm/shadow/common.c
index 6212ec2c4a..ee90e55b41 100644
--- a/xen/arch/x86/mm/shadow/common.c
+++ b/xen/arch/x86/mm/shadow/common.c
@@ -3357,6 +3357,61 @@ out:
     return rc;
 }
 
+/* Flush TLB of selected vCPUs. */
+bool shadow_flush_tlb(bool (*flush_vcpu)(void *ctxt, struct vcpu *v),
+                      void *ctxt)
+{
+    static DEFINE_PER_CPU(cpumask_t, flush_cpumask);
+    cpumask_t *mask = &this_cpu(flush_cpumask);
+    struct domain *d = current->domain;
+    struct vcpu *v;
+
+    /* Avoid deadlock if more than one vcpu tries this at the same time. */
+    if ( !spin_trylock(&d->hypercall_deadlock_mutex) )
+        return false;
+
+    /* Pause all other vcpus. */
+    for_each_vcpu ( d, v )
+        if ( v != current && flush_vcpu(ctxt, v) )
+            vcpu_pause_nosync(v);
+
+    /* Now that all VCPUs are signalled to deschedule, we wait... */
+    for_each_vcpu ( d, v )
+        if ( v != current && flush_vcpu(ctxt, v) )
+            while ( !vcpu_runnable(v) && v->is_running )
+                cpu_relax();
+
+    /* All other vcpus are paused, safe to unlock now. */
+    spin_unlock(&d->hypercall_deadlock_mutex);
+
+    cpumask_clear(mask);
+
+    /* Flush paging-mode soft state (e.g., va->gfn cache; PAE PDPE cache). */
+    for_each_vcpu ( d, v )
+    {
+        unsigned int cpu;
+
+        if ( !flush_vcpu(ctxt, v) )
+            continue;
+
+        paging_update_cr3(v, false);
+
+        cpu = read_atomic(&v->dirty_cpu);
+        if ( is_vcpu_dirty_cpu(cpu) )
+            __cpumask_set_cpu(cpu, mask);
+    }
+
+    /* Flush TLBs on all CPUs with dirty vcpu state. */
+    flush_tlb_mask(mask);
+
+    /* Done. */
+    for_each_vcpu ( d, v )
+        if ( v != current && flush_vcpu(ctxt, v) )
+            vcpu_unpause(v);
+
+    return true;
+}
+
 /**************************************************************************/
 /* Shadow-control XEN_DOMCTL dispatcher */
 
diff --git a/xen/arch/x86/mm/shadow/multi.c b/xen/arch/x86/mm/shadow/multi.c
index 26798b317c..dfe264cf83 100644
--- a/xen/arch/x86/mm/shadow/multi.c
+++ b/xen/arch/x86/mm/shadow/multi.c
@@ -4157,7 +4157,6 @@ sh_update_cr3(struct vcpu *v, int do_locking, bool noflush)
     if ( do_locking ) paging_unlock(v->domain);
 }
 
-
 /**************************************************************************/
 /* Functions to revoke guest rights */
 
diff --git a/xen/include/asm-x86/hap.h b/xen/include/asm-x86/hap.h
index b94bfb4ed0..0c6aa26b9b 100644
--- a/xen/include/asm-x86/hap.h
+++ b/xen/include/asm-x86/hap.h
@@ -46,6 +46,9 @@ int hap_track_dirty_vram(struct domain *d,
 extern const struct paging_mode *hap_paging_get_mode(struct vcpu *);
 int hap_set_allocation(struct domain *d, unsigned int pages, bool *preempted);
 
+bool hap_flush_tlb(bool (*flush_vcpu)(void *ctxt, struct vcpu *v),
+                   void *ctxt);
+
 #endif /* XEN_HAP_H */
 
 /*
diff --git a/xen/include/asm-x86/shadow.h b/xen/include/asm-x86/shadow.h
index 907c71f497..3c1f6df478 100644
--- a/xen/include/asm-x86/shadow.h
+++ b/xen/include/asm-x86/shadow.h
@@ -95,6 +95,10 @@ void shadow_blow_tables_per_domain(struct domain *d);
 int shadow_set_allocation(struct domain *d, unsigned int pages,
                           bool *preempted);
 
+/* Flush the TLB of the selected vCPUs. */
+bool shadow_flush_tlb(bool (*flush_vcpu)(void *ctxt, struct vcpu *v),
+                      void *ctxt);
+
 #else /* !CONFIG_SHADOW_PAGING */
 
 #define shadow_teardown(d, p) ASSERT(is_pv_domain(d))
@@ -106,6 +110,14 @@ int shadow_set_allocation(struct domain *d, unsigned int pages,
 #define shadow_set_allocation(d, pages, preempted) \
     ({ ASSERT_UNREACHABLE(); -EOPNOTSUPP; })
 
+static inline bool shadow_flush_tlb(bool (*flush_vcpu)(void *ctxt,
+                                                       struct vcpu *v),
+                                    void *ctxt)
+{
+    ASSERT_UNREACHABLE();
+    return -EOPNOTSUPP;
+}
+
 static inline void sh_remove_shadows(struct domain *d, mfn_t gmfn,
                                      int fast, int all) {}
 
-- 
2.25.0

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xenproject.org
https://lists.xenproject.org/mailman/listinfo/xen-devel