From: Roger Pau Monne <roger.pau@citrix.com>
To: xen-devel@lists.xenproject.org
Date: Mon, 27 Jan 2020 19:11:12 +0100
Message-ID: <20200127181115.82709-5-roger.pau@citrix.com>
X-Mailer: git-send-email 2.25.0
In-Reply-To:
    <20200127181115.82709-1-roger.pau@citrix.com>
References: <20200127181115.82709-1-roger.pau@citrix.com>
MIME-Version: 1.0
Subject: [Xen-devel] [PATCH v3 4/7] x86/hap: improve hypervisor assisted guest TLB flush
Cc: George Dunlap, Andrew Cooper, Wei Liu, Roger Pau Monne

The current implementation of the hypervisor assisted flush for HAP is
extremely inefficient.

First of all, there is no need to call paging_update_cr3: the only part
of that function relevant to a flush is the vCPU ASID flush, so call
hvm_asid_flush_vcpu directly instead.

Secondly, since hvm_asid_flush_vcpu uses atomic operations to protect
against concurrent callers, there is no longer any need to pause the
affected vCPUs.

Finally, the global TLB flush performed by flush_tlb_mask is not
necessary either: since only guest TLB state needs to be flushed, it is
enough to trigger a vmexit on the pCPUs currently holding any vCPU
state, as such a vmexit already performs an ASID/VPID update and thus
clears the guest TLB.
Signed-off-by: Roger Pau Monné
Reviewed-by: Wei Liu
---
 xen/arch/x86/mm/hap/hap.c | 48 +++++++++++++++++++++++---------------------------
 1 file changed, 21 insertions(+), 27 deletions(-)

diff --git a/xen/arch/x86/mm/hap/hap.c b/xen/arch/x86/mm/hap/hap.c
index 6894c1aa38..401eaf8026 100644
--- a/xen/arch/x86/mm/hap/hap.c
+++ b/xen/arch/x86/mm/hap/hap.c
@@ -669,32 +669,24 @@ static void hap_update_cr3(struct vcpu *v, int do_locking, bool noflush)
     hvm_update_guest_cr3(v, noflush);
 }
 
+static void do_flush(void *data)
+{
+    cpumask_t *mask = data;
+    unsigned int cpu = smp_processor_id();
+
+    ASSERT(cpumask_test_cpu(cpu, mask));
+    cpumask_clear_cpu(cpu, mask);
+}
+
 bool hap_flush_tlb(bool (*flush_vcpu)(void *ctxt, struct vcpu *v),
                    void *ctxt)
 {
     static DEFINE_PER_CPU(cpumask_t, flush_cpumask);
     cpumask_t *mask = &this_cpu(flush_cpumask);
     struct domain *d = current->domain;
+    unsigned int this_cpu = smp_processor_id();
     struct vcpu *v;
 
-    /* Avoid deadlock if more than one vcpu tries this at the same time. */
-    if ( !spin_trylock(&d->hypercall_deadlock_mutex) )
-        return false;
-
-    /* Pause all other vcpus. */
-    for_each_vcpu ( d, v )
-        if ( v != current && flush_vcpu(ctxt, v) )
-            vcpu_pause_nosync(v);
-
-    /* Now that all VCPUs are signalled to deschedule, we wait... */
-    for_each_vcpu ( d, v )
-        if ( v != current && flush_vcpu(ctxt, v) )
-            while ( !vcpu_runnable(v) && v->is_running )
-                cpu_relax();
-
-    /* All other vcpus are paused, safe to unlock now. */
-    spin_unlock(&d->hypercall_deadlock_mutex);
-
     cpumask_clear(mask);
 
     /* Flush paging-mode soft state (e.g., va->gfn cache; PAE PDPE cache). */
@@ -705,20 +697,22 @@ bool hap_flush_tlb(bool (*flush_vcpu)(void *ctxt, struct vcpu *v),
         if ( !flush_vcpu(ctxt, v) )
             continue;
 
-        paging_update_cr3(v, false);
+        hvm_asid_flush_vcpu(v);
 
         cpu = read_atomic(&v->dirty_cpu);
-        if ( is_vcpu_dirty_cpu(cpu) )
+        if ( cpu != this_cpu && is_vcpu_dirty_cpu(cpu) )
             __cpumask_set_cpu(cpu, mask);
     }
 
-    /* Flush TLBs on all CPUs with dirty vcpu state. */
-    flush_tlb_mask(mask);
-
-    /* Done. */
-    for_each_vcpu ( d, v )
-        if ( v != current && flush_vcpu(ctxt, v) )
-            vcpu_unpause(v);
+    /*
+     * Trigger a vmexit on all pCPUs with dirty vCPU state in order to force an
+     * ASID/VPID change and hence accomplish a guest TLB flush. Note that vCPUs
+     * not currently running will already be flushed when scheduled because of
+     * the ASID tickle done in the loop above.
+     */
+    on_selected_cpus(mask, do_flush, mask, 0);
+    while ( !cpumask_empty(mask) )
+        cpu_relax();
 
     return true;
 }
-- 
2.25.0

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xenproject.org
https://lists.xenproject.org/mailman/listinfo/xen-devel