From nobody Sun Apr 28 20:11:48 2024
From: Roger Pau Monne
Date: Tue, 3 Mar 2020 18:20:41 +0100
Subject: [Xen-devel] [PATCH v6 1/6] x86/hvm: allow ASID flush when v != current
Message-ID: <20200303172046.50569-2-roger.pau@citrix.com>
In-Reply-To: <20200303172046.50569-1-roger.pau@citrix.com>
Cc: Andrew Cooper, Wei Liu, Jan Beulich, Roger Pau Monne

The current implementation of hvm_asid_flush_vcpu is not safe to use
unless the target vCPU is either paused or the currently running one,
as it modifies the generation without any locking.

Fix this by using atomic operations when accessing the generation
field, both in hvm_asid_flush_vcpu_asid and the other ASID functions.
This allows the current ASID generation to be flushed safely. Note
that for the flush to take effect while the vCPU is running, a vmexit
is required.

Compilers will normally emit such reads and writes as single
instructions, so the atomic operations are mostly a safety measure.

Note that the same could be achieved by introducing an extra field in
hvm_vcpu_asid that signals hvm_asid_handle_vmenter to call
hvm_asid_flush_vcpu on the given vCPU before vmentry; this however
seems unnecessary, as hvm_asid_flush_vcpu itself only sets two vCPU
fields to 0, so there is no reason to defer the work to the vmentry
ASID helper.

This is not a bugfix, as no callers violating the assumptions listed
in the first paragraph have been found; rather it is a preparatory
change in order to allow remote flushing of HVM vCPUs.
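As a minimal sketch of the pattern in question (using C11 atomics in
place of Xen's read_atomic/write_atomic, with simplified field names;
this is an illustration, not the Xen code itself):

    #include <stdatomic.h>
    #include <stdbool.h>
    #include <stdint.h>

    struct vcpu_asid {
        _Atomic uint64_t generation; /* 0 means "needs a fresh ASID" */
        uint32_t asid;
    };

    /* Remote flush: safe even while the target vCPU keeps running. */
    static void asid_flush(struct vcpu_asid *a)
    {
        atomic_store(&a->generation, 0);
    }

    /* Checked on vmentry: a mismatch (including the forced 0) causes a
     * fresh ASID to be allocated, which flushes the guest TLB. */
    static bool asid_needs_refresh(struct vcpu_asid *a, uint64_t core_gen)
    {
        return atomic_load(&a->generation) != core_gen;
    }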
Signed-off-by: Roger Pau Monné
Reviewed-by: Wei Liu
Acked-by: Jan Beulich
---
 xen/arch/x86/hvm/asid.c | 6 +++---
 1 file changed, 3 insertions(+), 3 deletions(-)

diff --git a/xen/arch/x86/hvm/asid.c b/xen/arch/x86/hvm/asid.c
index 8e00a28443..63ce462d56 100644
--- a/xen/arch/x86/hvm/asid.c
+++ b/xen/arch/x86/hvm/asid.c
@@ -83,7 +83,7 @@ void hvm_asid_init(int nasids)
 
 void hvm_asid_flush_vcpu_asid(struct hvm_vcpu_asid *asid)
 {
-    asid->generation = 0;
+    write_atomic(&asid->generation, 0);
 }
 
 void hvm_asid_flush_vcpu(struct vcpu *v)
@@ -121,7 +121,7 @@ bool_t hvm_asid_handle_vmenter(struct hvm_vcpu_asid *asid)
         goto disabled;
 
     /* Test if VCPU has valid ASID. */
-    if ( asid->generation == data->core_asid_generation )
+    if ( read_atomic(&asid->generation) == data->core_asid_generation )
         return 0;
 
     /* If there are no free ASIDs, need to go to a new generation */
@@ -135,7 +135,7 @@ bool_t hvm_asid_handle_vmenter(struct hvm_vcpu_asid *asid)
 
     /* Now guaranteed to be a free ASID. */
     asid->asid = data->next_asid++;
-    asid->generation = data->core_asid_generation;
+    write_atomic(&asid->generation, data->core_asid_generation);
 
     /*
      * When we assign ASID 1, flush all TLB entries as we are starting a new
-- 
2.25.0
From nobody Sun Apr 28 20:11:48 2024
From: Roger Pau Monne
Date: Tue, 3 Mar 2020 18:20:42 +0100
Subject: [Xen-devel] [PATCH v6 2/6] x86/paging: add TLB flush hooks
Message-ID: <20200303172046.50569-3-roger.pau@citrix.com>
In-Reply-To: <20200303172046.50569-1-roger.pau@citrix.com>
Cc: Wei Liu, Andrew Cooper, Paul Durrant, Tim Deegan, George Dunlap, Jan Beulich, Roger Pau Monne
Add shadow and HAP implementation specific helpers to perform guest
TLB flushes. Note that the code for both is exactly the same at the
moment, copied from hvm_flush_vcpu_tlb; further patches will add
implementation specific optimizations to each.

No functional change intended.
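A minimal sketch of the hook-table dispatch this patch introduces
(standalone C with invented names; Xen's real struct paging_mode
carries many more hooks):

    #include <stdbool.h>
    #include <stddef.h>

    struct vcpu;

    /* Each paging implementation fills in its own flush callback. */
    struct paging_ops {
        bool (*flush_tlb)(bool (*flush_vcpu)(void *ctxt, struct vcpu *v),
                          void *ctxt);
    };

    static bool hap_flush(bool (*flush_vcpu)(void *ctxt, struct vcpu *v),
                          void *ctxt)
    {
        /* HAP-specific flush logic would live here. */
        (void)flush_vcpu; (void)ctxt;
        return true;
    }

    static const struct paging_ops hap_ops = { .flush_tlb = hap_flush };

    /* Callers dispatch through the active mode and no longer care
     * which implementation (shadow or HAP) is behind it. */
    static bool paging_flush(const struct paging_ops *mode,
                             bool (*flush_vcpu)(void *ctxt, struct vcpu *v),
                             void *ctxt)
    {
        return mode->flush_tlb(flush_vcpu, ctxt);
    }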
Signed-off-by: Roger Pau Monné
Reviewed-by: Wei Liu
Acked-by: Tim Deegan
Reviewed-by: Jan Beulich
---
Changes since v5:
 - Make the flush tlb operation a paging_mode hook.

Changes since v3:
 - Fix stray newline removal.
 - Fix return of shadow_flush_tlb dummy function.
---
 xen/arch/x86/hvm/hvm.c               | 56 +--------
 xen/arch/x86/hvm/viridian/viridian.c |  2 +-
 xen/arch/x86/mm/hap/hap.c            | 58 ++++++++++
 xen/arch/x86/mm/shadow/common.c      | 55 +++++++++
 xen/arch/x86/mm/shadow/multi.c       |  1 +
 xen/arch/x86/mm/shadow/private.h     |  4 ++
 xen/include/asm-x86/hvm/hvm.h        |  3 --
 xen/include/asm-x86/paging.h         | 10 +++++
 8 files changed, 130 insertions(+), 59 deletions(-)

diff --git a/xen/arch/x86/hvm/hvm.c b/xen/arch/x86/hvm/hvm.c
index db5d7b4d30..a2abad9f76 100644
--- a/xen/arch/x86/hvm/hvm.c
+++ b/xen/arch/x86/hvm/hvm.c
@@ -3988,60 +3988,6 @@ static void hvm_s3_resume(struct domain *d)
     }
 }
 
-bool hvm_flush_vcpu_tlb(bool (*flush_vcpu)(void *ctxt, struct vcpu *v),
-                        void *ctxt)
-{
-    static DEFINE_PER_CPU(cpumask_t, flush_cpumask);
-    cpumask_t *mask = &this_cpu(flush_cpumask);
-    struct domain *d = current->domain;
-    struct vcpu *v;
-
-    /* Avoid deadlock if more than one vcpu tries this at the same time. */
-    if ( !spin_trylock(&d->hypercall_deadlock_mutex) )
-        return false;
-
-    /* Pause all other vcpus. */
-    for_each_vcpu ( d, v )
-        if ( v != current && flush_vcpu(ctxt, v) )
-            vcpu_pause_nosync(v);
-
-    /* Now that all VCPUs are signalled to deschedule, we wait... */
-    for_each_vcpu ( d, v )
-        if ( v != current && flush_vcpu(ctxt, v) )
-            while ( !vcpu_runnable(v) && v->is_running )
-                cpu_relax();
-
-    /* All other vcpus are paused, safe to unlock now. */
-    spin_unlock(&d->hypercall_deadlock_mutex);
-
-    cpumask_clear(mask);
-
-    /* Flush paging-mode soft state (e.g., va->gfn cache; PAE PDPE cache). */
-    for_each_vcpu ( d, v )
-    {
-        unsigned int cpu;
-
-        if ( !flush_vcpu(ctxt, v) )
-            continue;
-
-        paging_update_cr3(v, false);
-
-        cpu = read_atomic(&v->dirty_cpu);
-        if ( is_vcpu_dirty_cpu(cpu) )
-            __cpumask_set_cpu(cpu, mask);
-    }
-
-    /* Flush TLBs on all CPUs with dirty vcpu state. */
-    flush_tlb_mask(mask);
-
-    /* Done. */
-    for_each_vcpu ( d, v )
-        if ( v != current && flush_vcpu(ctxt, v) )
-            vcpu_unpause(v);
-
-    return true;
-}
-
 static bool always_flush(void *ctxt, struct vcpu *v)
 {
     return true;
@@ -4052,7 +3998,7 @@ static int hvmop_flush_tlb_all(void)
     if ( !is_hvm_domain(current->domain) )
         return -EINVAL;
 
-    return hvm_flush_vcpu_tlb(always_flush, NULL) ? 0 : -ERESTART;
+    return paging_flush_tlb(always_flush, NULL) ? 0 : -ERESTART;
 }
 
 static int hvmop_set_evtchn_upcall_vector(
diff --git a/xen/arch/x86/hvm/viridian/viridian.c b/xen/arch/x86/hvm/viridian/viridian.c
index cd8f210198..977c1bc54f 100644
--- a/xen/arch/x86/hvm/viridian/viridian.c
+++ b/xen/arch/x86/hvm/viridian/viridian.c
@@ -609,7 +609,7 @@ int viridian_hypercall(struct cpu_user_regs *regs)
          * A false return means that another vcpu is currently trying
          * a similar operation, so back off.
          */
-        if ( !hvm_flush_vcpu_tlb(need_flush, &input_params.vcpu_mask) )
+        if ( !paging_flush_tlb(need_flush, &input_params.vcpu_mask) )
             return HVM_HCALL_preempted;
 
         output.rep_complete = input.rep_count;
diff --git a/xen/arch/x86/mm/hap/hap.c b/xen/arch/x86/mm/hap/hap.c
index 3d93f3451c..5616235bd8 100644
--- a/xen/arch/x86/mm/hap/hap.c
+++ b/xen/arch/x86/mm/hap/hap.c
@@ -669,6 +669,60 @@ static void hap_update_cr3(struct vcpu *v, int do_locking, bool noflush)
     hvm_update_guest_cr3(v, noflush);
 }
 
+static bool flush_tlb(bool (*flush_vcpu)(void *ctxt, struct vcpu *v),
+                      void *ctxt)
+{
+    static DEFINE_PER_CPU(cpumask_t, flush_cpumask);
+    cpumask_t *mask = &this_cpu(flush_cpumask);
+    struct domain *d = current->domain;
+    struct vcpu *v;
+
+    /* Avoid deadlock if more than one vcpu tries this at the same time. */
+    if ( !spin_trylock(&d->hypercall_deadlock_mutex) )
+        return false;
+
+    /* Pause all other vcpus. */
+    for_each_vcpu ( d, v )
+        if ( v != current && flush_vcpu(ctxt, v) )
+            vcpu_pause_nosync(v);
+
+    /* Now that all VCPUs are signalled to deschedule, we wait... */
+    for_each_vcpu ( d, v )
+        if ( v != current && flush_vcpu(ctxt, v) )
+            while ( !vcpu_runnable(v) && v->is_running )
+                cpu_relax();
+
+    /* All other vcpus are paused, safe to unlock now. */
+    spin_unlock(&d->hypercall_deadlock_mutex);
+
+    cpumask_clear(mask);
+
+    /* Flush paging-mode soft state (e.g., va->gfn cache; PAE PDPE cache). */
+    for_each_vcpu ( d, v )
+    {
+        unsigned int cpu;
+
+        if ( !flush_vcpu(ctxt, v) )
+            continue;
+
+        paging_update_cr3(v, false);
+
+        cpu = read_atomic(&v->dirty_cpu);
+        if ( is_vcpu_dirty_cpu(cpu) )
+            __cpumask_set_cpu(cpu, mask);
+    }
+
+    /* Flush TLBs on all CPUs with dirty vcpu state. */
+    flush_tlb_mask(mask);
+
+    /* Done. */
+    for_each_vcpu ( d, v )
+        if ( v != current && flush_vcpu(ctxt, v) )
+            vcpu_unpause(v);
+
+    return true;
+}
+
 const struct paging_mode *
 hap_paging_get_mode(struct vcpu *v)
 {
@@ -781,6 +835,7 @@ static const struct paging_mode hap_paging_real_mode = {
     .update_cr3             = hap_update_cr3,
     .update_paging_modes    = hap_update_paging_modes,
     .write_p2m_entry        = hap_write_p2m_entry,
+    .flush_tlb              = flush_tlb,
     .guest_levels           = 1
 };
 
@@ -792,6 +847,7 @@ static const struct paging_mode hap_paging_protected_mode = {
     .update_cr3             = hap_update_cr3,
     .update_paging_modes    = hap_update_paging_modes,
     .write_p2m_entry        = hap_write_p2m_entry,
+    .flush_tlb              = flush_tlb,
     .guest_levels           = 2
 };
 
@@ -803,6 +859,7 @@ static const struct paging_mode hap_paging_pae_mode = {
     .update_cr3             = hap_update_cr3,
     .update_paging_modes    = hap_update_paging_modes,
     .write_p2m_entry        = hap_write_p2m_entry,
+    .flush_tlb              = flush_tlb,
     .guest_levels           = 3
 };
 
@@ -814,6 +871,7 @@ static const struct paging_mode hap_paging_long_mode = {
     .update_cr3             = hap_update_cr3,
     .update_paging_modes    = hap_update_paging_modes,
     .write_p2m_entry        = hap_write_p2m_entry,
+    .flush_tlb              = flush_tlb,
     .guest_levels           = 4
 };
 
diff --git a/xen/arch/x86/mm/shadow/common.c b/xen/arch/x86/mm/shadow/common.c
index cba3ab1eba..121ddf1255 100644
--- a/xen/arch/x86/mm/shadow/common.c
+++ b/xen/arch/x86/mm/shadow/common.c
@@ -3357,6 +3357,61 @@ out:
     return rc;
 }
 
+/* Flush TLB of selected vCPUs. */
+bool shadow_flush_tlb(bool (*flush_vcpu)(void *ctxt, struct vcpu *v),
+                      void *ctxt)
+{
+    static DEFINE_PER_CPU(cpumask_t, flush_cpumask);
+    cpumask_t *mask = &this_cpu(flush_cpumask);
+    struct domain *d = current->domain;
+    struct vcpu *v;
+
+    /* Avoid deadlock if more than one vcpu tries this at the same time. */
+    if ( !spin_trylock(&d->hypercall_deadlock_mutex) )
+        return false;
+
+    /* Pause all other vcpus. */
+    for_each_vcpu ( d, v )
+        if ( v != current && flush_vcpu(ctxt, v) )
+            vcpu_pause_nosync(v);
+
+    /* Now that all VCPUs are signalled to deschedule, we wait... */
+    for_each_vcpu ( d, v )
+        if ( v != current && flush_vcpu(ctxt, v) )
+            while ( !vcpu_runnable(v) && v->is_running )
+                cpu_relax();
+
+    /* All other vcpus are paused, safe to unlock now. */
+    spin_unlock(&d->hypercall_deadlock_mutex);
+
+    cpumask_clear(mask);
+
+    /* Flush paging-mode soft state (e.g., va->gfn cache; PAE PDPE cache). */
+    for_each_vcpu ( d, v )
+    {
+        unsigned int cpu;
+
+        if ( !flush_vcpu(ctxt, v) )
+            continue;
+
+        paging_update_cr3(v, false);
+
+        cpu = read_atomic(&v->dirty_cpu);
+        if ( is_vcpu_dirty_cpu(cpu) )
+            __cpumask_set_cpu(cpu, mask);
+    }
+
+    /* Flush TLBs on all CPUs with dirty vcpu state. */
+    flush_tlb_mask(mask);
+
+    /* Done. */
+    for_each_vcpu ( d, v )
+        if ( v != current && flush_vcpu(ctxt, v) )
+            vcpu_unpause(v);
+
+    return true;
+}
+
 /**************************************************************************/
 /* Shadow-control XEN_DOMCTL dispatcher */
 
diff --git a/xen/arch/x86/mm/shadow/multi.c b/xen/arch/x86/mm/shadow/multi.c
index 26798b317c..b6afc0fba4 100644
--- a/xen/arch/x86/mm/shadow/multi.c
+++ b/xen/arch/x86/mm/shadow/multi.c
@@ -4873,6 +4873,7 @@ const struct paging_mode sh_paging_mode = {
     .update_cr3             = sh_update_cr3,
     .update_paging_modes    = shadow_update_paging_modes,
     .write_p2m_entry        = shadow_write_p2m_entry,
+    .flush_tlb              = shadow_flush_tlb,
     .guest_levels           = GUEST_PAGING_LEVELS,
     .shadow.detach_old_tables = sh_detach_old_tables,
 #ifdef CONFIG_PV
diff --git a/xen/arch/x86/mm/shadow/private.h b/xen/arch/x86/mm/shadow/private.h
index 3217777921..e8b028a365 100644
--- a/xen/arch/x86/mm/shadow/private.h
+++ b/xen/arch/x86/mm/shadow/private.h
@@ -814,6 +814,10 @@ static inline int sh_check_page_has_no_refs(struct page_info *page)
                    ((count & PGC_allocated) ? 1 : 0) );
 }
 
+/* Flush the TLB of the selected vCPUs. */
+bool shadow_flush_tlb(bool (*flush_vcpu)(void *ctxt, struct vcpu *v),
+                      void *ctxt);
+
 #endif /* _XEN_SHADOW_PRIVATE_H */
 
 /*
diff --git a/xen/include/asm-x86/hvm/hvm.h b/xen/include/asm-x86/hvm/hvm.h
index 24da824cbf..aae00a7860 100644
--- a/xen/include/asm-x86/hvm/hvm.h
+++ b/xen/include/asm-x86/hvm/hvm.h
@@ -334,9 +334,6 @@ const char *hvm_efer_valid(const struct vcpu *v, uint64_t value,
                            signed int cr0_pg);
 unsigned long hvm_cr4_guest_valid_bits(const struct domain *d, bool restore);
 
-bool hvm_flush_vcpu_tlb(bool (*flush_vcpu)(void *ctxt, struct vcpu *v),
-                        void *ctxt);
-
 int hvm_copy_context_and_params(struct domain *src, struct domain *dst);
 
 #ifdef CONFIG_HVM
diff --git a/xen/include/asm-x86/paging.h b/xen/include/asm-x86/paging.h
index 7544f73121..051161481c 100644
--- a/xen/include/asm-x86/paging.h
+++ b/xen/include/asm-x86/paging.h
@@ -140,6 +140,9 @@ struct paging_mode {
                                             unsigned long gfn,
                                             l1_pgentry_t *p, l1_pgentry_t new,
                                             unsigned int level);
+    bool          (*flush_tlb             )(bool (*flush_vcpu)(void *ctxt,
+                                                               struct vcpu *v),
+                                            void *ctxt);
 
     unsigned int guest_levels;
 
@@ -397,6 +400,13 @@ static always_inline unsigned int paging_max_paddr_bits(const struct domain *d)
     return bits;
 }
 
+static inline bool paging_flush_tlb(bool (*flush_vcpu)(void *ctxt,
+                                                       struct vcpu *v),
+                                    void *ctxt)
+{
+    return paging_get_hostmode(current)->flush_tlb(flush_vcpu, ctxt);
+}
+
 #endif /* XEN_PAGING_H */
 
 /*
-- 
2.25.0
From nobody Sun Apr 28 20:11:48 2024
From: Roger Pau Monne
Date: Tue, 3 Mar 2020 18:20:43 +0100
Subject: [Xen-devel] [PATCH v6 3/6] x86/hap: improve hypervisor assisted guest TLB flush
Message-ID: <20200303172046.50569-4-roger.pau@citrix.com>
In-Reply-To: <20200303172046.50569-1-roger.pau@citrix.com>
Cc: Andrew Cooper, Wei Liu, George Dunlap, Jan Beulich, Roger Pau Monne
The current implementation of the hypervisor assisted flush for HAP is
extremely inefficient.

First of all, there is no need to call paging_update_cr3: the only
relevant part of that function when doing a flush is the vCPU ASID
flush, so call that function directly instead.

Since hvm_asid_flush_vcpu is now protected against concurrent callers
by using atomic operations, there is no longer any need to pause the
affected vCPUs.

Finally, the global TLB flush performed by flush_tlb_mask is also
unnecessary: since only the guest TLB state needs flushing, it is
enough to trigger a vmexit on the pCPUs currently holding any vCPU
state, as such a vmexit already performs an ASID/VPID update and thus
clears the guest TLB.
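The resulting shape of the flush, as a standalone sketch (all names
are illustrative stand-ins for hvm_asid_flush_vcpu, on_selected_cpus
and friends, not Xen's API):

    #include <stdbool.h>
    #include <stddef.h>

    struct vcpu_sk {
        struct vcpu_sk *next;
        unsigned int dirty_cpu;   /* pCPU currently holding live state */
        bool running;
        unsigned long generation; /* see patch 1/6 */
    };

    static void asid_tickle(struct vcpu_sk *v) { v->generation = 0; }

    static void ipi_cpus(unsigned long mask, void (*fn)(void *), void *arg)
    {
        (void)mask; (void)fn; (void)arg; /* stand-in for on_selected_cpus() */
    }

    static void noop(void *unused)
    {
        (void)unused; /* the vmexit caused by the IPI does all the work */
    }

    static void hap_flush_sketch(struct vcpu_sk *vcpus, unsigned int this_cpu)
    {
        unsigned long mask = 0;

        for ( struct vcpu_sk *v = vcpus; v; v = v->next )
        {
            asid_tickle(v);  /* atomic in Xen, so no pausing is needed */
            if ( v->running && v->dirty_cpu != this_cpu )
                mask |= 1UL << v->dirty_cpu; /* force a vmexit there */
        }

        /* Descheduled vCPUs pick up the fresh ASID on their next vmentry. */
        ipi_cpus(mask, noop, NULL);
    }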
Signed-off-by: Roger Pau Monné
Reviewed-by: Wei Liu
Reviewed-by: Jan Beulich
---
Changes since v5:
 - Remove custom synchronization as on_selected_cpus already take care
   of it.
 - s/handle_flush/dummy_flush/.
 - Update comment on dummy_flush helper.

Changes since v3:
 - s/do_flush/handle_flush/.
 - Add comment about handle_flush usage.
 - Fix VPID typo in comment.
---
 xen/arch/x86/mm/hap/hap.c | 46 +++++++++++++++------------------------
 1 file changed, 19 insertions(+), 27 deletions(-)

diff --git a/xen/arch/x86/mm/hap/hap.c b/xen/arch/x86/mm/hap/hap.c
index 5616235bd8..8dbbcc3676 100644
--- a/xen/arch/x86/mm/hap/hap.c
+++ b/xen/arch/x86/mm/hap/hap.c
@@ -669,32 +669,24 @@ static void hap_update_cr3(struct vcpu *v, int do_locking, bool noflush)
     hvm_update_guest_cr3(v, noflush);
 }
 
+/*
+ * Dummy function to use with on_selected_cpus in order to trigger a vmexit on
+ * selected pCPUs. When the VM resumes execution it will get a new ASID/VPID
+ * and thus a clean TLB.
+ */
+static void dummy_flush(void *data)
+{
+}
+
 static bool flush_tlb(bool (*flush_vcpu)(void *ctxt, struct vcpu *v),
                       void *ctxt)
 {
     static DEFINE_PER_CPU(cpumask_t, flush_cpumask);
     cpumask_t *mask = &this_cpu(flush_cpumask);
     struct domain *d = current->domain;
+    unsigned int this_cpu = smp_processor_id();
     struct vcpu *v;
 
-    /* Avoid deadlock if more than one vcpu tries this at the same time. */
-    if ( !spin_trylock(&d->hypercall_deadlock_mutex) )
-        return false;
-
-    /* Pause all other vcpus. */
-    for_each_vcpu ( d, v )
-        if ( v != current && flush_vcpu(ctxt, v) )
-            vcpu_pause_nosync(v);
-
-    /* Now that all VCPUs are signalled to deschedule, we wait... */
-    for_each_vcpu ( d, v )
-        if ( v != current && flush_vcpu(ctxt, v) )
-            while ( !vcpu_runnable(v) && v->is_running )
-                cpu_relax();
-
-    /* All other vcpus are paused, safe to unlock now. */
-    spin_unlock(&d->hypercall_deadlock_mutex);
-
     cpumask_clear(mask);
 
     /* Flush paging-mode soft state (e.g., va->gfn cache; PAE PDPE cache). */
@@ -705,20 +697,20 @@ static bool flush_tlb(bool (*flush_vcpu)(void *ctxt, struct vcpu *v),
         if ( !flush_vcpu(ctxt, v) )
             continue;
 
-        paging_update_cr3(v, false);
+        hvm_asid_flush_vcpu(v);
 
         cpu = read_atomic(&v->dirty_cpu);
-        if ( is_vcpu_dirty_cpu(cpu) )
+        if ( cpu != this_cpu && is_vcpu_dirty_cpu(cpu) )
             __cpumask_set_cpu(cpu, mask);
     }
 
-    /* Flush TLBs on all CPUs with dirty vcpu state. */
-    flush_tlb_mask(mask);
-
-    /* Done. */
-    for_each_vcpu ( d, v )
-        if ( v != current && flush_vcpu(ctxt, v) )
-            vcpu_unpause(v);
+    /*
+     * Trigger a vmexit on all pCPUs with dirty vCPU state in order to force an
+     * ASID/VPID change and hence accomplish a guest TLB flush. Note that vCPUs
+     * not currently running will already be flushed when scheduled because of
+     * the ASID tickle done in the loop above.
+     */
+    on_selected_cpus(mask, dummy_flush, mask, 0);
 
     return true;
 }
-- 
2.25.0
From nobody Sun Apr 28 20:11:48 2024
From: Roger Pau Monne
Date: Tue, 3 Mar 2020 18:20:44 +0100
Subject: [Xen-devel] [PATCH v6 4/6] x86/tlb: introduce a flush guests TLB flag
Message-ID: <20200303172046.50569-5-roger.pau@citrix.com>
In-Reply-To: <20200303172046.50569-1-roger.pau@citrix.com>
Cc: Wei Liu, Andrew Cooper, Tim Deegan, George Dunlap, Jan Beulich, Roger Pau Monne

Introduce a specific flag to request an HVM guest TLB flush, which is
an ASID/VPID tickle that forces a guest linear-to-guest-physical TLB
flush for all HVM guests.

This was previously done unconditionally in each pre_flush call, but
that is not required: HVM guests not using shadow paging don't require
linear TLB flushes, as Xen doesn't modify the guest page tables in
that case (i.e. when using HAP). Note that the shadow paging code
already takes care of issuing the necessary flushes when the shadow
page tables are modified.

In order to keep the previous behavior, modify all shadow code TLB
flushes to also flush the guest linear-to-physical TLB.

I haven't looked at each specific shadow code TLB flush to figure out
whether it actually requires a guest TLB flush or not, so there might
be room for improvement in that regard.
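The flag is meant to be OR'ed into the existing flush flags; a toy
dispatcher showing the split (illustrative values and names, see
asm-x86/flushtlb.h for the real ones):

    /* Illustrative subset of the flush flags. */
    #define FLUSH_TLB           0x0001  /* host TLB flush */
    #define FLUSH_HVM_ASID_CORE 0x4000  /* guest linear TLB via ASID/VPID */

    static void flush_local_sketch(unsigned int flags)
    {
        if ( flags & FLUSH_TLB )
            ; /* write CR3/INVPCID: flush host translations */

        if ( flags & FLUSH_HVM_ASID_CORE )
            ; /* tickle ASIDs/VPIDs: drop guest linear translations */
    }

    /* Shadow code always passes both bits, preserving prior behavior:
     *     flush_mask(mask, FLUSH_TLB | FLUSH_HVM_ASID_CORE);
     * HAP-only paths can now omit FLUSH_HVM_ASID_CORE entirely. */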
Signed-off-by: Roger Pau Monné
Reviewed-by: Wei Liu
Acked-by: Tim Deegan
---
Changes since v5:
 - Rename FLUSH_GUESTS_TLB to FLUSH_HVM_ASID_CORE.
 - Clarify commit message.
 - Define FLUSH_HVM_ASID_CORE to 0 when !CONFIG_HVM.
---
 xen/arch/x86/flushtlb.c         |  5 +++--
 xen/arch/x86/mm/shadow/common.c | 18 +++++++++---------
 xen/arch/x86/mm/shadow/hvm.c    |  2 +-
 xen/arch/x86/mm/shadow/multi.c  | 16 ++++++++--------
 xen/include/asm-x86/flushtlb.h  |  6 ++++++
 5 files changed, 27 insertions(+), 20 deletions(-)

diff --git a/xen/arch/x86/flushtlb.c b/xen/arch/x86/flushtlb.c
index 03f92c23dc..c1305c7e6b 100644
--- a/xen/arch/x86/flushtlb.c
+++ b/xen/arch/x86/flushtlb.c
@@ -59,8 +59,6 @@ static u32 pre_flush(void)
         raise_softirq(NEW_TLBFLUSH_CLOCK_PERIOD_SOFTIRQ);
 
  skip_clocktick:
-    hvm_flush_guest_tlbs();
-
     return t2;
 }
 
@@ -221,6 +219,9 @@ unsigned int flush_area_local(const void *va, unsigned int flags)
             do_tlb_flush();
     }
 
+    if ( flags & FLUSH_HVM_ASID_CORE )
+        hvm_flush_guest_tlbs();
+
     if ( flags & FLUSH_CACHE )
     {
         const struct cpuinfo_x86 *c = &current_cpu_data;
diff --git a/xen/arch/x86/mm/shadow/common.c b/xen/arch/x86/mm/shadow/common.c
index 121ddf1255..aa750eafae 100644
--- a/xen/arch/x86/mm/shadow/common.c
+++ b/xen/arch/x86/mm/shadow/common.c
@@ -363,7 +363,7 @@ static int oos_remove_write_access(struct vcpu *v, mfn_t gmfn,
     }
 
     if ( ftlb )
-        flush_tlb_mask(d->dirty_cpumask);
+        flush_mask(d->dirty_cpumask, FLUSH_TLB | FLUSH_HVM_ASID_CORE);
 
     return 0;
 }
@@ -939,7 +939,7 @@ static void _shadow_prealloc(struct domain *d, unsigned int pages)
                 /* See if that freed up enough space */
                 if ( d->arch.paging.shadow.free_pages >= pages )
                 {
-                    flush_tlb_mask(d->dirty_cpumask);
+                    flush_mask(d->dirty_cpumask, FLUSH_TLB | FLUSH_HVM_ASID_CORE);
                     return;
                 }
             }
@@ -993,7 +993,7 @@ static void shadow_blow_tables(struct domain *d)
                                pagetable_get_mfn(v->arch.shadow_table[i]), 0);
 
     /* Make sure everyone sees the unshadowings */
-    flush_tlb_mask(d->dirty_cpumask);
+    flush_mask(d->dirty_cpumask, FLUSH_TLB | FLUSH_HVM_ASID_CORE);
 }
 
 void shadow_blow_tables_per_domain(struct domain *d)
@@ -1102,7 +1102,7 @@ mfn_t shadow_alloc(struct domain *d,
         if ( unlikely(!cpumask_empty(&mask)) )
         {
             perfc_incr(shadow_alloc_tlbflush);
-            flush_tlb_mask(&mask);
+            flush_mask(&mask, FLUSH_TLB | FLUSH_HVM_ASID_CORE);
         }
         /* Now safe to clear the page for reuse */
         clear_domain_page(page_to_mfn(sp));
@@ -2290,7 +2290,7 @@ void sh_remove_shadows(struct domain *d, mfn_t gmfn, int fast, int all)
 
     /* Need to flush TLBs now, so that linear maps are safe next time we
      * take a fault. */
-    flush_tlb_mask(d->dirty_cpumask);
+    flush_mask(d->dirty_cpumask, FLUSH_TLB | FLUSH_HVM_ASID_CORE);
 
     paging_unlock(d);
 }
@@ -3005,7 +3005,7 @@ static void sh_unshadow_for_p2m_change(struct domain *d, unsigned long gfn,
         {
             sh_remove_all_shadows_and_parents(d, mfn);
             if ( sh_remove_all_mappings(d, mfn, _gfn(gfn)) )
-                flush_tlb_mask(d->dirty_cpumask);
+                flush_mask(d->dirty_cpumask, FLUSH_TLB | FLUSH_HVM_ASID_CORE);
         }
     }
 
@@ -3045,7 +3045,7 @@ static void sh_unshadow_for_p2m_change(struct domain *d, unsigned long gfn,
                 }
                 omfn = mfn_add(omfn, 1);
             }
-            flush_tlb_mask(&flushmask);
+            flush_mask(&flushmask, FLUSH_TLB | FLUSH_HVM_ASID_CORE);
 
             if ( npte )
                 unmap_domain_page(npte);
@@ -3332,7 +3332,7 @@ int shadow_track_dirty_vram(struct domain *d,
             }
         }
         if ( flush_tlb )
-            flush_tlb_mask(d->dirty_cpumask);
+            flush_mask(d->dirty_cpumask, FLUSH_TLB | FLUSH_HVM_ASID_CORE);
         goto out;
 
 out_sl1ma:
@@ -3402,7 +3402,7 @@ bool shadow_flush_tlb(bool (*flush_vcpu)(void *ctxt, struct vcpu *v),
     }
 
     /* Flush TLBs on all CPUs with dirty vcpu state. */
-    flush_tlb_mask(mask);
+    flush_mask(mask, FLUSH_TLB | FLUSH_HVM_ASID_CORE);
 
     /* Done. */
     for_each_vcpu ( d, v )
diff --git a/xen/arch/x86/mm/shadow/hvm.c b/xen/arch/x86/mm/shadow/hvm.c
index 1e6024c71f..509162cdce 100644
--- a/xen/arch/x86/mm/shadow/hvm.c
+++ b/xen/arch/x86/mm/shadow/hvm.c
@@ -591,7 +591,7 @@ static void validate_guest_pt_write(struct vcpu *v, mfn_t gmfn,
 
     if ( rc & SHADOW_SET_FLUSH )
         /* Need to flush TLBs to pick up shadow PT changes */
-        flush_tlb_mask(d->dirty_cpumask);
+        flush_mask(d->dirty_cpumask, FLUSH_TLB | FLUSH_HVM_ASID_CORE);
 
     if ( rc & SHADOW_SET_ERROR )
     {
diff --git a/xen/arch/x86/mm/shadow/multi.c b/xen/arch/x86/mm/shadow/multi.c
index b6afc0fba4..667fca96c7 100644
--- a/xen/arch/x86/mm/shadow/multi.c
+++ b/xen/arch/x86/mm/shadow/multi.c
@@ -3066,7 +3066,7 @@ static int sh_page_fault(struct vcpu *v,
         perfc_incr(shadow_rm_write_flush_tlb);
         smp_wmb();
         atomic_inc(&d->arch.paging.shadow.gtable_dirty_version);
-        flush_tlb_mask(d->dirty_cpumask);
+        flush_mask(d->dirty_cpumask, FLUSH_TLB | FLUSH_HVM_ASID_CORE);
     }
 
 #if (SHADOW_OPTIMIZATIONS & SHOPT_OUT_OF_SYNC)
@@ -3575,7 +3575,7 @@ static bool sh_invlpg(struct vcpu *v, unsigned long linear)
     if ( mfn_to_page(sl1mfn)->u.sh.type
          == SH_type_fl1_shadow )
     {
-        flush_tlb_local();
+        flush_local(FLUSH_TLB | FLUSH_HVM_ASID_CORE);
         return false;
     }
 
@@ -3810,7 +3810,7 @@ sh_update_linear_entries(struct vcpu *v)
          * table entry. But, without this change, it would fetch the wrong
          * value due to a stale TLB.
          */
-        flush_tlb_local();
+        flush_local(FLUSH_TLB | FLUSH_HVM_ASID_CORE);
     }
 }
 
@@ -4011,7 +4011,7 @@ sh_update_cr3(struct vcpu *v, int do_locking, bool noflush)
      * (old) shadow linear maps in the writeable mapping heuristics. */
 #if GUEST_PAGING_LEVELS == 2
     if ( sh_remove_write_access(d, gmfn, 2, 0) != 0 )
-        flush_tlb_mask(d->dirty_cpumask);
+        flush_mask(d->dirty_cpumask, FLUSH_TLB | FLUSH_HVM_ASID_CORE);
     sh_set_toplevel_shadow(v, 0, gmfn, SH_type_l2_shadow);
 #elif GUEST_PAGING_LEVELS == 3
     /* PAE guests have four shadow_table entries, based on the
@@ -4035,7 +4035,7 @@ sh_update_cr3(struct vcpu *v, int do_locking, bool noflush)
             }
         }
         if ( flush )
-            flush_tlb_mask(d->dirty_cpumask);
+            flush_mask(d->dirty_cpumask, FLUSH_TLB | FLUSH_HVM_ASID_CORE);
         /* Now install the new shadows. */
         for ( i = 0; i < 4; i++ )
         {
@@ -4056,7 +4056,7 @@ sh_update_cr3(struct vcpu *v, int do_locking, bool noflush)
     }
 #elif GUEST_PAGING_LEVELS == 4
     if ( sh_remove_write_access(d, gmfn, 4, 0) != 0 )
-        flush_tlb_mask(d->dirty_cpumask);
+        flush_mask(d->dirty_cpumask, FLUSH_TLB | FLUSH_HVM_ASID_CORE);
     sh_set_toplevel_shadow(v, 0, gmfn, SH_type_l4_shadow);
     if ( !shadow_mode_external(d) && !is_pv_32bit_domain(d) )
     {
@@ -4502,7 +4502,7 @@ static void sh_pagetable_dying(paddr_t gpa)
             }
         }
         if ( flush )
-            flush_tlb_mask(d->dirty_cpumask);
+            flush_mask(d->dirty_cpumask, FLUSH_TLB | FLUSH_HVM_ASID_CORE);
 
     /* Remember that we've seen the guest use this interface, so we
      * can rely on it using it in future, instead of guessing at
@@ -4539,7 +4539,7 @@ static void sh_pagetable_dying(paddr_t gpa)
         mfn_to_page(gmfn)->pagetable_dying = true;
         shadow_unhook_mappings(d, smfn, 1/* user pages only */);
         /* Now flush the TLB: we removed toplevel mappings. */
-        flush_tlb_mask(d->dirty_cpumask);
+        flush_mask(d->dirty_cpumask, FLUSH_TLB | FLUSH_HVM_ASID_CORE);
     }
 
     /* Remember that we've seen the guest use this interface, so we
diff --git a/xen/include/asm-x86/flushtlb.h b/xen/include/asm-x86/flushtlb.h
index 2cfe4e6e97..579dc56803 100644
--- a/xen/include/asm-x86/flushtlb.h
+++ b/xen/include/asm-x86/flushtlb.h
@@ -105,6 +105,12 @@ void switch_cr3_cr4(unsigned long cr3, unsigned long cr4);
 #define FLUSH_VCPU_STATE 0x1000
 /* Flush the per-cpu root page table */
 #define FLUSH_ROOT_PGTBL 0x2000
+#if CONFIG_HVM
+/* Flush all HVM guests linear TLB (using ASID/VPID) */
+#define FLUSH_HVM_ASID_CORE 0x4000
+#else
+#define FLUSH_HVM_ASID_CORE 0
+#endif
 
 /* Flush local TLBs/caches. */
 unsigned int flush_area_local(const void *va, unsigned int flags);
-- 
2.25.0
From nobody Sun Apr 28 20:11:48 2024
From: Roger Pau Monne
Date: Tue, 3 Mar 2020 18:20:45 +0100
Subject: [Xen-devel] [PATCH v6 5/6] x86/tlb: allow disabling the TLB clock
Message-ID: <20200303172046.50569-6-roger.pau@citrix.com>
In-Reply-To: <20200303172046.50569-1-roger.pau@citrix.com>
Cc: Andrew Cooper, Wei Liu, Jan Beulich, Roger Pau Monne
The TLB clock is helpful when running Xen on bare metal, because when
doing a TLB flush each CPU is IPI'ed and can keep a timestamp of the
last flush. This is not the case however when Xen runs virtualized and
the underlying hypervisor provides mechanisms to assist in performing
TLB flushes: Xen itself, for example, offers an HVMOP_flush_tlbs
hypercall that performs a TLB flush without having to IPI each CPU.
When using such mechanisms it is no longer possible to keep a
timestamp of the flushes on each CPU, as they are performed by the
underlying hypervisor.

Offer a boolean to signal Xen that the timestamped TLB shouldn't be
used. This avoids keeping the timestamps of the flushes, and also
forces NEED_FLUSH to always return true.

No functional change intended, as this change doesn't introduce any
user that disables the timestamped TLB.
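The mechanism can be seen in miniature below (a compilable sketch
mirroring the NEED_FLUSH logic, simplified from Xen's headers):

    #include <stdbool.h>
    #include <stdint.h>

    static bool tlb_clk_enabled = true; /* cleared when flushes are assisted */
    static uint32_t tlbflush_clock = 1;

    static uint32_t tlbflush_current_time(void)
    {
        /* Returning 0 forces every staleness check to request a flush. */
        return tlb_clk_enabled ? tlbflush_clock : 0;
    }

    /* Does this CPU (last flushed at cpu_stamp) need a flush for a page
     * last used at lastuse_stamp? */
    static bool need_flush(uint32_t cpu_stamp, uint32_t lastuse_stamp)
    {
        uint32_t curr = tlbflush_current_time();

        return curr == 0 ||
               (cpu_stamp <= lastuse_stamp && lastuse_stamp <= curr);
    }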
Signed-off-by: Roger Pau Monné
Reviewed-by: Wei Liu
Acked-by: Jan Beulich
---
 xen/arch/x86/flushtlb.c        | 19 +++++++++++++------
 xen/include/asm-x86/flushtlb.h | 17 ++++++++++++++++-
 2 files changed, 29 insertions(+), 7 deletions(-)

diff --git a/xen/arch/x86/flushtlb.c b/xen/arch/x86/flushtlb.c
index c1305c7e6b..3a70b6327a 100644
--- a/xen/arch/x86/flushtlb.c
+++ b/xen/arch/x86/flushtlb.c
@@ -32,6 +32,9 @@
 u32 tlbflush_clock = 1U;
 DEFINE_PER_CPU(u32, tlbflush_time);
 
+/* Signals whether the TLB flush clock is in use. */
+bool __read_mostly tlb_clk_enabled = true;
+
 /*
  * pre_flush(): Increment the virtual TLB-flush clock. Returns new clock value.
  *
@@ -82,12 +85,13 @@ static void post_flush(u32 t)
 static void do_tlb_flush(void)
 {
     unsigned long flags, cr4;
-    u32 t;
+    u32 t = 0;
 
     /* This non-reentrant function is sometimes called in interrupt context. */
     local_irq_save(flags);
 
-    t = pre_flush();
+    if ( tlb_clk_enabled )
+        t = pre_flush();
 
     if ( use_invpcid )
         invpcid_flush_all();
@@ -99,7 +103,8 @@ static void do_tlb_flush(void)
     else
         write_cr3(read_cr3());
 
-    post_flush(t);
+    if ( tlb_clk_enabled )
+        post_flush(t);
 
     local_irq_restore(flags);
 }
@@ -107,7 +112,7 @@ static void do_tlb_flush(void)
 void switch_cr3_cr4(unsigned long cr3, unsigned long cr4)
 {
     unsigned long flags, old_cr4;
-    u32 t;
+    u32 t = 0;
 
     /* Throughout this function we make this assumption: */
     ASSERT(!(cr4 & X86_CR4_PCIDE) || !(cr4 & X86_CR4_PGE));
@@ -115,7 +120,8 @@ void switch_cr3_cr4(unsigned long cr3, unsigned long cr4)
     /* This non-reentrant function is sometimes called in interrupt context. */
     local_irq_save(flags);
 
-    t = pre_flush();
+    if ( tlb_clk_enabled )
+        t = pre_flush();
 
     old_cr4 = read_cr4();
     ASSERT(!(old_cr4 & X86_CR4_PCIDE) || !(old_cr4 & X86_CR4_PGE));
@@ -167,7 +173,8 @@ void switch_cr3_cr4(unsigned long cr3, unsigned long cr4)
     if ( cr4 & X86_CR4_PCIDE )
         invpcid_flush_all_nonglobals();
 
-    post_flush(t);
+    if ( tlb_clk_enabled )
+        post_flush(t);
 
     local_irq_restore(flags);
 }
diff --git a/xen/include/asm-x86/flushtlb.h b/xen/include/asm-x86/flushtlb.h
index 579dc56803..724455ae0c 100644
--- a/xen/include/asm-x86/flushtlb.h
+++ b/xen/include/asm-x86/flushtlb.h
@@ -21,10 +21,21 @@ extern u32 tlbflush_clock;
 /* Time at which each CPU's TLB was last flushed. */
 DECLARE_PER_CPU(u32, tlbflush_time);
 
-#define tlbflush_current_time() tlbflush_clock
+/* TLB clock is in use. */
+extern bool tlb_clk_enabled;
+
+static inline uint32_t tlbflush_current_time(void)
+{
+    /* Returning 0 from tlbflush_current_time will always force a flush. */
+    return tlb_clk_enabled ? tlbflush_clock : 0;
+}
 
 static inline void page_set_tlbflush_timestamp(struct page_info *page)
 {
+    /* Avoid the write if the TLB clock is disabled. */
+    if ( !tlb_clk_enabled )
+        return;
+
     /*
      * Prevent storing a stale time stamp, which could happen if an update
      * to tlbflush_clock plus a subsequent flush IPI happen between the
@@ -67,6 +78,10 @@ static inline void tlbflush_filter(cpumask_t *mask, uint32_t page_timestamp)
 {
     unsigned int cpu;
 
+    /* Short-circuit: there's no need to iterate if the clock is disabled. */
+    if ( !tlb_clk_enabled )
+        return;
+
     for_each_cpu ( cpu, mask )
         if ( !NEED_FLUSH(per_cpu(tlbflush_time, cpu), page_timestamp) )
             __cpumask_clear_cpu(cpu, mask);
-- 
2.25.0
From nobody Sun Apr 28 20:11:48 2024
From: Roger Pau Monne
Date: Tue, 3 Mar 2020 18:20:46 +0100
Message-ID: <20200303172046.50569-7-roger.pau@citrix.com>
In-Reply-To: <20200303172046.50569-1-roger.pau@citrix.com>
Subject: [Xen-devel] [PATCH v6 6/6] x86/tlb: use Xen L0 assisted TLB flush when available
Cc: Andrew Cooper, Wei Liu, Jan Beulich, Roger Pau Monne

Use Xen's L0 HVMOP_flush_tlbs hypercall in order to perform flushes.
This greatly increases the performance of TLB flushes when running
with a high number of vCPUs as a Xen guest, and is especially
important when running in shim mode.

The following figures are from a PV guest running `make -j32 xen` in
shim mode with 32 vCPUs and HAP.

Using x2APIC and ALLBUT shorthand:
real	4m35.973s
user	4m35.110s
sys	36m24.117s

Using L0 assisted flush:
real	1m2.596s
user	4m34.818s
sys	5m16.374s

The implementation adds a new hook to hypervisor_ops, so other
enlightenments can also implement such an assisted flush simply by
filling in the hook (a hypothetical example of such a backend is
sketched after the diffstat below). Note that the Xen implementation
completely ignores the dirty CPU mask and the linear address passed
in, and always performs a global TLB flush on all vCPUs; this is a
limitation of the hypercall provided by Xen. Also note that local TLB
flushes are not performed using the assisted TLB flush, only remote
ones.

Signed-off-by: Roger Pau Monné
Reviewed-by: Wei Liu
Reviewed-by: Jan Beulich
---
Changes since v5:
 - Clarify commit message.
 - Test for assisted flush at setup, do this for all hypervisors.
 - Return EOPNOTSUPP if assisted flush is not available.

Changes since v4:
 - Adjust order calculation.

Changes since v3:
 - Use an alternative call for the flush hook.

Changes since v1:
 - Add an L0 assisted hook to hypervisor ops.
---
 xen/arch/x86/guest/hypervisor.c        | 14 ++++++++++++++
 xen/arch/x86/guest/xen/xen.c           |  6 ++++++
 xen/arch/x86/smp.c                     |  7 +++++++
 xen/include/asm-x86/guest/hypervisor.h | 17 +++++++++++++++++
 4 files changed, 44 insertions(+)
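The sketch below shows the shape of the hook contract, i.e. how another
enlightenment could in principle fill in the new hypervisor_ops member.
All some_hv_* names are invented for illustration; the only parts taken
from the patch are the hook signature, the mask/va/order parameters,
and the 0-on-success / -errno return convention that
hypervisor_flush_tlb() relies on:

/* Hypothetical backend: the some_hv_* helpers below do not exist. */
static int flush_tlb(const cpumask_t *mask, const void *va, unsigned int order)
{
    /*
     * A non-zero return makes flush_area_mask() fall back to the IPI
     * path, so backends that cannot honour a request simply decline.
     */
    if ( !some_hv_can_flush_remote() )
        return -EOPNOTSUPP;

    /*
     * A backend may flush more than asked for (the Xen implementation
     * ignores @mask and @va entirely), but never less.
     */
    return some_hv_flush_hypercall(mask, va, order);
}

static const struct hypervisor_ops __initconstrel ops = {
    .name = "SomeHypervisor",
    /* ... other hooks ... */
    .flush_tlb = flush_tlb,   /* reached via alternative_call() */
};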
diff --git a/xen/arch/x86/guest/hypervisor.c b/xen/arch/x86/guest/hypervisor.c
index 647cdb1367..e46de42ded 100644
--- a/xen/arch/x86/guest/hypervisor.c
+++ b/xen/arch/x86/guest/hypervisor.c
@@ -18,6 +18,7 @@
  *
  * Copyright (c) 2019 Microsoft.
  */
+#include <xen/cpumask.h>
 #include <xen/init.h>
 #include <xen/types.h>
 
@@ -51,6 +52,10 @@ void __init hypervisor_setup(void)
 {
     if ( ops.setup )
         ops.setup();
+
+    /* Check if assisted flush is available and disable the TLB clock if so. */
+    if ( !hypervisor_flush_tlb(cpumask_of(smp_processor_id()), NULL, 0) )
+        tlb_clk_enabled = false;
 }
 
 int hypervisor_ap_setup(void)
@@ -73,6 +78,15 @@ void __init hypervisor_e820_fixup(struct e820map *e820)
         ops.e820_fixup(e820);
 }
 
+int hypervisor_flush_tlb(const cpumask_t *mask, const void *va,
+                         unsigned int order)
+{
+    if ( ops.flush_tlb )
+        return alternative_call(ops.flush_tlb, mask, va, order);
+
+    return -EOPNOTSUPP;
+}
+
 /*
  * Local variables:
  * mode: C
diff --git a/xen/arch/x86/guest/xen/xen.c b/xen/arch/x86/guest/xen/xen.c
index e74fd1e995..3bc01c8723 100644
--- a/xen/arch/x86/guest/xen/xen.c
+++ b/xen/arch/x86/guest/xen/xen.c
@@ -324,12 +324,18 @@ static void __init e820_fixup(struct e820map *e820)
         pv_shim_fixup_e820(e820);
 }
 
+static int flush_tlb(const cpumask_t *mask, const void *va, unsigned int order)
+{
+    return xen_hypercall_hvm_op(HVMOP_flush_tlbs, NULL);
+}
+
 static const struct hypervisor_ops __initconstrel ops = {
     .name = "Xen",
     .setup = setup,
     .ap_setup = ap_setup,
     .resume = resume,
     .e820_fixup = e820_fixup,
+    .flush_tlb = flush_tlb,
 };
 
 const struct hypervisor_ops *__init xg_probe(void)
diff --git a/xen/arch/x86/smp.c b/xen/arch/x86/smp.c
index bcead5d01b..1d9fec65de 100644
--- a/xen/arch/x86/smp.c
+++ b/xen/arch/x86/smp.c
@@ -15,6 +15,7 @@
 #include <xen/perfc.h>
 #include <xen/spinlock.h>
 #include <asm/current.h>
+#include <asm/guest.h>
 #include <asm/smp.h>
 #include <asm/mc146818rtc.h>
 #include <asm/flushtlb.h>
@@ -268,6 +269,12 @@ void flush_area_mask(const cpumask_t *mask, const void *va, unsigned int flags)
     if ( (flags & ~FLUSH_ORDER_MASK) &&
          !cpumask_subset(mask, cpumask_of(cpu)) )
     {
+        if ( cpu_has_hypervisor &&
+             !(flags & ~(FLUSH_TLB | FLUSH_TLB_GLOBAL | FLUSH_VA_VALID |
+                         FLUSH_ORDER_MASK)) &&
+             !hypervisor_flush_tlb(mask, va, (flags - 1) & FLUSH_ORDER_MASK) )
+            return;
+
         spin_lock(&flush_lock);
         cpumask_and(&flush_cpumask, mask, &cpu_online_map);
         cpumask_clear_cpu(cpu, &flush_cpumask);
diff --git a/xen/include/asm-x86/guest/hypervisor.h b/xen/include/asm-x86/guest/hypervisor.h
index ade10e74ea..77a1d21824 100644
--- a/xen/include/asm-x86/guest/hypervisor.h
+++ b/xen/include/asm-x86/guest/hypervisor.h
@@ -19,6 +19,8 @@
 #ifndef __X86_HYPERVISOR_H__
 #define __X86_HYPERVISOR_H__
 
+#include <xen/cpumask.h>
+
 #include <asm/e820.h>
 
 struct hypervisor_ops {
@@ -32,6 +34,8 @@ struct hypervisor_ops {
     void (*resume)(void);
     /* Fix up e820 map */
     void (*e820_fixup)(struct e820map *e820);
+    /* L0 assisted TLB flush */
+    int (*flush_tlb)(const cpumask_t *mask, const void *va, unsigned int order);
 };
 
 #ifdef CONFIG_GUEST
@@ -41,6 +45,14 @@ void hypervisor_setup(void);
 int hypervisor_ap_setup(void);
 void hypervisor_resume(void);
 void hypervisor_e820_fixup(struct e820map *e820);
+/*
+ * L0 assisted TLB flush.
+ * mask: cpumask of the dirty vCPUs that should be flushed.
+ * va: linear address to flush, or NULL for global flushes.
+ * order: order of the linear address pointed by va.
+ */
+int hypervisor_flush_tlb(const cpumask_t *mask, const void *va,
+                         unsigned int order);
 
 #else
 
@@ -52,6 +64,11 @@ static inline void hypervisor_setup(void) { ASSERT_UNREACHABLE(); }
 static inline int hypervisor_ap_setup(void) { return 0; }
 static inline void hypervisor_resume(void) { ASSERT_UNREACHABLE(); }
 static inline void hypervisor_e820_fixup(struct e820map *e820) {}
+static inline int hypervisor_flush_tlb(const cpumask_t *mask, const void *va,
+                                       unsigned int order)
+{
+    return -EOPNOTSUPP;
+}
 
 #endif /* CONFIG_GUEST */
 
-- 
2.25.0
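A closing note on the order computation in the smp.c hunk above: Xen
encodes a flush order into the low byte of the flags as order + 1
(FLUSH_ORDER(x) is ((x) + 1) in asm/flushtlb.h, so "no order given" is
distinguishable from order 0), which is why the assisted path decodes
it as (flags - 1) & FLUSH_ORDER_MASK, matching what flush_area_local
already does. A minimal standalone check of this round trip follows;
the FLUSH_TLB value is invented here purely as a stand-in flag bit:

#include <assert.h>

#define FLUSH_ORDER_MASK 0xff            /* low byte carries the order */
#define FLUSH_ORDER(x)   ((x) + 1)       /* encode: order 0 becomes 1 */
#define FLUSH_TLB        0x100           /* illustrative flag bit only */

int main(void)
{
    /* Request a TLB flush of a 2^2-page region, as a caller would. */
    unsigned int flags = FLUSH_TLB | FLUSH_ORDER(2);

    /* Decoding inverts the +1; masking keeps flag bits out of the result. */
    assert(((flags - 1) & FLUSH_ORDER_MASK) == 2);

    return 0;
}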