From: Roger Pau Monne
Date: Wed, 19 Feb 2020 18:43:48 +0100
Subject: [Xen-devel] [PATCH v5 1/7] x86/hvm: allow ASID flush when v != current
Message-ID: <20200219174354.84726-2-roger.pau@citrix.com>
In-Reply-To: <20200219174354.84726-1-roger.pau@citrix.com>
Cc: Andrew Cooper, Wei Liu, Jan Beulich, Roger Pau Monne

The current implementation of hvm_asid_flush_vcpu is not safe to use unless
the target vCPU is either paused or the currently running one, as it
modifies the generation field without any locking.

Fix this by using atomic operations when accessing the generation field,
both in hvm_asid_flush_vcpu_asid and the other ASID functions. This allows
the current ASID generation to be flushed safely. Note that if the vCPU is
currently running, a vmexit is required for the flush to take effect.

The same could be achieved by adding an extra field to hvm_vcpu_asid that
signals hvm_asid_handle_vmenter to call hvm_asid_flush_vcpu on the given
vCPU before vmentry. This seems unnecessary, however: hvm_asid_flush_vcpu
itself only sets two vCPU fields to 0, so there is no need to delay the
work until the vmentry ASID helper runs.

This is not a bugfix, as no callers that violate the assumptions listed in
the first paragraph have been found, but a preparatory change to allow
remote flushing of HVM vCPUs.

Signed-off-by: Roger Pau Monné
Reviewed-by: Wei Liu
Acked-by: Jan Beulich
---
 xen/arch/x86/hvm/asid.c | 6 +++---
 1 file changed, 3 insertions(+), 3 deletions(-)

diff --git a/xen/arch/x86/hvm/asid.c b/xen/arch/x86/hvm/asid.c
index 8e00a28443..63ce462d56 100644
--- a/xen/arch/x86/hvm/asid.c
+++ b/xen/arch/x86/hvm/asid.c
@@ -83,7 +83,7 @@ void hvm_asid_init(int nasids)
 
 void hvm_asid_flush_vcpu_asid(struct hvm_vcpu_asid *asid)
 {
-    asid->generation = 0;
+    write_atomic(&asid->generation, 0);
 }
 
 void hvm_asid_flush_vcpu(struct vcpu *v)
@@ -121,7 +121,7 @@ bool_t hvm_asid_handle_vmenter(struct hvm_vcpu_asid *asid)
         goto disabled;
 
     /* Test if VCPU has valid ASID. */
-    if ( asid->generation == data->core_asid_generation )
+    if ( read_atomic(&asid->generation) == data->core_asid_generation )
         return 0;
 
     /* If there are no free ASIDs, need to go to a new generation */
@@ -135,7 +135,7 @@ bool_t hvm_asid_handle_vmenter(struct hvm_vcpu_asid *asid)
 
     /* Now guaranteed to be a free ASID. */
     asid->asid = data->next_asid++;
-    asid->generation = data->core_asid_generation;
+    write_atomic(&asid->generation, data->core_asid_generation);
 
     /*
      * When we assign ASID 1, flush all TLB entries as we are starting a new
-- 
2.25.0
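To make the intended use concrete: the atomic accesses allow the ASID of a
vCPU to be tickled from a remote pCPU without pausing it. The sketch below
is illustrative only and not part of the series; the wrapper name
flush_domain_asids is made up, while hvm_asid_flush_vcpu and for_each_vcpu
are the existing Xen helpers.

/*
 * Sketch only: flush the ASID of every vCPU of a domain from a remote pCPU.
 * The flush takes effect the next time each vCPU passes through
 * hvm_asid_handle_vmenter(), which now reads the generation with
 * read_atomic() and can therefore race safely with this writer.
 */
static void flush_domain_asids(struct domain *d)
{
    struct vcpu *v;

    for_each_vcpu ( d, v )
        hvm_asid_flush_vcpu(v); /* safe even if v is running elsewhere */
}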
From: Roger Pau Monne
Date: Wed, 19 Feb 2020 18:43:49 +0100
Subject: [Xen-devel] [PATCH v5 2/7] x86/paging: add TLB flush hooks
Message-ID: <20200219174354.84726-3-roger.pau@citrix.com>
In-Reply-To: <20200219174354.84726-1-roger.pau@citrix.com>
Cc: Wei Liu, George Dunlap, Andrew Cooper, Tim Deegan, Jan Beulich, Roger Pau Monne

Add shadow and HAP implementation-specific helpers to perform guest TLB
flushes. Note that the code for both is exactly the same at the moment, and
is copied from hvm_flush_vcpu_tlb. Further patches will add
implementation-specific optimizations to each helper.

No functional change intended.

Signed-off-by: Roger Pau Monné
Reviewed-by: Wei Liu
Acked-by: Tim Deegan
---
Changes since v3:
 - Fix stray newline removal.
 - Fix return of shadow_flush_tlb dummy function.
---
 xen/arch/x86/hvm/hvm.c          | 51 ++----------------------------
 xen/arch/x86/mm/hap/hap.c       | 54 ++++++++++++++++++++++++++++++++
 xen/arch/x86/mm/shadow/common.c | 55 +++++++++++++++++++++++++++++++++
 xen/include/asm-x86/hap.h       |  3 ++
 xen/include/asm-x86/shadow.h    | 12 +++++++
 5 files changed, 127 insertions(+), 48 deletions(-)

diff --git a/xen/arch/x86/hvm/hvm.c b/xen/arch/x86/hvm/hvm.c
index 00a9e70b7c..4049f57232 100644
--- a/xen/arch/x86/hvm/hvm.c
+++ b/xen/arch/x86/hvm/hvm.c
@@ -3990,55 +3990,10 @@ static void hvm_s3_resume(struct domain *d)
 bool hvm_flush_vcpu_tlb(bool (*flush_vcpu)(void *ctxt, struct vcpu *v),
                         void *ctxt)
 {
-    static DEFINE_PER_CPU(cpumask_t, flush_cpumask);
-    cpumask_t *mask = &this_cpu(flush_cpumask);
-    struct domain *d = current->domain;
-    struct vcpu *v;
-
-    /* Avoid deadlock if more than one vcpu tries this at the same time. */
-    if ( !spin_trylock(&d->hypercall_deadlock_mutex) )
-        return false;
-
-    /* Pause all other vcpus. */
-    for_each_vcpu ( d, v )
-        if ( v != current && flush_vcpu(ctxt, v) )
-            vcpu_pause_nosync(v);
-
-    /* Now that all VCPUs are signalled to deschedule, we wait... */
-    for_each_vcpu ( d, v )
-        if ( v != current && flush_vcpu(ctxt, v) )
-            while ( !vcpu_runnable(v) && v->is_running )
-                cpu_relax();
-
-    /* All other vcpus are paused, safe to unlock now. */
-    spin_unlock(&d->hypercall_deadlock_mutex);
-
-    cpumask_clear(mask);
-
-    /* Flush paging-mode soft state (e.g., va->gfn cache; PAE PDPE cache). */
-    for_each_vcpu ( d, v )
-    {
-        unsigned int cpu;
-
-        if ( !flush_vcpu(ctxt, v) )
-            continue;
-
-        paging_update_cr3(v, false);
+    struct domain *currd = current->domain;
 
-        cpu = read_atomic(&v->dirty_cpu);
-        if ( is_vcpu_dirty_cpu(cpu) )
-            __cpumask_set_cpu(cpu, mask);
-    }
-
-    /* Flush TLBs on all CPUs with dirty vcpu state. */
-    flush_tlb_mask(mask);
-
-    /* Done. */
-    for_each_vcpu ( d, v )
-        if ( v != current && flush_vcpu(ctxt, v) )
-            vcpu_unpause(v);
-
-    return true;
+    return shadow_mode_enabled(currd) ? shadow_flush_tlb(flush_vcpu, ctxt)
+                                      : hap_flush_tlb(flush_vcpu, ctxt);
 }
 
 static bool always_flush(void *ctxt, struct vcpu *v)
diff --git a/xen/arch/x86/mm/hap/hap.c b/xen/arch/x86/mm/hap/hap.c
index 3d93f3451c..6894c1aa38 100644
--- a/xen/arch/x86/mm/hap/hap.c
+++ b/xen/arch/x86/mm/hap/hap.c
@@ -669,6 +669,60 @@ static void hap_update_cr3(struct vcpu *v, int do_locking, bool noflush)
     hvm_update_guest_cr3(v, noflush);
 }
 
+bool hap_flush_tlb(bool (*flush_vcpu)(void *ctxt, struct vcpu *v),
+                   void *ctxt)
+{
+    static DEFINE_PER_CPU(cpumask_t, flush_cpumask);
+    cpumask_t *mask = &this_cpu(flush_cpumask);
+    struct domain *d = current->domain;
+    struct vcpu *v;
+
+    /* Avoid deadlock if more than one vcpu tries this at the same time. */
+    if ( !spin_trylock(&d->hypercall_deadlock_mutex) )
+        return false;
+
+    /* Pause all other vcpus. */
+    for_each_vcpu ( d, v )
+        if ( v != current && flush_vcpu(ctxt, v) )
+            vcpu_pause_nosync(v);
+
+    /* Now that all VCPUs are signalled to deschedule, we wait... */
+    for_each_vcpu ( d, v )
+        if ( v != current && flush_vcpu(ctxt, v) )
+            while ( !vcpu_runnable(v) && v->is_running )
+                cpu_relax();
+
+    /* All other vcpus are paused, safe to unlock now. */
+    spin_unlock(&d->hypercall_deadlock_mutex);
+
+    cpumask_clear(mask);
+
+    /* Flush paging-mode soft state (e.g., va->gfn cache; PAE PDPE cache). */
+    for_each_vcpu ( d, v )
+    {
+        unsigned int cpu;
+
+        if ( !flush_vcpu(ctxt, v) )
+            continue;
+
+        paging_update_cr3(v, false);
+
+        cpu = read_atomic(&v->dirty_cpu);
+        if ( is_vcpu_dirty_cpu(cpu) )
+            __cpumask_set_cpu(cpu, mask);
+    }
+
+    /* Flush TLBs on all CPUs with dirty vcpu state. */
+    flush_tlb_mask(mask);
+
+    /* Done. */
+    for_each_vcpu ( d, v )
+        if ( v != current && flush_vcpu(ctxt, v) )
+            vcpu_unpause(v);
+
+    return true;
+}
+
 const struct paging_mode *
 hap_paging_get_mode(struct vcpu *v)
 {
diff --git a/xen/arch/x86/mm/shadow/common.c b/xen/arch/x86/mm/shadow/common.c
index cba3ab1eba..121ddf1255 100644
--- a/xen/arch/x86/mm/shadow/common.c
+++ b/xen/arch/x86/mm/shadow/common.c
@@ -3357,6 +3357,61 @@ out:
     return rc;
 }
 
+/* Flush TLB of selected vCPUs. */
+bool shadow_flush_tlb(bool (*flush_vcpu)(void *ctxt, struct vcpu *v),
+                      void *ctxt)
+{
+    static DEFINE_PER_CPU(cpumask_t, flush_cpumask);
+    cpumask_t *mask = &this_cpu(flush_cpumask);
+    struct domain *d = current->domain;
+    struct vcpu *v;
+
+    /* Avoid deadlock if more than one vcpu tries this at the same time. */
+    if ( !spin_trylock(&d->hypercall_deadlock_mutex) )
+        return false;
+
+    /* Pause all other vcpus. */
+    for_each_vcpu ( d, v )
+        if ( v != current && flush_vcpu(ctxt, v) )
+            vcpu_pause_nosync(v);
+
+    /* Now that all VCPUs are signalled to deschedule, we wait... */
+    for_each_vcpu ( d, v )
+        if ( v != current && flush_vcpu(ctxt, v) )
+            while ( !vcpu_runnable(v) && v->is_running )
+                cpu_relax();
+
+    /* All other vcpus are paused, safe to unlock now. */
+    spin_unlock(&d->hypercall_deadlock_mutex);
+
+    cpumask_clear(mask);
+
+    /* Flush paging-mode soft state (e.g., va->gfn cache; PAE PDPE cache). */
+    for_each_vcpu ( d, v )
+    {
+        unsigned int cpu;
+
+        if ( !flush_vcpu(ctxt, v) )
+            continue;
+
+        paging_update_cr3(v, false);
+
+        cpu = read_atomic(&v->dirty_cpu);
+        if ( is_vcpu_dirty_cpu(cpu) )
+            __cpumask_set_cpu(cpu, mask);
+    }
+
+    /* Flush TLBs on all CPUs with dirty vcpu state. */
+    flush_tlb_mask(mask);
+
+    /* Done. */
+    for_each_vcpu ( d, v )
+        if ( v != current && flush_vcpu(ctxt, v) )
+            vcpu_unpause(v);
+
+    return true;
+}
+
 /**************************************************************************/
 /* Shadow-control XEN_DOMCTL dispatcher */
 
diff --git a/xen/include/asm-x86/hap.h b/xen/include/asm-x86/hap.h
index b94bfb4ed0..0c6aa26b9b 100644
--- a/xen/include/asm-x86/hap.h
+++ b/xen/include/asm-x86/hap.h
@@ -46,6 +46,9 @@ int hap_track_dirty_vram(struct domain *d,
 extern const struct paging_mode *hap_paging_get_mode(struct vcpu *);
 int hap_set_allocation(struct domain *d, unsigned int pages, bool *preempted);
 
+bool hap_flush_tlb(bool (*flush_vcpu)(void *ctxt, struct vcpu *v),
+                   void *ctxt);
+
 #endif /* XEN_HAP_H */
 
 /*
diff --git a/xen/include/asm-x86/shadow.h b/xen/include/asm-x86/shadow.h
index 907c71f497..cfd4650a16 100644
--- a/xen/include/asm-x86/shadow.h
+++ b/xen/include/asm-x86/shadow.h
@@ -95,6 +95,10 @@ void shadow_blow_tables_per_domain(struct domain *d);
 int shadow_set_allocation(struct domain *d, unsigned int pages,
                           bool *preempted);
 
+/* Flush the TLB of the selected vCPUs. */
+bool shadow_flush_tlb(bool (*flush_vcpu)(void *ctxt, struct vcpu *v),
+                      void *ctxt);
+
 #else /* !CONFIG_SHADOW_PAGING */
 
 #define shadow_teardown(d, p) ASSERT(is_pv_domain(d))
@@ -106,6 +110,14 @@ int shadow_set_allocation(struct domain *d, unsigned int pages,
 #define shadow_set_allocation(d, pages, preempted) \
     ({ ASSERT_UNREACHABLE(); -EOPNOTSUPP; })
 
+static inline bool shadow_flush_tlb(bool (*flush_vcpu)(void *ctxt,
+                                                       struct vcpu *v),
+                                    void *ctxt)
+{
+    ASSERT_UNREACHABLE();
+    return false;
+}
+
 static inline void sh_remove_shadows(struct domain *d, mfn_t gmfn,
                                      int fast, int all) {}
 
-- 
2.25.0
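The new hooks keep hvm_flush_vcpu_tlb's callback-based vCPU selection:
flush_vcpu(ctxt, v) is invoked for every vCPU of the current domain, and
returning true includes that vCPU in the flush. The sketch below shows a
hypothetical bitmap-based selector; the helper names are invented for the
example, while hvm_flush_vcpu_tlb, test_bit and -ERESTART are existing Xen
interfaces.

/*
 * Sketch only: include a vCPU in the flush when it is named in a bitmap,
 * or when no bitmap is provided at all.
 */
static bool flush_if_selected(void *ctxt, struct vcpu *v)
{
    const unsigned long *vcpu_bitmap = ctxt;

    return !vcpu_bitmap || test_bit(v->vcpu_id, vcpu_bitmap);
}

/* Sketch only: a caller requests the flush and retries on lock contention. */
static int flush_domain_tlbs(unsigned long *vcpu_bitmap)
{
    return hvm_flush_vcpu_tlb(flush_if_selected, vcpu_bitmap) ? 0 : -ERESTART;
}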
From: Roger Pau Monne
Date: Wed, 19 Feb 2020 18:43:50 +0100
Subject: [Xen-devel] [PATCH v5 3/7] x86/hap: improve hypervisor assisted guest TLB flush
Message-ID: <20200219174354.84726-4-roger.pau@citrix.com>
In-Reply-To: <20200219174354.84726-1-roger.pau@citrix.com>
Cc: George Dunlap, Andrew Cooper, Wei Liu, Jan Beulich, Roger Pau Monne

The current implementation of the hypervisor assisted flush for HAP is
extremely inefficient.

First of all, there is no need to call paging_update_cr3: the only relevant
part of that function when doing a flush is the vCPU ASID flush, so just
call hvm_asid_flush_vcpu directly. Since hvm_asid_flush_vcpu is now
protected against concurrent callers by using atomic operations, there is
also no longer any need to pause the affected vCPUs.

Finally, the global TLB flush performed by flush_tlb_mask is not necessary
either. Since only the guest TLB state needs to be flushed, it is enough to
trigger a vmexit on the pCPUs currently holding any vCPU state, as such a
vmexit already performs an ASID/VPID update and thus clears the guest TLB.

Signed-off-by: Roger Pau Monné
Reviewed-by: Wei Liu
---
Changes since v3:
 - s/do_flush/handle_flush/.
 - Add comment about handle_flush usage.
 - Fix VPID typo in comment.
---
 xen/arch/x86/mm/hap/hap.c | 52 +++++++++++++++++++--------------------
 1 file changed, 25 insertions(+), 27 deletions(-)

diff --git a/xen/arch/x86/mm/hap/hap.c b/xen/arch/x86/mm/hap/hap.c
index 6894c1aa38..dbb61bf9c6 100644
--- a/xen/arch/x86/mm/hap/hap.c
+++ b/xen/arch/x86/mm/hap/hap.c
@@ -669,32 +669,28 @@ static void hap_update_cr3(struct vcpu *v, int do_locking, bool noflush)
     hvm_update_guest_cr3(v, noflush);
 }
 
+/*
+ * NB: doesn't actually perform any flush, used just to clear the CPU from the
+ * mask and hence signal that the guest TLB flush has been done.
+ */
+static void handle_flush(void *data)
+{
+    cpumask_t *mask = data;
+    unsigned int cpu = smp_processor_id();
+
+    ASSERT(cpumask_test_cpu(cpu, mask));
+    cpumask_clear_cpu(cpu, mask);
+}
+
 bool hap_flush_tlb(bool (*flush_vcpu)(void *ctxt, struct vcpu *v),
                    void *ctxt)
 {
     static DEFINE_PER_CPU(cpumask_t, flush_cpumask);
     cpumask_t *mask = &this_cpu(flush_cpumask);
     struct domain *d = current->domain;
+    unsigned int this_cpu = smp_processor_id();
     struct vcpu *v;
 
-    /* Avoid deadlock if more than one vcpu tries this at the same time. */
-    if ( !spin_trylock(&d->hypercall_deadlock_mutex) )
-        return false;
-
-    /* Pause all other vcpus. */
-    for_each_vcpu ( d, v )
-        if ( v != current && flush_vcpu(ctxt, v) )
-            vcpu_pause_nosync(v);
-
-    /* Now that all VCPUs are signalled to deschedule, we wait... */
-    for_each_vcpu ( d, v )
-        if ( v != current && flush_vcpu(ctxt, v) )
-            while ( !vcpu_runnable(v) && v->is_running )
-                cpu_relax();
-
-    /* All other vcpus are paused, safe to unlock now. */
-    spin_unlock(&d->hypercall_deadlock_mutex);
-
     cpumask_clear(mask);
 
     /* Flush paging-mode soft state (e.g., va->gfn cache; PAE PDPE cache). */
@@ -705,20 +701,22 @@ bool hap_flush_tlb(bool (*flush_vcpu)(void *ctxt, struct vcpu *v),
         if ( !flush_vcpu(ctxt, v) )
             continue;
 
-        paging_update_cr3(v, false);
+        hvm_asid_flush_vcpu(v);
 
         cpu = read_atomic(&v->dirty_cpu);
-        if ( is_vcpu_dirty_cpu(cpu) )
+        if ( cpu != this_cpu && is_vcpu_dirty_cpu(cpu) )
             __cpumask_set_cpu(cpu, mask);
     }
 
-    /* Flush TLBs on all CPUs with dirty vcpu state. */
-    flush_tlb_mask(mask);
-
-    /* Done. */
-    for_each_vcpu ( d, v )
-        if ( v != current && flush_vcpu(ctxt, v) )
-            vcpu_unpause(v);
+    /*
+     * Trigger a vmexit on all pCPUs with dirty vCPU state in order to force an
+     * ASID/VPID change and hence accomplish a guest TLB flush. Note that vCPUs
+     * not currently running will already be flushed when scheduled because of
+     * the ASID tickle done in the loop above.
+     */
+    on_selected_cpus(mask, handle_flush, mask, 0);
+    while ( !cpumask_empty(mask) )
+        cpu_relax();
 
     return true;
 }
-- 
2.25.0
From: Roger Pau Monne
Date: Wed, 19 Feb 2020 18:43:51 +0100
Subject: [Xen-devel] [PATCH v5 4/7] x86/tlb: introduce a flush guests TLB flag
Message-ID: <20200219174354.84726-5-roger.pau@citrix.com>
In-Reply-To: <20200219174354.84726-1-roger.pau@citrix.com>
Cc: Wei Liu, George Dunlap, Andrew Cooper, Tim Deegan, Jan Beulich, Roger Pau Monne

Introduce a specific flag to request an HVM guest TLB flush, which is an
ASID/VPID tickle that forces a linear TLB flush for all HVM guests.

This was previously done unconditionally in each pre_flush call, but that
is not required: HVM guests not using shadow don't require linear TLB
flushes, as Xen doesn't modify the guest page tables in that case (i.e.
when using HAP).

Modify all shadow code TLB flushes to also flush the guest TLB, in order to
keep the previous behaviour. I haven't looked at each specific shadow code
TLB flush to figure out whether it actually requires a guest TLB flush or
not, so there might be room for improvement in that regard.

Signed-off-by: Roger Pau Monné
Reviewed-by: Wei Liu
Acked-by: Tim Deegan
---
 xen/arch/x86/flushtlb.c         |  5 +++--
 xen/arch/x86/mm/shadow/common.c | 18 +++++++++---------
 xen/arch/x86/mm/shadow/hvm.c    |  2 +-
 xen/arch/x86/mm/shadow/multi.c  | 16 ++++++++--------
 xen/include/asm-x86/flushtlb.h  |  2 ++
 5 files changed, 23 insertions(+), 20 deletions(-)

diff --git a/xen/arch/x86/flushtlb.c b/xen/arch/x86/flushtlb.c
index 03f92c23dc..e7ccd4ec7b 100644
--- a/xen/arch/x86/flushtlb.c
+++ b/xen/arch/x86/flushtlb.c
@@ -59,8 +59,6 @@ static u32 pre_flush(void)
     raise_softirq(NEW_TLBFLUSH_CLOCK_PERIOD_SOFTIRQ);
 
  skip_clocktick:
-    hvm_flush_guest_tlbs();
-
     return t2;
 }
 
@@ -221,6 +219,9 @@ unsigned int flush_area_local(const void *va, unsigned int flags)
             do_tlb_flush();
     }
 
+    if ( flags & FLUSH_GUESTS_TLB )
+        hvm_flush_guest_tlbs();
+
     if ( flags & FLUSH_CACHE )
     {
         const struct cpuinfo_x86 *c = &current_cpu_data;
diff --git a/xen/arch/x86/mm/shadow/common.c b/xen/arch/x86/mm/shadow/common.c
index 121ddf1255..4847f24d3b 100644
--- a/xen/arch/x86/mm/shadow/common.c
+++ b/xen/arch/x86/mm/shadow/common.c
@@ -363,7 +363,7 @@ static int oos_remove_write_access(struct vcpu *v, mfn_t gmfn,
     }
 
     if ( ftlb )
-        flush_tlb_mask(d->dirty_cpumask);
+        flush_mask(d->dirty_cpumask, FLUSH_TLB | FLUSH_GUESTS_TLB);
 
     return 0;
 }
@@ -939,7 +939,7 @@ static void _shadow_prealloc(struct domain *d, unsigned int pages)
                 /* See if that freed up enough space */
                 if ( d->arch.paging.shadow.free_pages >= pages )
                 {
-                    flush_tlb_mask(d->dirty_cpumask);
+                    flush_mask(d->dirty_cpumask, FLUSH_TLB | FLUSH_GUESTS_TLB);
                     return;
                 }
             }
@@ -993,7 +993,7 @@ static void shadow_blow_tables(struct domain *d)
                                pagetable_get_mfn(v->arch.shadow_table[i]), 0);
 
     /* Make sure everyone sees the unshadowings */
-    flush_tlb_mask(d->dirty_cpumask);
+    flush_mask(d->dirty_cpumask, FLUSH_TLB | FLUSH_GUESTS_TLB);
 }
 
 void shadow_blow_tables_per_domain(struct domain *d)
@@ -1102,7 +1102,7 @@ mfn_t shadow_alloc(struct domain *d,
         if ( unlikely(!cpumask_empty(&mask)) )
         {
             perfc_incr(shadow_alloc_tlbflush);
-            flush_tlb_mask(&mask);
+            flush_mask(&mask, FLUSH_TLB | FLUSH_GUESTS_TLB);
         }
         /* Now safe to clear the page for reuse */
         clear_domain_page(page_to_mfn(sp));
@@ -2290,7 +2290,7 @@ void sh_remove_shadows(struct domain *d, mfn_t gmfn, int fast, int all)
 
     /* Need to flush TLBs now, so that linear maps are safe next time we
      * take a fault. */
-    flush_tlb_mask(d->dirty_cpumask);
+    flush_mask(d->dirty_cpumask, FLUSH_TLB | FLUSH_GUESTS_TLB);
 
     paging_unlock(d);
 }
@@ -3005,7 +3005,7 @@ static void sh_unshadow_for_p2m_change(struct domain *d, unsigned long gfn,
     {
         sh_remove_all_shadows_and_parents(d, mfn);
         if ( sh_remove_all_mappings(d, mfn, _gfn(gfn)) )
-            flush_tlb_mask(d->dirty_cpumask);
+            flush_mask(d->dirty_cpumask, FLUSH_TLB | FLUSH_GUESTS_TLB);
     }
 }
 
@@ -3045,7 +3045,7 @@ static void sh_unshadow_for_p2m_change(struct domain *d, unsigned long gfn,
             }
             omfn = mfn_add(omfn, 1);
         }
-        flush_tlb_mask(&flushmask);
+        flush_mask(&flushmask, FLUSH_TLB | FLUSH_GUESTS_TLB);
 
         if ( npte )
             unmap_domain_page(npte);
@@ -3332,7 +3332,7 @@ int shadow_track_dirty_vram(struct domain *d,
             }
         }
         if ( flush_tlb )
-            flush_tlb_mask(d->dirty_cpumask);
+            flush_mask(d->dirty_cpumask, FLUSH_TLB | FLUSH_GUESTS_TLB);
         goto out;
 
 out_sl1ma:
@@ -3402,7 +3402,7 @@ bool shadow_flush_tlb(bool (*flush_vcpu)(void *ctxt, struct vcpu *v),
     }
 
     /* Flush TLBs on all CPUs with dirty vcpu state. */
-    flush_tlb_mask(mask);
+    flush_mask(mask, FLUSH_TLB | FLUSH_GUESTS_TLB);
 
     /* Done. */
     for_each_vcpu ( d, v )
diff --git a/xen/arch/x86/mm/shadow/hvm.c b/xen/arch/x86/mm/shadow/hvm.c
index a219266fa2..64077d181b 100644
--- a/xen/arch/x86/mm/shadow/hvm.c
+++ b/xen/arch/x86/mm/shadow/hvm.c
@@ -590,7 +590,7 @@ static void validate_guest_pt_write(struct vcpu *v, mfn_t gmfn,
 
     if ( rc & SHADOW_SET_FLUSH )
         /* Need to flush TLBs to pick up shadow PT changes */
-        flush_tlb_mask(d->dirty_cpumask);
+        flush_mask(d->dirty_cpumask, FLUSH_TLB | FLUSH_GUESTS_TLB);
 
     if ( rc & SHADOW_SET_ERROR )
     {
diff --git a/xen/arch/x86/mm/shadow/multi.c b/xen/arch/x86/mm/shadow/multi.c
index 26798b317c..22aeb97b1e 100644
--- a/xen/arch/x86/mm/shadow/multi.c
+++ b/xen/arch/x86/mm/shadow/multi.c
@@ -3066,7 +3066,7 @@ static int sh_page_fault(struct vcpu *v,
         perfc_incr(shadow_rm_write_flush_tlb);
         smp_wmb();
         atomic_inc(&d->arch.paging.shadow.gtable_dirty_version);
-        flush_tlb_mask(d->dirty_cpumask);
+        flush_mask(d->dirty_cpumask, FLUSH_TLB | FLUSH_GUESTS_TLB);
     }
 
 #if (SHADOW_OPTIMIZATIONS & SHOPT_OUT_OF_SYNC)
@@ -3575,7 +3575,7 @@ static bool sh_invlpg(struct vcpu *v, unsigned long linear)
     if ( mfn_to_page(sl1mfn)->u.sh.type
          == SH_type_fl1_shadow )
     {
-        flush_tlb_local();
+        flush_local(FLUSH_TLB | FLUSH_GUESTS_TLB);
         return false;
     }
 
@@ -3810,7 +3810,7 @@ sh_update_linear_entries(struct vcpu *v)
          * table entry. But, without this change, it would fetch the wrong
          * value due to a stale TLB. */
-        flush_tlb_local();
+        flush_local(FLUSH_TLB | FLUSH_GUESTS_TLB);
     }
 }
 
@@ -4011,7 +4011,7 @@ sh_update_cr3(struct vcpu *v, int do_locking, bool noflush)
      * (old) shadow linear maps in the writeable mapping heuristics. */
 #if GUEST_PAGING_LEVELS == 2
     if ( sh_remove_write_access(d, gmfn, 2, 0) != 0 )
-        flush_tlb_mask(d->dirty_cpumask);
+        flush_mask(d->dirty_cpumask, FLUSH_TLB | FLUSH_GUESTS_TLB);
     sh_set_toplevel_shadow(v, 0, gmfn, SH_type_l2_shadow);
 #elif GUEST_PAGING_LEVELS == 3
     /* PAE guests have four shadow_table entries, based on the
@@ -4035,7 +4035,7 @@ sh_update_cr3(struct vcpu *v, int do_locking, bool noflush)
             }
         }
         if ( flush )
-            flush_tlb_mask(d->dirty_cpumask);
+            flush_mask(d->dirty_cpumask, FLUSH_TLB | FLUSH_GUESTS_TLB);
         /* Now install the new shadows. */
         for ( i = 0; i < 4; i++ )
         {
@@ -4056,7 +4056,7 @@ sh_update_cr3(struct vcpu *v, int do_locking, bool noflush)
     }
 #elif GUEST_PAGING_LEVELS == 4
     if ( sh_remove_write_access(d, gmfn, 4, 0) != 0 )
-        flush_tlb_mask(d->dirty_cpumask);
+        flush_mask(d->dirty_cpumask, FLUSH_TLB | FLUSH_GUESTS_TLB);
     sh_set_toplevel_shadow(v, 0, gmfn, SH_type_l4_shadow);
     if ( !shadow_mode_external(d) && !is_pv_32bit_domain(d) )
     {
@@ -4502,7 +4502,7 @@ static void sh_pagetable_dying(paddr_t gpa)
         }
     }
     if ( flush )
-        flush_tlb_mask(d->dirty_cpumask);
+        flush_mask(d->dirty_cpumask, FLUSH_TLB | FLUSH_GUESTS_TLB);
 
     /* Remember that we've seen the guest use this interface, so we
      * can rely on it using it in future, instead of guessing at
@@ -4539,7 +4539,7 @@ static void sh_pagetable_dying(paddr_t gpa)
         mfn_to_page(gmfn)->pagetable_dying = true;
         shadow_unhook_mappings(d, smfn, 1/* user pages only */);
         /* Now flush the TLB: we removed toplevel mappings. */
-        flush_tlb_mask(d->dirty_cpumask);
+        flush_mask(d->dirty_cpumask, FLUSH_TLB | FLUSH_GUESTS_TLB);
     }
 
     /* Remember that we've seen the guest use this interface, so we
diff --git a/xen/include/asm-x86/flushtlb.h b/xen/include/asm-x86/flushtlb.h
index 2cfe4e6e97..07f9bc6103 100644
--- a/xen/include/asm-x86/flushtlb.h
+++ b/xen/include/asm-x86/flushtlb.h
@@ -105,6 +105,8 @@ void switch_cr3_cr4(unsigned long cr3, unsigned long cr4);
 #define FLUSH_VCPU_STATE 0x1000
  /* Flush the per-cpu root page table */
 #define FLUSH_ROOT_PGTBL 0x2000
+ /* Flush all HVM guests linear TLB (using ASID/VPID) */
+#define FLUSH_GUESTS_TLB 0x4000
 
 /* Flush local TLBs/caches. */
 unsigned int flush_area_local(const void *va, unsigned int flags);
-- 
2.25.0
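To make the new flag's use concrete, a caller that has changed
guest-visible mappings would combine it with the existing TLB flag as
sketched below; this is an illustration of the intended usage, not code
taken from the patch, and flush_example is a made-up name.

/*
 * Sketch only: flush Xen's own linear TLB on the pCPUs in @mask and, when
 * guest-visible translations may have changed (shadow paging), also tickle
 * the ASID/VPID of HVM guests via FLUSH_GUESTS_TLB.
 */
static void flush_example(const cpumask_t *mask, bool guest_mappings_changed)
{
    unsigned int flags = FLUSH_TLB;

    if ( guest_mappings_changed )
        flags |= FLUSH_GUESTS_TLB;

    flush_mask(mask, flags);
}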
From: Roger Pau Monne
Date: Wed, 19 Feb 2020 18:43:52 +0100
Subject: [Xen-devel] [PATCH v5 5/7] x86/tlb: allow disabling the TLB clock
Message-ID: <20200219174354.84726-6-roger.pau@citrix.com>
In-Reply-To: <20200219174354.84726-1-roger.pau@citrix.com>
Cc: Andrew Cooper, Wei Liu, Jan Beulich, Roger Pau Monne

The TLB clock is helpful when running Xen on bare metal because, when doing
a TLB flush, each CPU is IPI'ed and can keep a timestamp of the last flush.

This is not the case, however, when Xen runs virtualized and the underlying
hypervisor provides mechanisms to assist in performing TLB flushes: Xen
itself, for example, offers the HVMOP_flush_tlbs hypercall in order to
perform a TLB flush without having to IPI each CPU. When using such
mechanisms it is no longer possible to keep a timestamp of the flushes on
each CPU, as they are performed by the underlying hypervisor.

Offer a boolean in order to signal Xen that the timestamped TLB shouldn't
be used. This avoids keeping the timestamps of the flushes, and also forces
NEED_FLUSH to always return true.

No functional change intended, as this change doesn't introduce any user
that disables the timestamped TLB.

Signed-off-by: Roger Pau Monné
Reviewed-by: Wei Liu
Acked-by: Jan Beulich
---
 xen/arch/x86/flushtlb.c        | 19 +++++++++++++------
 xen/include/asm-x86/flushtlb.h | 17 ++++++++++++++++-
 2 files changed, 29 insertions(+), 7 deletions(-)

diff --git a/xen/arch/x86/flushtlb.c b/xen/arch/x86/flushtlb.c
index e7ccd4ec7b..3649900793 100644
--- a/xen/arch/x86/flushtlb.c
+++ b/xen/arch/x86/flushtlb.c
@@ -32,6 +32,9 @@ u32 tlbflush_clock = 1U;
 DEFINE_PER_CPU(u32, tlbflush_time);
 
+/* Signals whether the TLB flush clock is in use. */
+bool __read_mostly tlb_clk_enabled = true;
+
 /*
  * pre_flush(): Increment the virtual TLB-flush clock. Returns new clock value.
  *
@@ -82,12 +85,13 @@ static void post_flush(u32 t)
 static void do_tlb_flush(void)
 {
     unsigned long flags, cr4;
-    u32 t;
+    u32 t = 0;
 
     /* This non-reentrant function is sometimes called in interrupt context. */
     local_irq_save(flags);
 
-    t = pre_flush();
+    if ( tlb_clk_enabled )
+        t = pre_flush();
 
     if ( use_invpcid )
         invpcid_flush_all();
@@ -99,7 +103,8 @@ static void do_tlb_flush(void)
     else
         write_cr3(read_cr3());
 
-    post_flush(t);
+    if ( tlb_clk_enabled )
+        post_flush(t);
 
     local_irq_restore(flags);
 }
@@ -107,7 +112,7 @@ static void do_tlb_flush(void)
 void switch_cr3_cr4(unsigned long cr3, unsigned long cr4)
 {
     unsigned long flags, old_cr4;
-    u32 t;
+    u32 t = 0;
 
     /* Throughout this function we make this assumption: */
     ASSERT(!(cr4 & X86_CR4_PCIDE) || !(cr4 & X86_CR4_PGE));
@@ -115,7 +120,8 @@ void switch_cr3_cr4(unsigned long cr3, unsigned long cr4)
     /* This non-reentrant function is sometimes called in interrupt context. */
     local_irq_save(flags);
 
-    t = pre_flush();
+    if ( tlb_clk_enabled )
+        t = pre_flush();
 
     old_cr4 = read_cr4();
     ASSERT(!(old_cr4 & X86_CR4_PCIDE) || !(old_cr4 & X86_CR4_PGE));
@@ -167,7 +173,8 @@ void switch_cr3_cr4(unsigned long cr3, unsigned long cr4)
     if ( cr4 & X86_CR4_PCIDE )
         invpcid_flush_all_nonglobals();
 
-    post_flush(t);
+    if ( tlb_clk_enabled )
+        post_flush(t);
 
     local_irq_restore(flags);
 }
diff --git a/xen/include/asm-x86/flushtlb.h b/xen/include/asm-x86/flushtlb.h
index 07f9bc6103..9773014320 100644
--- a/xen/include/asm-x86/flushtlb.h
+++ b/xen/include/asm-x86/flushtlb.h
@@ -21,10 +21,21 @@ extern u32 tlbflush_clock;
 /* Time at which each CPU's TLB was last flushed. */
 DECLARE_PER_CPU(u32, tlbflush_time);
 
-#define tlbflush_current_time() tlbflush_clock
+/* TLB clock is in use. */
+extern bool tlb_clk_enabled;
+
+static inline uint32_t tlbflush_current_time(void)
+{
+    /* Returning 0 from tlbflush_current_time will always force a flush. */
+    return tlb_clk_enabled ? tlbflush_clock : 0;
+}
 
 static inline void page_set_tlbflush_timestamp(struct page_info *page)
 {
+    /* Avoid the write if the TLB clock is disabled. */
+    if ( !tlb_clk_enabled )
+        return;
+
     /*
      * Prevent storing a stale time stamp, which could happen if an update
      * to tlbflush_clock plus a subsequent flush IPI happen between the
@@ -67,6 +78,10 @@ static inline void tlbflush_filter(cpumask_t *mask, uint32_t page_timestamp)
 {
     unsigned int cpu;
 
+    /* Short-circuit: there's no need to iterate if the clock is disabled. */
+    if ( !tlb_clk_enabled )
+        return;
+
     for_each_cpu ( cpu, mask )
         if ( !NEED_FLUSH(per_cpu(tlbflush_time, cpu), page_timestamp) )
             __cpumask_clear_cpu(cpu, mask);
-- 
2.25.0
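A later patch in the series is presumably the first user that turns the
clock off when an assisted flush is available. A minimal sketch of such a
user is shown below; assisted_flush_available() is a made-up placeholder
for whatever detection the real user performs, while cpu_has_hypervisor and
tlb_clk_enabled are the existing symbols.

/*
 * Sketch only: disable the timestamped TLB when the underlying hypervisor
 * performs TLB flushes on Xen's behalf.
 */
static void __init maybe_disable_tlb_clock(void)
{
    if ( cpu_has_hypervisor && assisted_flush_available() )
        tlb_clk_enabled = false;
}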
From: Roger Pau Monne
Date: Wed, 19 Feb 2020 18:43:53 +0100
Message-ID: <20200219174354.84726-7-roger.pau@citrix.com>
Subject: [Xen-devel] [PATCH v5 6/7] xen/guest: prepare hypervisor ops to use alternative calls
Cc: Andrew Cooper, Paul Durrant, Jan Beulich, Wei Liu, Roger Pau Monne

Adapt the hypervisor ops framework so it can be used with the alternative
calls framework. So far no hooks are switched to alternative calls, as none
of them is on a hot path.

No functional change intended.

Signed-off-by: Roger Pau Monné
Reviewed-by: Wei Liu
Reviewed-by: Paul Durrant
Reviewed-by: Jan Beulich
---
Changes since v3:
 - New in this version.
---
 xen/arch/x86/guest/hyperv/hyperv.c |  2 +-
 xen/arch/x86/guest/hypervisor.c    | 41 +++++++++++++++---------------
 xen/arch/x86/guest/xen/xen.c       |  2 +-
 3 files changed, 23 insertions(+), 22 deletions(-)

diff --git a/xen/arch/x86/guest/hyperv/hyperv.c b/xen/arch/x86/guest/hyperv/hyperv.c
index fabc62b0d6..70f4cd5ae0 100644
--- a/xen/arch/x86/guest/hyperv/hyperv.c
+++ b/xen/arch/x86/guest/hyperv/hyperv.c
@@ -199,7 +199,7 @@ static void __init e820_fixup(struct e820map *e820)
         panic("Unable to reserve Hyper-V hypercall range\n");
 }
 
-static const struct hypervisor_ops ops = {
+static const struct hypervisor_ops __initdata ops = {
     .name = "Hyper-V",
     .setup = setup,
     .ap_setup = ap_setup,
diff --git a/xen/arch/x86/guest/hypervisor.c b/xen/arch/x86/guest/hypervisor.c
index 5fd433c8d4..647cdb1367 100644
--- a/xen/arch/x86/guest/hypervisor.c
+++ b/xen/arch/x86/guest/hypervisor.c
@@ -24,52 +24,53 @@
 #include
 #include
 
-static const struct hypervisor_ops *__read_mostly ops;
+static struct hypervisor_ops __read_mostly ops;
 
 const char *__init hypervisor_probe(void)
 {
+    const struct hypervisor_ops *fns;
+
     if ( !cpu_has_hypervisor )
         return NULL;
 
-    ops = xg_probe();
-    if ( ops )
-        return ops->name;
+    fns = xg_probe();
+    if ( !fns )
+        /*
+         * Detection of Hyper-V must come after Xen to avoid false positive due
+         * to viridian support
+         */
+        fns = hyperv_probe();
 
-    /*
-     * Detection of Hyper-V must come after Xen to avoid false positive due
-     * to viridian support
-     */
-    ops = hyperv_probe();
-    if ( ops )
-        return ops->name;
+    if ( fns )
+        ops = *fns;
 
-    return NULL;
+    return ops.name;
 }
 
 void __init hypervisor_setup(void)
 {
-    if ( ops && ops->setup )
-        ops->setup();
+    if ( ops.setup )
+        ops.setup();
 }
 
 int hypervisor_ap_setup(void)
 {
-    if ( ops && ops->ap_setup )
-        return ops->ap_setup();
+    if ( ops.ap_setup )
+        return ops.ap_setup();
 
     return 0;
 }
 
 void hypervisor_resume(void)
 {
-    if ( ops && ops->resume )
-        ops->resume();
+    if ( ops.resume )
+        ops.resume();
 }
 
 void __init hypervisor_e820_fixup(struct e820map *e820)
 {
-    if ( ops && ops->e820_fixup )
-        ops->e820_fixup(e820);
+    if ( ops.e820_fixup )
+        ops.e820_fixup(e820);
 }
 
 /*
diff --git a/xen/arch/x86/guest/xen/xen.c b/xen/arch/x86/guest/xen/xen.c
index 3cf8f667a1..f151b07548 100644
--- a/xen/arch/x86/guest/xen/xen.c
+++ b/xen/arch/x86/guest/xen/xen.c
@@ -324,7 +324,7 @@ static void __init e820_fixup(struct e820map *e820)
         pv_shim_fixup_e820(e820);
 }
 
-static const struct hypervisor_ops ops = {
+static const struct hypervisor_ops __initdata ops = {
     .name = "Xen",
     .setup = setup,
     .ap_setup = ap_setup,
-- 
2.25.0
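For illustration, a small sketch (plain C, outside Xen) of the pattern this
file is moved to: the probed vendor's ops are copied by value into a single
global structure rather than kept behind a pointer, so every call site reads
one statically known object, which is what later allows alternative_call()
patching of the indirect calls. The structure is trimmed to two members for
brevity, and hypervisor_probe() here simply assumes Xen was detected.

    #include <stdio.h>

    struct hypervisor_ops {
        const char *name;
        void (*setup)(void);
    };

    static void xen_setup(void) { puts("Xen guest setup"); }

    static const struct hypervisor_ops xen_ops = {
        .name  = "Xen",
        .setup = xen_setup,
    };

    static struct hypervisor_ops ops;      /* filled once during probe */

    static const char *hypervisor_probe(void)
    {
        /* Probe/detection logic elided; assume the Xen variant was found. */
        ops = xen_ops;                     /* copy by value, as in the patch */
        return ops.name;
    }

    static void hypervisor_setup(void)
    {
        /* In Xen this later becomes alternative_call(ops.setup). */
        if ( ops.setup )
            ops.setup();
    }

    int main(void)
    {
        printf("Detected: %s\n", hypervisor_probe());
        hypervisor_setup();
        return 0;
    }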
x-sender="postmaster@mail.citrix.com"; x-conformance=sidf_compatible IronPort-SDR: NHCsva8dvMbAoxp5CfwSf26mmDyezD4/bLAjhPqsrVxtalyGK/gzc7D+ry3dHkJjd/t19+AyFf wwmPSrC+BRa9pbdybLYiYlhtnIqyIBcH3BtzoAN3ob7FZmKfGguPHFdaE1FacAHP1xN85fHACU tueuirdB4/SHYFgN1c2ZlFQvGXFJ0HxKwD3pMi8+TPtwNNWnJnje77/kI/ipiOOQWDc0W0LW3/ uXdWsbqLOP8VSAmLH9cTnhcJ1nOfhaxLxnWKA5BBgJr3HJL2phXw+ebYdyZjCBYW7OLMkNcXgU 6jo= X-SBRS: 2.7 X-MesageID: 12882319 X-Ironport-Server: esa1.hc3370-68.iphmx.com X-Remote-IP: 162.221.158.21 X-Policy: $RELAYED X-IronPort-AV: E=Sophos;i="5.70,461,1574139600"; d="scan'208";a="12882319" From: Roger Pau Monne To: Date: Wed, 19 Feb 2020 18:43:54 +0100 Message-ID: <20200219174354.84726-8-roger.pau@citrix.com> X-Mailer: git-send-email 2.25.0 In-Reply-To: <20200219174354.84726-1-roger.pau@citrix.com> References: <20200219174354.84726-1-roger.pau@citrix.com> MIME-Version: 1.0 Subject: [Xen-devel] [PATCH v5 7/7] x86/tlb: use Xen L0 assisted TLB flush when available X-BeenThere: xen-devel@lists.xenproject.org X-Mailman-Version: 2.1.23 Precedence: list List-Id: Xen developer discussion List-Unsubscribe: , List-Post: List-Help: List-Subscribe: , Cc: Andrew Cooper , Wei Liu , Jan Beulich , Roger Pau Monne Content-Type: text/plain; charset="utf-8" Content-Transfer-Encoding: quoted-printable Errors-To: xen-devel-bounces@lists.xenproject.org Sender: "Xen-devel" X-ZohoMail-DKIM: fail (Header signature does not verify) Use Xen's L0 HVMOP_flush_tlbs hypercall in order to perform flushes. This greatly increases the performance of TLB flushes when running with a high amount of vCPUs as a Xen guest, and is specially important when running in shim mode. The following figures are from a PV guest running `make -j32 xen` in shim mode with 32 vCPUs and HAP. Using x2APIC and ALLBUT shorthand: real 4m35.973s user 4m35.110s sys 36m24.117s Using L0 assisted flush: real 1m2.596s user 4m34.818s sys 5m16.374s The implementation adds a new hook to hypervisor_ops so other enlightenments can also implement such assisted flush just by filling the hook. Note that the Xen implementation completely ignores the dirty CPU mask and the linear address passed in, and always performs a global TLB flush on all vCPUs. Signed-off-by: Roger Pau Monn=C3=A9 Reviewed-by: Wei Liu --- Changes since v4: - Adjust order calculation. Changes since v3: - Use an alternative call for the flush hook. Changes since v1: - Add a L0 assisted hook to hypervisor ops. --- xen/arch/x86/guest/hypervisor.c | 10 ++++++++++ xen/arch/x86/guest/xen/xen.c | 6 ++++++ xen/arch/x86/smp.c | 11 +++++++++++ xen/include/asm-x86/guest/hypervisor.h | 17 +++++++++++++++++ 4 files changed, 44 insertions(+) diff --git a/xen/arch/x86/guest/hypervisor.c b/xen/arch/x86/guest/hyperviso= r.c index 647cdb1367..47e938e287 100644 --- a/xen/arch/x86/guest/hypervisor.c +++ b/xen/arch/x86/guest/hypervisor.c @@ -18,6 +18,7 @@ * * Copyright (c) 2019 Microsoft. 
---
 xen/arch/x86/guest/hypervisor.c        | 10 ++++++++++
 xen/arch/x86/guest/xen/xen.c           |  6 ++++++
 xen/arch/x86/smp.c                     | 11 +++++++++++
 xen/include/asm-x86/guest/hypervisor.h | 17 +++++++++++++++++
 4 files changed, 44 insertions(+)

diff --git a/xen/arch/x86/guest/hypervisor.c b/xen/arch/x86/guest/hypervisor.c
index 647cdb1367..47e938e287 100644
--- a/xen/arch/x86/guest/hypervisor.c
+++ b/xen/arch/x86/guest/hypervisor.c
@@ -18,6 +18,7 @@
  *
  * Copyright (c) 2019 Microsoft.
  */
+#include
 #include
 #include
 
@@ -73,6 +74,15 @@ void __init hypervisor_e820_fixup(struct e820map *e820)
         ops.e820_fixup(e820);
 }
 
+int hypervisor_flush_tlb(const cpumask_t *mask, const void *va,
+                         unsigned int order)
+{
+    if ( ops.flush_tlb )
+        return alternative_call(ops.flush_tlb, mask, va, order);
+
+    return -ENOSYS;
+}
+
 /*
  * Local variables:
  * mode: C
diff --git a/xen/arch/x86/guest/xen/xen.c b/xen/arch/x86/guest/xen/xen.c
index f151b07548..5d3427a713 100644
--- a/xen/arch/x86/guest/xen/xen.c
+++ b/xen/arch/x86/guest/xen/xen.c
@@ -324,12 +324,18 @@ static void __init e820_fixup(struct e820map *e820)
         pv_shim_fixup_e820(e820);
 }
 
+static int flush_tlb(const cpumask_t *mask, const void *va, unsigned int order)
+{
+    return xen_hypercall_hvm_op(HVMOP_flush_tlbs, NULL);
+}
+
 static const struct hypervisor_ops __initdata ops = {
     .name = "Xen",
     .setup = setup,
     .ap_setup = ap_setup,
     .resume = resume,
     .e820_fixup = e820_fixup,
+    .flush_tlb = flush_tlb,
 };
 
 const struct hypervisor_ops *__init xg_probe(void)
diff --git a/xen/arch/x86/smp.c b/xen/arch/x86/smp.c
index fac295fa6f..55d08c9d52 100644
--- a/xen/arch/x86/smp.c
+++ b/xen/arch/x86/smp.c
@@ -15,6 +15,7 @@
 #include
 #include
 #include
+#include
 #include
 #include
 #include
@@ -256,6 +257,16 @@ void flush_area_mask(const cpumask_t *mask, const void *va, unsigned int flags)
     if ( (flags & ~FLUSH_ORDER_MASK) &&
          !cpumask_subset(mask, cpumask_of(cpu)) )
     {
+        if ( cpu_has_hypervisor &&
+             !(flags & ~(FLUSH_TLB | FLUSH_TLB_GLOBAL | FLUSH_VA_VALID |
+                         FLUSH_ORDER_MASK)) &&
+             !hypervisor_flush_tlb(mask, va, (flags - 1) & FLUSH_ORDER_MASK) )
+        {
+            if ( tlb_clk_enabled )
+                tlb_clk_enabled = false;
+            return;
+        }
+
         spin_lock(&flush_lock);
         cpumask_and(&flush_cpumask, mask, &cpu_online_map);
         cpumask_clear_cpu(cpu, &flush_cpumask);
diff --git a/xen/include/asm-x86/guest/hypervisor.h b/xen/include/asm-x86/guest/hypervisor.h
index ade10e74ea..432e57c2a0 100644
--- a/xen/include/asm-x86/guest/hypervisor.h
+++ b/xen/include/asm-x86/guest/hypervisor.h
@@ -19,6 +19,8 @@
 #ifndef __X86_HYPERVISOR_H__
 #define __X86_HYPERVISOR_H__
 
+#include
+
 #include
 
 struct hypervisor_ops {
@@ -32,6 +34,8 @@ struct hypervisor_ops {
     void (*resume)(void);
     /* Fix up e820 map */
     void (*e820_fixup)(struct e820map *e820);
+    /* L0 assisted TLB flush */
+    int (*flush_tlb)(const cpumask_t *mask, const void *va, unsigned int order);
 };
 
 #ifdef CONFIG_GUEST
@@ -41,6 +45,14 @@ void hypervisor_setup(void);
 int hypervisor_ap_setup(void);
 void hypervisor_resume(void);
 void hypervisor_e820_fixup(struct e820map *e820);
+/*
+ * L0 assisted TLB flush.
+ * mask: cpumask of the dirty vCPUs that should be flushed.
+ * va: linear address to flush, or NULL for global flushes.
+ * order: order of the linear address pointed by va.
+ */
+int hypervisor_flush_tlb(const cpumask_t *mask, const void *va,
+                         unsigned int order);
 
 #else
 
@@ -52,6 +64,11 @@ static inline void hypervisor_setup(void) { ASSERT_UNREACHABLE(); }
 static inline int hypervisor_ap_setup(void) { return 0; }
 static inline void hypervisor_resume(void) { ASSERT_UNREACHABLE(); }
 static inline void hypervisor_e820_fixup(struct e820map *e820) {}
+static inline int hypervisor_flush_tlb(const cpumask_t *mask, const void *va,
+                                       unsigned int order)
+{
+    return -ENOSYS;
+}
 
 #endif /* CONFIG_GUEST */
 
-- 
2.25.0
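A note on the order calculation mentioned in the changelog: assuming the usual
encoding in Xen's asm-x86/flushtlb.h where FLUSH_ORDER(x) is (x) + 1 and the
low byte of flags carries that biased value, the "(flags - 1) & FLUSH_ORDER_MASK"
in the flush_area_mask() hunk recovers the unbiased order before handing it to
the hook. A toy round-trip (the flag values here are assumptions for
illustration, not taken from the header):

    #include <assert.h>
    #include <stdio.h>

    /* Assumed encoding: the low byte of flags holds order + 1 (0 = no order). */
    #define FLUSH_ORDER_MASK 0xff
    #define FLUSH_ORDER(x)   ((x) + 1)
    #define FLUSH_TLB        0x100   /* illustrative flag bit, value assumed */

    int main(void)
    {
        unsigned int order = 3;      /* flushing a 2^3 page range */
        unsigned int flags = FLUSH_TLB | FLUSH_ORDER(order);

        /* Decode as the patched flush_area_mask() does before calling the hook. */
        unsigned int decoded = (flags - 1) & FLUSH_ORDER_MASK;

        assert(decoded == order);
        printf("flags=%#x order=%u\n", flags, decoded);
        return 0;
    }

The same hunk also clears tlb_clk_enabled after the first successful assisted
flush, presumably because CPUs flushed via the hypercall never run the
pre_flush()/post_flush() clock updates, so the timestamps can no longer be
relied upon and the clock from patch 2/7 is switched off.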