From nobody Fri May 3 19:47:47 2024
From: Roger Pau Monne <roger.pau@citrix.com>
Date: Mon, 27 Jan 2020 19:11:09 +0100
Message-ID: <20200127181115.82709-2-roger.pau@citrix.com>
In-Reply-To: <20200127181115.82709-1-roger.pau@citrix.com>
Subject: [Xen-devel] [PATCH v3 1/7] x86/tlb: fix NEED_FLUSH return type
Cc: Andrew Cooper, Wei Liu, Roger Pau Monne

The returned type wants to be bool instead of int.

No functional change intended.

Signed-off-by: Roger Pau Monné
Acked-by: Jan Beulich
Reviewed-by: Wei Liu
---
 xen/include/asm-x86/flushtlb.h | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/xen/include/asm-x86/flushtlb.h b/xen/include/asm-x86/flushtlb.h
index 434821aaf3..2cfe4e6e97 100644
--- a/xen/include/asm-x86/flushtlb.h
+++ b/xen/include/asm-x86/flushtlb.h
@@ -42,7 +42,7 @@ static inline void page_set_tlbflush_timestamp(struct page_info *page)
  * @lastuse_stamp is a timestamp taken when the PFN we are testing was last
  * used for a purpose that may have caused the CPU's TLB to become tainted.
  */
-static inline int NEED_FLUSH(u32 cpu_stamp, u32 lastuse_stamp)
+static inline bool NEED_FLUSH(u32 cpu_stamp, u32 lastuse_stamp)
 {
     u32 curr_time = tlbflush_current_time();
     /*
-- 
2.25.0
From nobody Fri May 3 19:47:47 2024
From: Roger Pau Monne <roger.pau@citrix.com>
Date: Mon, 27 Jan 2020 19:11:10 +0100
Message-ID: <20200127181115.82709-3-roger.pau@citrix.com>
In-Reply-To: <20200127181115.82709-1-roger.pau@citrix.com>
Subject: [Xen-devel] [PATCH v3 2/7] x86/hvm: allow ASID flush when v != current
Cc: Andrew Cooper, Wei Liu, Roger Pau Monne

The current implementation of hvm_asid_flush_vcpu is not safe to use
unless the target vCPU is either paused or the currently running one,
as it modifies the generation without any locking.

Fix this by using atomic operations when accessing the generation
field, both in hvm_asid_flush_vcpu_asid and the other ASID functions.
This makes it safe to flush the current ASID generation. Note that if
the vCPU is currently running, a vmexit is required for the flush to
take effect.

Note the same could be achieved by introducing an extra field to
hvm_vcpu_asid that signals hvm_asid_handle_vmenter the need to call
hvm_asid_flush_vcpu on the given vCPU before vmentry. This however
seems unnecessary, as hvm_asid_flush_vcpu itself only sets two vCPU
fields to 0, so there's no need to delay this to the vmentry ASID
helper.

This is not a bugfix, as no callers that would violate the assumptions
listed in the first paragraph have been found, but a preparatory change
in order to allow remote flushing of HVM vCPUs.

Signed-off-by: Roger Pau Monné
Reviewed-by: Wei Liu
---
 xen/arch/x86/hvm/asid.c | 6 +++---
 1 file changed, 3 insertions(+), 3 deletions(-)

diff --git a/xen/arch/x86/hvm/asid.c b/xen/arch/x86/hvm/asid.c
index 9d3c671a5f..80b73da89b 100644
--- a/xen/arch/x86/hvm/asid.c
+++ b/xen/arch/x86/hvm/asid.c
@@ -82,7 +82,7 @@ void hvm_asid_init(int nasids)
 
 void hvm_asid_flush_vcpu_asid(struct hvm_vcpu_asid *asid)
 {
-    asid->generation = 0;
+    write_atomic(&asid->generation, 0);
 }
 
 void hvm_asid_flush_vcpu(struct vcpu *v)
@@ -120,7 +120,7 @@ bool_t hvm_asid_handle_vmenter(struct hvm_vcpu_asid *asid)
         goto disabled;
 
     /* Test if VCPU has valid ASID. */
-    if ( asid->generation == data->core_asid_generation )
+    if ( read_atomic(&asid->generation) == data->core_asid_generation )
         return 0;
 
     /* If there are no free ASIDs, need to go to a new generation */
@@ -134,7 +134,7 @@ bool_t hvm_asid_handle_vmenter(struct hvm_vcpu_asid *asid)
 
     /* Now guaranteed to be a free ASID. */
     asid->asid = data->next_asid++;
-    asid->generation = data->core_asid_generation;
+    write_atomic(&asid->generation, data->core_asid_generation);
 
     /*
      * When we assign ASID 1, flush all TLB entries as we are starting a new
-- 
2.25.0
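The race these atomics close is between a remote CPU zeroing asid->generation and the target vCPU comparing that generation against the core generation on vmentry. A minimal standalone sketch of the pattern, with hypothetical types and helpers standing in for the Xen ones (write_atomic/read_atomic rendered as C11 atomics):

    #include <stdatomic.h>
    #include <stdint.h>

    struct asid_model {
        _Atomic uint64_t generation;    /* 0 means "needs a fresh ASID" */
        uint32_t asid;
    };

    /* Remote flush: safe even while the target vCPU is running. */
    static void flush_vcpu_asid(struct asid_model *a)
    {
        atomic_store(&a->generation, 0);
    }

    /* vmentry path: reallocate the ASID when the generation is stale. */
    static int handle_vmenter(struct asid_model *a, uint64_t core_generation,
                              uint32_t *next_asid)
    {
        if ( atomic_load(&a->generation) == core_generation )
            return 0;                   /* ASID still valid, nothing to do */

        a->asid = (*next_asid)++;       /* pick a fresh ASID */
        atomic_store(&a->generation, core_generation);
        return 1;                       /* new ASID assigned */
    }

Because the zeroed generation can never match the core generation, a flush requested between the load and the vmentry is picked up on the next vmentry at the latest, which is why no pausing is needed.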
From nobody Fri May 3 19:47:47 2024
From: Roger Pau Monne <roger.pau@citrix.com>
Date: Mon, 27 Jan 2020 19:11:11 +0100
Message-ID: <20200127181115.82709-4-roger.pau@citrix.com>
In-Reply-To: <20200127181115.82709-1-roger.pau@citrix.com>
Subject: [Xen-devel] [PATCH v3 3/7] x86/paging: add TLB flush hooks
Cc: Wei Liu, George Dunlap, Andrew Cooper, Tim Deegan, Roger Pau Monne

Add shadow and hap implementation specific helpers to perform guest
TLB flushes. Note that the code for both is exactly the same at the
moment, and is copied from hvm_flush_vcpu_tlb. This will be changed by
further patches that will add implementation specific optimizations to
them.

No functional change intended.

Signed-off-by: Roger Pau Monné
Reviewed-by: Wei Liu
---
 xen/arch/x86/hvm/hvm.c          | 51 ++----------------------------
 xen/arch/x86/mm/hap/hap.c       | 54 ++++++++++++++++++++++++++++++++
 xen/arch/x86/mm/shadow/common.c | 55 +++++++++++++++++++++++++++++++++
 xen/arch/x86/mm/shadow/multi.c  |  1 -
 xen/include/asm-x86/hap.h       |  3 ++
 xen/include/asm-x86/shadow.h    | 12 +++++++
 6 files changed, 127 insertions(+), 49 deletions(-)

diff --git a/xen/arch/x86/hvm/hvm.c b/xen/arch/x86/hvm/hvm.c
index 0b93609a82..96c419f0ef 100644
--- a/xen/arch/x86/hvm/hvm.c
+++ b/xen/arch/x86/hvm/hvm.c
@@ -3986,55 +3986,10 @@ static void hvm_s3_resume(struct domain *d)
 bool hvm_flush_vcpu_tlb(bool (*flush_vcpu)(void *ctxt, struct vcpu *v),
                         void *ctxt)
 {
-    static DEFINE_PER_CPU(cpumask_t, flush_cpumask);
-    cpumask_t *mask = &this_cpu(flush_cpumask);
-    struct domain *d = current->domain;
-    struct vcpu *v;
-
-    /* Avoid deadlock if more than one vcpu tries this at the same time. */
-    if ( !spin_trylock(&d->hypercall_deadlock_mutex) )
-        return false;
-
-    /* Pause all other vcpus. */
-    for_each_vcpu ( d, v )
-        if ( v != current && flush_vcpu(ctxt, v) )
-            vcpu_pause_nosync(v);
-
-    /* Now that all VCPUs are signalled to deschedule, we wait... */
-    for_each_vcpu ( d, v )
-        if ( v != current && flush_vcpu(ctxt, v) )
-            while ( !vcpu_runnable(v) && v->is_running )
-                cpu_relax();
-
-    /* All other vcpus are paused, safe to unlock now. */
-    spin_unlock(&d->hypercall_deadlock_mutex);
-
-    cpumask_clear(mask);
-
-    /* Flush paging-mode soft state (e.g., va->gfn cache; PAE PDPE cache). */
-    for_each_vcpu ( d, v )
-    {
-        unsigned int cpu;
-
-        if ( !flush_vcpu(ctxt, v) )
-            continue;
-
-        paging_update_cr3(v, false);
+    struct domain *currd = current->domain;
 
-        cpu = read_atomic(&v->dirty_cpu);
-        if ( is_vcpu_dirty_cpu(cpu) )
-            __cpumask_set_cpu(cpu, mask);
-    }
-
-    /* Flush TLBs on all CPUs with dirty vcpu state. */
-    flush_tlb_mask(mask);
-
-    /* Done. */
-    for_each_vcpu ( d, v )
-        if ( v != current && flush_vcpu(ctxt, v) )
-            vcpu_unpause(v);
-
-    return true;
+    return shadow_mode_enabled(currd) ? shadow_flush_tlb(flush_vcpu, ctxt)
+                                      : hap_flush_tlb(flush_vcpu, ctxt);
 }
 
 static bool always_flush(void *ctxt, struct vcpu *v)
diff --git a/xen/arch/x86/mm/hap/hap.c b/xen/arch/x86/mm/hap/hap.c
index 3d93f3451c..6894c1aa38 100644
--- a/xen/arch/x86/mm/hap/hap.c
+++ b/xen/arch/x86/mm/hap/hap.c
@@ -669,6 +669,60 @@ static void hap_update_cr3(struct vcpu *v, int do_locking, bool noflush)
     hvm_update_guest_cr3(v, noflush);
 }
 
+bool hap_flush_tlb(bool (*flush_vcpu)(void *ctxt, struct vcpu *v),
+                   void *ctxt)
+{
+    static DEFINE_PER_CPU(cpumask_t, flush_cpumask);
+    cpumask_t *mask = &this_cpu(flush_cpumask);
+    struct domain *d = current->domain;
+    struct vcpu *v;
+
+    /* Avoid deadlock if more than one vcpu tries this at the same time. */
+    if ( !spin_trylock(&d->hypercall_deadlock_mutex) )
+        return false;
+
+    /* Pause all other vcpus. */
+    for_each_vcpu ( d, v )
+        if ( v != current && flush_vcpu(ctxt, v) )
+            vcpu_pause_nosync(v);
+
+    /* Now that all VCPUs are signalled to deschedule, we wait... */
+    for_each_vcpu ( d, v )
+        if ( v != current && flush_vcpu(ctxt, v) )
+            while ( !vcpu_runnable(v) && v->is_running )
+                cpu_relax();
+
+    /* All other vcpus are paused, safe to unlock now. */
+    spin_unlock(&d->hypercall_deadlock_mutex);
+
+    cpumask_clear(mask);
+
+    /* Flush paging-mode soft state (e.g., va->gfn cache; PAE PDPE cache). */
+    for_each_vcpu ( d, v )
+    {
+        unsigned int cpu;
+
+        if ( !flush_vcpu(ctxt, v) )
+            continue;
+
+        paging_update_cr3(v, false);
+
+        cpu = read_atomic(&v->dirty_cpu);
+        if ( is_vcpu_dirty_cpu(cpu) )
+            __cpumask_set_cpu(cpu, mask);
+    }
+
+    /* Flush TLBs on all CPUs with dirty vcpu state. */
+    flush_tlb_mask(mask);
+
+    /* Done. */
+    for_each_vcpu ( d, v )
+        if ( v != current && flush_vcpu(ctxt, v) )
+            vcpu_unpause(v);
+
+    return true;
+}
+
 const struct paging_mode *
 hap_paging_get_mode(struct vcpu *v)
 {
diff --git a/xen/arch/x86/mm/shadow/common.c b/xen/arch/x86/mm/shadow/common.c
index 6212ec2c4a..ee90e55b41 100644
--- a/xen/arch/x86/mm/shadow/common.c
+++ b/xen/arch/x86/mm/shadow/common.c
@@ -3357,6 +3357,61 @@ out:
     return rc;
 }
 
+/* Flush TLB of selected vCPUs. */
+bool shadow_flush_tlb(bool (*flush_vcpu)(void *ctxt, struct vcpu *v),
+                      void *ctxt)
+{
+    static DEFINE_PER_CPU(cpumask_t, flush_cpumask);
+    cpumask_t *mask = &this_cpu(flush_cpumask);
+    struct domain *d = current->domain;
+    struct vcpu *v;
+
+    /* Avoid deadlock if more than one vcpu tries this at the same time. */
+    if ( !spin_trylock(&d->hypercall_deadlock_mutex) )
+        return false;
+
+    /* Pause all other vcpus. */
+    for_each_vcpu ( d, v )
+        if ( v != current && flush_vcpu(ctxt, v) )
+            vcpu_pause_nosync(v);
+
+    /* Now that all VCPUs are signalled to deschedule, we wait... */
+    for_each_vcpu ( d, v )
+        if ( v != current && flush_vcpu(ctxt, v) )
+            while ( !vcpu_runnable(v) && v->is_running )
+                cpu_relax();
+
+    /* All other vcpus are paused, safe to unlock now. */
+    spin_unlock(&d->hypercall_deadlock_mutex);
+
+    cpumask_clear(mask);
+
+    /* Flush paging-mode soft state (e.g., va->gfn cache; PAE PDPE cache). */
+    for_each_vcpu ( d, v )
+    {
+        unsigned int cpu;
+
+        if ( !flush_vcpu(ctxt, v) )
+            continue;
+
+        paging_update_cr3(v, false);
+
+        cpu = read_atomic(&v->dirty_cpu);
+        if ( is_vcpu_dirty_cpu(cpu) )
+            __cpumask_set_cpu(cpu, mask);
+    }
+
+    /* Flush TLBs on all CPUs with dirty vcpu state. */
+    flush_tlb_mask(mask);
+
+    /* Done. */
+    for_each_vcpu ( d, v )
+        if ( v != current && flush_vcpu(ctxt, v) )
+            vcpu_unpause(v);
+
+    return true;
+}
+
 /**************************************************************************/
 /* Shadow-control XEN_DOMCTL dispatcher */
 
diff --git a/xen/arch/x86/mm/shadow/multi.c b/xen/arch/x86/mm/shadow/multi.c
index 26798b317c..dfe264cf83 100644
--- a/xen/arch/x86/mm/shadow/multi.c
+++ b/xen/arch/x86/mm/shadow/multi.c
@@ -4157,7 +4157,6 @@ sh_update_cr3(struct vcpu *v, int do_locking, bool noflush)
     if ( do_locking ) paging_unlock(v->domain);
 }
 
-
 /**************************************************************************/
 /* Functions to revoke guest rights */
 
diff --git a/xen/include/asm-x86/hap.h b/xen/include/asm-x86/hap.h
index b94bfb4ed0..0c6aa26b9b 100644
--- a/xen/include/asm-x86/hap.h
+++ b/xen/include/asm-x86/hap.h
@@ -46,6 +46,9 @@ int hap_track_dirty_vram(struct domain *d,
 extern const struct paging_mode *hap_paging_get_mode(struct vcpu *);
 int hap_set_allocation(struct domain *d, unsigned int pages, bool *preempted);
 
+bool hap_flush_tlb(bool (*flush_vcpu)(void *ctxt, struct vcpu *v),
+                   void *ctxt);
+
 #endif /* XEN_HAP_H */
 
 /*
diff --git a/xen/include/asm-x86/shadow.h b/xen/include/asm-x86/shadow.h
index 907c71f497..3c1f6df478 100644
--- a/xen/include/asm-x86/shadow.h
+++ b/xen/include/asm-x86/shadow.h
@@ -95,6 +95,10 @@ void shadow_blow_tables_per_domain(struct domain *d);
 int shadow_set_allocation(struct domain *d, unsigned int pages,
                           bool *preempted);
 
+/* Flush the TLB of the selected vCPUs. */
+bool shadow_flush_tlb(bool (*flush_vcpu)(void *ctxt, struct vcpu *v),
+                      void *ctxt);
+
 #else /* !CONFIG_SHADOW_PAGING */
 
 #define shadow_teardown(d, p) ASSERT(is_pv_domain(d))
@@ -106,6 +110,14 @@ int shadow_set_allocation(struct domain *d, unsigned int pages,
 #define shadow_set_allocation(d, pages, preempted) \
     ({ ASSERT_UNREACHABLE(); -EOPNOTSUPP; })
 
+static inline bool shadow_flush_tlb(bool (*flush_vcpu)(void *ctxt,
+                                                       struct vcpu *v),
+                                    void *ctxt)
+{
+    ASSERT_UNREACHABLE();
+    return -EOPNOTSUPP;
+}
+
 static inline void sh_remove_shadows(struct domain *d, mfn_t gmfn,
                                      int fast, int all) {}
 
-- 
2.25.0
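The flush_vcpu callback is the selection mechanism: hvm_flush_vcpu_tlb (and now the hap/shadow helpers) call it once per vCPU and only act on the vCPUs for which it returns true. A hedged sketch of a hypothetical caller using a per-vCPU bitmap as the selector (this caller is illustrative and not part of the series):

    /* Flush only the vCPUs whose bit is set in a caller-provided bitmap;
     * a NULL bitmap selects every vCPU. */
    static bool flush_selected(void *ctxt, struct vcpu *v)
    {
        const unsigned long *vcpu_bitmap = ctxt;

        return !vcpu_bitmap || test_bit(v->vcpu_id, vcpu_bitmap);
    }

    /* Caller: the helper returns false when another vCPU already holds the
     * deadlock-avoidance lock, in which case the operation is retried. */
    if ( !hvm_flush_vcpu_tlb(flush_selected, (void *)vcpu_bitmap) )
        return -ERESTART;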
From nobody Fri May 3 19:47:47 2024
From: Roger Pau Monne <roger.pau@citrix.com>
Date: Mon, 27 Jan 2020 19:11:12 +0100
Message-ID: <20200127181115.82709-5-roger.pau@citrix.com>
In-Reply-To: <20200127181115.82709-1-roger.pau@citrix.com>
Subject: [Xen-devel] [PATCH v3 4/7] x86/hap: improve hypervisor assisted guest TLB flush
Cc: George Dunlap, Andrew Cooper, Wei Liu, Roger Pau Monne

The current implementation of the hypervisor assisted flush for HAP is
extremely inefficient.

First of all, there's no need to call paging_update_cr3, as the only
relevant part of that function when doing a flush is the ASID vCPU
flush, so just call that function directly.

Since hvm_asid_flush_vcpu is protected against concurrent callers by
using atomic operations, there's no longer any need to pause the
affected vCPUs.

Finally, the global TLB flush performed by flush_tlb_mask is also not
necessary: since we only want to flush the guest TLB state, it's enough
to trigger a vmexit on the pCPUs currently holding any vCPU state, as
such a vmexit will already perform an ASID/VPID update, and thus clear
the guest TLB.

Signed-off-by: Roger Pau Monné
Reviewed-by: Wei Liu
---
 xen/arch/x86/mm/hap/hap.c | 48 +++++++++++++++++----------------------
 1 file changed, 21 insertions(+), 27 deletions(-)

diff --git a/xen/arch/x86/mm/hap/hap.c b/xen/arch/x86/mm/hap/hap.c
index 6894c1aa38..401eaf8026 100644
--- a/xen/arch/x86/mm/hap/hap.c
+++ b/xen/arch/x86/mm/hap/hap.c
@@ -669,32 +669,24 @@ static void hap_update_cr3(struct vcpu *v, int do_locking, bool noflush)
     hvm_update_guest_cr3(v, noflush);
 }
 
+static void do_flush(void *data)
+{
+    cpumask_t *mask = data;
+    unsigned int cpu = smp_processor_id();
+
+    ASSERT(cpumask_test_cpu(cpu, mask));
+    cpumask_clear_cpu(cpu, mask);
+}
+
 bool hap_flush_tlb(bool (*flush_vcpu)(void *ctxt, struct vcpu *v),
                    void *ctxt)
 {
     static DEFINE_PER_CPU(cpumask_t, flush_cpumask);
     cpumask_t *mask = &this_cpu(flush_cpumask);
     struct domain *d = current->domain;
+    unsigned int this_cpu = smp_processor_id();
     struct vcpu *v;
 
-    /* Avoid deadlock if more than one vcpu tries this at the same time. */
-    if ( !spin_trylock(&d->hypercall_deadlock_mutex) )
-        return false;
-
-    /* Pause all other vcpus. */
-    for_each_vcpu ( d, v )
-        if ( v != current && flush_vcpu(ctxt, v) )
-            vcpu_pause_nosync(v);
-
-    /* Now that all VCPUs are signalled to deschedule, we wait... */
-    for_each_vcpu ( d, v )
-        if ( v != current && flush_vcpu(ctxt, v) )
-            while ( !vcpu_runnable(v) && v->is_running )
-                cpu_relax();
-
-    /* All other vcpus are paused, safe to unlock now. */
-    spin_unlock(&d->hypercall_deadlock_mutex);
-
     cpumask_clear(mask);
 
     /* Flush paging-mode soft state (e.g., va->gfn cache; PAE PDPE cache). */
@@ -705,20 +697,22 @@ bool hap_flush_tlb(bool (*flush_vcpu)(void *ctxt, struct vcpu *v),
         if ( !flush_vcpu(ctxt, v) )
             continue;
 
-        paging_update_cr3(v, false);
+        hvm_asid_flush_vcpu(v);
 
         cpu = read_atomic(&v->dirty_cpu);
-        if ( is_vcpu_dirty_cpu(cpu) )
+        if ( cpu != this_cpu && is_vcpu_dirty_cpu(cpu) )
             __cpumask_set_cpu(cpu, mask);
     }
 
-    /* Flush TLBs on all CPUs with dirty vcpu state. */
-    flush_tlb_mask(mask);
-
-    /* Done. */
-    for_each_vcpu ( d, v )
-        if ( v != current && flush_vcpu(ctxt, v) )
-            vcpu_unpause(v);
+    /*
+     * Trigger a vmexit on all pCPUs with dirty vCPU state in order to force an
+     * ASID/VPID change and hence accomplish a guest TLB flush. Note that vCPUs
+     * not currently running will already be flushed when scheduled because of
+     * the ASID tickle done in the loop above.
+     */
+    on_selected_cpus(mask, do_flush, mask, 0);
+    while ( !cpumask_empty(mask) )
+        cpu_relax();
 
     return true;
}
-- 
2.25.0
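The synchronization at the end is a small rendezvous: on_selected_cpus is called without waiting, each IPI'ed pCPU clears its own bit from the mask inside do_flush (by which point the vmexit that performs the ASID/VPID update has already happened), and the initiator spins until the mask drains. The same pattern in isolation, modelled with C11 atomics (a sketch, not the Xen code; a per-CPU bit in one word stands in for the cpumask_t):

    #include <stdatomic.h>

    static _Atomic unsigned long pending;   /* one bit per target CPU */

    /* Runs on each target CPU, i.e. in IPI context in the real code. */
    static void ack_flush(unsigned int cpu)
    {
        atomic_fetch_and(&pending, ~(1UL << cpu));
    }

    /* Initiator: returns only once every target has taken the IPI. */
    static void wait_for_targets(void)
    {
        while ( atomic_load(&pending) )
            ;   /* cpu_relax() in the real code */
    }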
From nobody Fri May 3 19:47:47 2024
From: Roger Pau Monne <roger.pau@citrix.com>
Date: Mon, 27 Jan 2020 19:11:13 +0100
Message-ID: <20200127181115.82709-6-roger.pau@citrix.com>
In-Reply-To: <20200127181115.82709-1-roger.pau@citrix.com>
Subject: [Xen-devel] [PATCH v3 5/7] x86/tlb: introduce a flush guests TLB flag
Cc: Wei Liu, George Dunlap, Andrew Cooper, Tim Deegan, Roger Pau Monne

Introduce a specific flag to request a HVM guest TLB flush, which is an
ASID/VPID tickle that forces a linear TLB flush for all HVM guests.

This was previously unconditionally done in each pre_flush call, but
that's not required: HVM guests not using shadow don't require linear
TLB flushes, as Xen doesn't modify the guest page tables in that case
(i.e. when using HAP).

Modify all shadow code TLB flushes to also flush the guest TLB, in
order to keep the previous behavior. I haven't looked at each specific
shadow code TLB flush in order to figure out whether it actually
requires a guest TLB flush or not, so there might be room for
improvement in that regard.

Signed-off-by: Roger Pau Monné
Reviewed-by: Wei Liu
---
 xen/arch/x86/flushtlb.c         |  5 +++--
 xen/arch/x86/mm/shadow/common.c | 18 +++++++++---------
 xen/arch/x86/mm/shadow/hvm.c    |  2 +-
 xen/arch/x86/mm/shadow/multi.c  | 16 ++++++++--------
 xen/include/asm-x86/flushtlb.h  |  2 ++
 5 files changed, 23 insertions(+), 20 deletions(-)

diff --git a/xen/arch/x86/flushtlb.c b/xen/arch/x86/flushtlb.c
index 03f92c23dc..e7ccd4ec7b 100644
--- a/xen/arch/x86/flushtlb.c
+++ b/xen/arch/x86/flushtlb.c
@@ -59,8 +59,6 @@ static u32 pre_flush(void)
         raise_softirq(NEW_TLBFLUSH_CLOCK_PERIOD_SOFTIRQ);
 
  skip_clocktick:
-    hvm_flush_guest_tlbs();
-
     return t2;
 }
 
@@ -221,6 +219,9 @@ unsigned int flush_area_local(const void *va, unsigned int flags)
             do_tlb_flush();
     }
 
+    if ( flags & FLUSH_GUESTS_TLB )
+        hvm_flush_guest_tlbs();
+
     if ( flags & FLUSH_CACHE )
     {
         const struct cpuinfo_x86 *c = &current_cpu_data;
diff --git a/xen/arch/x86/mm/shadow/common.c b/xen/arch/x86/mm/shadow/common.c
index ee90e55b41..2a85e10112 100644
--- a/xen/arch/x86/mm/shadow/common.c
+++ b/xen/arch/x86/mm/shadow/common.c
@@ -363,7 +363,7 @@ static int oos_remove_write_access(struct vcpu *v, mfn_t gmfn,
     }
 
     if ( ftlb )
-        flush_tlb_mask(d->dirty_cpumask);
+        flush_mask(d->dirty_cpumask, FLUSH_TLB | FLUSH_GUESTS_TLB);
 
     return 0;
 }
@@ -939,7 +939,7 @@ static void _shadow_prealloc(struct domain *d, unsigned int pages)
                 /* See if that freed up enough space */
                 if ( d->arch.paging.shadow.free_pages >= pages )
                 {
-                    flush_tlb_mask(d->dirty_cpumask);
+                    flush_mask(d->dirty_cpumask, FLUSH_TLB | FLUSH_GUESTS_TLB);
                     return;
                 }
             }
@@ -993,7 +993,7 @@ static void shadow_blow_tables(struct domain *d)
                                pagetable_get_mfn(v->arch.shadow_table[i]), 0);
 
     /* Make sure everyone sees the unshadowings */
-    flush_tlb_mask(d->dirty_cpumask);
+    flush_mask(d->dirty_cpumask, FLUSH_TLB | FLUSH_GUESTS_TLB);
 }
 
 void shadow_blow_tables_per_domain(struct domain *d)
@@ -1102,7 +1102,7 @@ mfn_t shadow_alloc(struct domain *d,
         if ( unlikely(!cpumask_empty(&mask)) )
         {
             perfc_incr(shadow_alloc_tlbflush);
-            flush_tlb_mask(&mask);
+            flush_mask(&mask, FLUSH_TLB | FLUSH_GUESTS_TLB);
         }
         /* Now safe to clear the page for reuse */
         clear_domain_page(page_to_mfn(sp));
@@ -2290,7 +2290,7 @@ void sh_remove_shadows(struct domain *d, mfn_t gmfn, int fast, int all)
 
     /* Need to flush TLBs now, so that linear maps are safe next time we
      * take a fault. */
-    flush_tlb_mask(d->dirty_cpumask);
+    flush_mask(d->dirty_cpumask, FLUSH_TLB | FLUSH_GUESTS_TLB);
 
     paging_unlock(d);
 }
@@ -3005,7 +3005,7 @@ static void sh_unshadow_for_p2m_change(struct domain *d, unsigned long gfn,
         {
             sh_remove_all_shadows_and_parents(d, mfn);
             if ( sh_remove_all_mappings(d, mfn, _gfn(gfn)) )
-                flush_tlb_mask(d->dirty_cpumask);
+                flush_mask(d->dirty_cpumask, FLUSH_TLB | FLUSH_GUESTS_TLB);
         }
     }
 
@@ -3045,7 +3045,7 @@ static void sh_unshadow_for_p2m_change(struct domain *d, unsigned long gfn,
                 }
                 omfn = mfn_add(omfn, 1);
             }
-            flush_tlb_mask(&flushmask);
+            flush_mask(&flushmask, FLUSH_TLB | FLUSH_GUESTS_TLB);
 
             if ( npte )
                 unmap_domain_page(npte);
@@ -3332,7 +3332,7 @@ int shadow_track_dirty_vram(struct domain *d,
         }
     }
     if ( flush_tlb )
-        flush_tlb_mask(d->dirty_cpumask);
+        flush_mask(d->dirty_cpumask, FLUSH_TLB | FLUSH_GUESTS_TLB);
     goto out;
 
 out_sl1ma:
@@ -3402,7 +3402,7 @@ bool shadow_flush_tlb(bool (*flush_vcpu)(void *ctxt, struct vcpu *v),
     }
 
     /* Flush TLBs on all CPUs with dirty vcpu state. */
-    flush_tlb_mask(mask);
+    flush_mask(mask, FLUSH_TLB | FLUSH_GUESTS_TLB);
 
     /* Done. */
     for_each_vcpu ( d, v )
diff --git a/xen/arch/x86/mm/shadow/hvm.c b/xen/arch/x86/mm/shadow/hvm.c
index a219266fa2..64077d181b 100644
--- a/xen/arch/x86/mm/shadow/hvm.c
+++ b/xen/arch/x86/mm/shadow/hvm.c
@@ -590,7 +590,7 @@ static void validate_guest_pt_write(struct vcpu *v, mfn_t gmfn,
 
     if ( rc & SHADOW_SET_FLUSH )
         /* Need to flush TLBs to pick up shadow PT changes */
-        flush_tlb_mask(d->dirty_cpumask);
+        flush_mask(d->dirty_cpumask, FLUSH_TLB | FLUSH_GUESTS_TLB);
 
     if ( rc & SHADOW_SET_ERROR )
     {
diff --git a/xen/arch/x86/mm/shadow/multi.c b/xen/arch/x86/mm/shadow/multi.c
index dfe264cf83..7efe32c35b 100644
--- a/xen/arch/x86/mm/shadow/multi.c
+++ b/xen/arch/x86/mm/shadow/multi.c
@@ -3066,7 +3066,7 @@ static int sh_page_fault(struct vcpu *v,
         perfc_incr(shadow_rm_write_flush_tlb);
         smp_wmb();
         atomic_inc(&d->arch.paging.shadow.gtable_dirty_version);
-        flush_tlb_mask(d->dirty_cpumask);
+        flush_mask(d->dirty_cpumask, FLUSH_TLB | FLUSH_GUESTS_TLB);
     }
 
 #if (SHADOW_OPTIMIZATIONS & SHOPT_OUT_OF_SYNC)
@@ -3575,7 +3575,7 @@ static bool sh_invlpg(struct vcpu *v, unsigned long linear)
     if ( mfn_to_page(sl1mfn)->u.sh.type == SH_type_fl1_shadow )
     {
-        flush_tlb_local();
+        flush_local(FLUSH_TLB | FLUSH_GUESTS_TLB);
         return false;
     }
 
@@ -3810,7 +3810,7 @@ sh_update_linear_entries(struct vcpu *v)
          * table entry. But, without this change, it would fetch the wrong
          * value due to a stale TLB.
          */
-        flush_tlb_local();
+        flush_local(FLUSH_TLB | FLUSH_GUESTS_TLB);
     }
 }
 
@@ -4011,7 +4011,7 @@ sh_update_cr3(struct vcpu *v, int do_locking, bool noflush)
      * (old) shadow linear maps in the writeable mapping heuristics. */
 #if GUEST_PAGING_LEVELS == 2
     if ( sh_remove_write_access(d, gmfn, 2, 0) != 0 )
-        flush_tlb_mask(d->dirty_cpumask);
+        flush_mask(d->dirty_cpumask, FLUSH_TLB | FLUSH_GUESTS_TLB);
     sh_set_toplevel_shadow(v, 0, gmfn, SH_type_l2_shadow);
 #elif GUEST_PAGING_LEVELS == 3
     /* PAE guests have four shadow_table entries, based on the
@@ -4035,7 +4035,7 @@ sh_update_cr3(struct vcpu *v, int do_locking, bool noflush)
             }
         }
         if ( flush )
-            flush_tlb_mask(d->dirty_cpumask);
+            flush_mask(d->dirty_cpumask, FLUSH_TLB | FLUSH_GUESTS_TLB);
         /* Now install the new shadows. */
         for ( i = 0; i < 4; i++ )
         {
@@ -4056,7 +4056,7 @@ sh_update_cr3(struct vcpu *v, int do_locking, bool noflush)
     }
 #elif GUEST_PAGING_LEVELS == 4
     if ( sh_remove_write_access(d, gmfn, 4, 0) != 0 )
-        flush_tlb_mask(d->dirty_cpumask);
+        flush_mask(d->dirty_cpumask, FLUSH_TLB | FLUSH_GUESTS_TLB);
     sh_set_toplevel_shadow(v, 0, gmfn, SH_type_l4_shadow);
     if ( !shadow_mode_external(d) && !is_pv_32bit_domain(d) )
     {
@@ -4501,7 +4501,7 @@ static void sh_pagetable_dying(paddr_t gpa)
         }
     }
     if ( flush )
-        flush_tlb_mask(d->dirty_cpumask);
+        flush_mask(d->dirty_cpumask, FLUSH_TLB | FLUSH_GUESTS_TLB);
 
     /* Remember that we've seen the guest use this interface, so we
      * can rely on it using it in future, instead of guessing at
@@ -4538,7 +4538,7 @@ static void sh_pagetable_dying(paddr_t gpa)
         mfn_to_page(gmfn)->pagetable_dying = true;
         shadow_unhook_mappings(d, smfn, 1/* user pages only */);
         /* Now flush the TLB: we removed toplevel mappings. */
-        flush_tlb_mask(d->dirty_cpumask);
+        flush_mask(d->dirty_cpumask, FLUSH_TLB | FLUSH_GUESTS_TLB);
     }
 
     /* Remember that we've seen the guest use this interface, so we
diff --git a/xen/include/asm-x86/flushtlb.h b/xen/include/asm-x86/flushtlb.h
index 2cfe4e6e97..07f9bc6103 100644
--- a/xen/include/asm-x86/flushtlb.h
+++ b/xen/include/asm-x86/flushtlb.h
@@ -105,6 +105,8 @@ void switch_cr3_cr4(unsigned long cr3, unsigned long cr4);
 #define FLUSH_VCPU_STATE 0x1000
  /* Flush the per-cpu root page table */
 #define FLUSH_ROOT_PGTBL 0x2000
+ /* Flush all HVM guests linear TLB (using ASID/VPID) */
+#define FLUSH_GUESTS_TLB 0x4000
 
 /* Flush local TLBs/caches. */
 unsigned int flush_area_local(const void *va, unsigned int flags);
-- 
2.25.0
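After this change the guest part of a flush is opt-in at each call site. Illustrative use of the two flag combinations, mirroring the call sites converted above:

    /* Shadow code: Xen rewrote shadow PTEs behind the guest's back, so
     * both Xen's own TLB entries and the guest's linear TLB may be stale. */
    flush_mask(d->dirty_cpumask, FLUSH_TLB | FLUSH_GUESTS_TLB);

    /* Xen-only mapping change on a HAP domain: the guest's linear TLB is
     * handled by hardware (EPT/NPT plus ASID/VPID), so no guest flush is
     * requested and the ASID tickle is skipped. */
    flush_mask(d->dirty_cpumask, FLUSH_TLB);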
From nobody Fri May 3 19:47:47 2024
From: Roger Pau Monne <roger.pau@citrix.com>
Date: Mon, 27 Jan 2020 19:11:14 +0100
Message-ID: <20200127181115.82709-7-roger.pau@citrix.com>
In-Reply-To: <20200127181115.82709-1-roger.pau@citrix.com>
Subject: [Xen-devel] [PATCH v3 6/7] x86/tlb: allow disabling the TLB clock
Cc: Andrew Cooper, Wei Liu, Roger Pau Monne

The TLB clock is helpful when running Xen on bare metal because when
doing a TLB flush each CPU is IPI'ed and can keep a timestamp of its
last flush.

This is not the case however when Xen is running virtualized, and the
underlying hypervisor provides mechanisms to assist in performing TLB
flushes: Xen itself for example offers a HVMOP_flush_tlbs hypercall in
order to perform a TLB flush without having to IPI each CPU. When using
such mechanisms it's no longer possible to keep a timestamp of the
flushes on each CPU, as they are performed by the underlying
hypervisor.

Offer a boolean in order to signal Xen that the timestamped TLB clock
shouldn't be used. This avoids keeping the timestamps of the flushes,
and also forces NEED_FLUSH to always return true.

No functional change intended, as this change doesn't introduce any
user that disables the timestamped TLB clock.

Signed-off-by: Roger Pau Monné
---
 xen/arch/x86/flushtlb.c        | 19 +++++++++++++------
 xen/include/asm-x86/flushtlb.h | 17 ++++++++++++++++-
 2 files changed, 29 insertions(+), 7 deletions(-)

diff --git a/xen/arch/x86/flushtlb.c b/xen/arch/x86/flushtlb.c
index e7ccd4ec7b..3649900793 100644
--- a/xen/arch/x86/flushtlb.c
+++ b/xen/arch/x86/flushtlb.c
@@ -32,6 +32,9 @@
 u32 tlbflush_clock = 1U;
 DEFINE_PER_CPU(u32, tlbflush_time);
 
+/* Signals whether the TLB flush clock is in use. */
+bool __read_mostly tlb_clk_enabled = true;
+
 /*
  * pre_flush(): Increment the virtual TLB-flush clock. Returns new clock value.
  *
@@ -82,12 +85,13 @@ static void post_flush(u32 t)
 static void do_tlb_flush(void)
 {
     unsigned long flags, cr4;
-    u32 t;
+    u32 t = 0;
 
     /* This non-reentrant function is sometimes called in interrupt context. */
     local_irq_save(flags);
 
-    t = pre_flush();
+    if ( tlb_clk_enabled )
+        t = pre_flush();
 
     if ( use_invpcid )
         invpcid_flush_all();
@@ -99,7 +103,8 @@ static void do_tlb_flush(void)
     else
         write_cr3(read_cr3());
 
-    post_flush(t);
+    if ( tlb_clk_enabled )
+        post_flush(t);
 
     local_irq_restore(flags);
 }
@@ -107,7 +112,7 @@ static void do_tlb_flush(void)
 void switch_cr3_cr4(unsigned long cr3, unsigned long cr4)
 {
     unsigned long flags, old_cr4;
-    u32 t;
+    u32 t = 0;
 
     /* Throughout this function we make this assumption: */
     ASSERT(!(cr4 & X86_CR4_PCIDE) || !(cr4 & X86_CR4_PGE));
@@ -115,7 +120,8 @@ void switch_cr3_cr4(unsigned long cr3, unsigned long cr4)
     /* This non-reentrant function is sometimes called in interrupt context. */
     local_irq_save(flags);
 
-    t = pre_flush();
+    if ( tlb_clk_enabled )
+        t = pre_flush();
 
     old_cr4 = read_cr4();
     ASSERT(!(old_cr4 & X86_CR4_PCIDE) || !(old_cr4 & X86_CR4_PGE));
@@ -167,7 +173,8 @@ void switch_cr3_cr4(unsigned long cr3, unsigned long cr4)
     if ( cr4 & X86_CR4_PCIDE )
         invpcid_flush_all_nonglobals();
 
-    post_flush(t);
+    if ( tlb_clk_enabled )
+        post_flush(t);
 
     local_irq_restore(flags);
 }
diff --git a/xen/include/asm-x86/flushtlb.h b/xen/include/asm-x86/flushtlb.h
index 07f9bc6103..9773014320 100644
--- a/xen/include/asm-x86/flushtlb.h
+++ b/xen/include/asm-x86/flushtlb.h
@@ -21,10 +21,21 @@ extern u32 tlbflush_clock;
 /* Time at which each CPU's TLB was last flushed. */
 DECLARE_PER_CPU(u32, tlbflush_time);
 
-#define tlbflush_current_time() tlbflush_clock
+/* TLB clock is in use. */
+extern bool tlb_clk_enabled;
+
+static inline uint32_t tlbflush_current_time(void)
+{
+    /* Returning 0 from tlbflush_current_time will always force a flush. */
+    return tlb_clk_enabled ? tlbflush_clock : 0;
+}
 
 static inline void page_set_tlbflush_timestamp(struct page_info *page)
 {
+    /* Avoid the write if the TLB clock is disabled. */
+    if ( !tlb_clk_enabled )
+        return;
+
     /*
      * Prevent storing a stale time stamp, which could happen if an update
      * to tlbflush_clock plus a subsequent flush IPI happen between the
@@ -67,6 +78,10 @@ static inline void tlbflush_filter(cpumask_t *mask, uint32_t page_timestamp)
 {
     unsigned int cpu;
 
+    /* Short-circuit: there's no need to iterate if the clock is disabled. */
+    if ( !tlb_clk_enabled )
+        return;
+
     for_each_cpu ( cpu, mask )
         if ( !NEED_FLUSH(per_cpu(tlbflush_time, cpu), page_timestamp) )
             __cpumask_clear_cpu(cpu, mask);
-- 
2.25.0
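For reference, the filtering this boolean bypasses works roughly as follows (a deliberately simplified model; the real NEED_FLUSH in flushtlb.h also copes with clock wrap, which is omitted here):

    #include <stdbool.h>
    #include <stdint.h>

    /*
     * The global virtual clock is bumped by pre_flush() on each timestamped
     * flush, and each CPU records the clock value of its own last flush in
     * post_flush(). A page stamped at 'page_stamp' may still have TLB
     * entries on a CPU whose last flush happened at or before that time.
     */
    static bool need_flush_model(uint32_t cpu_stamp, uint32_t page_stamp)
    {
        return cpu_stamp <= page_stamp;    /* simplified: no wrap handling */
    }

With the clock disabled, tlbflush_current_time() returns 0, which, as the comment in the patch notes, forces NEED_FLUSH to always report a flush as needed; tlbflush_filter then short-circuits so no CPU is ever filtered out of a flush mask.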
-- 
2.25.0

From nobody Fri May 3 19:47:47 2024
From: Roger Pau Monne <roger.pau@citrix.com>
To: xen-devel@lists.xenproject.org
Cc: Andrew Cooper, Wei Liu, Roger Pau Monne
Date: Mon, 27 Jan 2020 19:11:15 +0100
Message-ID: <20200127181115.82709-8-roger.pau@citrix.com>
In-Reply-To: <20200127181115.82709-1-roger.pau@citrix.com>
References: <20200127181115.82709-1-roger.pau@citrix.com>
Subject: [Xen-devel] [PATCH v3 7/7] x86/tlb: use Xen L0 assisted TLB flush when available

Use Xen's L0 HVMOP_flush_tlbs hypercall to perform TLB flushes. This
greatly improves the performance of TLB flushes when running with a
large number of vCPUs as a Xen guest, and is especially important when
running in shim mode.

The following figures are from a PV guest running `make -j32 xen` in
shim mode with 32 vCPUs and HAP.

Using x2APIC and ALLBUT shorthand:
real	4m35.973s
user	4m35.110s
sys	36m24.117s

Using L0 assisted flush:
real	1m2.596s
user	4m34.818s
sys	5m16.374s

The implementation adds a new hook to hypervisor_ops so other
enlightenments can also implement such an assisted flush simply by
filling in the hook. Note that the Xen implementation completely
ignores the dirty CPU mask and the linear address passed in, and
always performs a global TLB flush on all vCPUs.

Signed-off-by: Roger Pau Monné <roger.pau@citrix.com>
---
Changes since v1:
 - Add an L0 assisted hook to hypervisor ops.
---
 xen/arch/x86/guest/hypervisor.c        | 10 ++++++++++
 xen/arch/x86/guest/xen/xen.c           |  6 ++++++
 xen/arch/x86/smp.c                     | 11 +++++++++++
 xen/include/asm-x86/guest/hypervisor.h | 17 +++++++++++++++++
 4 files changed, 44 insertions(+)
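Before the diff itself, a minimal standalone sketch of the ops-table
dispatch and ENOSYS fallback the patch introduces (names and signatures
are simplified here; they are illustrative, not the exact Xen types):

#include <errno.h>
#include <stddef.h>
#include <stdio.h>

struct hv_ops {
    const char *name;
    /* NULL means "no assisted flush available". */
    int (*flush_tlb)(const void *va, unsigned int order);
};

static int xen_flush_tlb(const void *va, unsigned int order)
{
    /* Stand-in for the HVMOP_flush_tlbs hypercall; va/order ignored. */
    printf("assisted global flush\n");
    return 0;
}

static const struct hv_ops xen_ops = {
    .name = "Xen",
    .flush_tlb = xen_flush_tlb,
};
static const struct hv_ops bare_metal = { .name = "none" };

/* Dispatch helper: returns -ENOSYS when no hook is filled in, so the
 * caller knows to fall back to IPI-based flushing. */
static int hv_flush(const struct hv_ops *ops, const void *va,
                    unsigned int order)
{
    if ( ops && ops->flush_tlb )
        return ops->flush_tlb(va, order);

    return -ENOSYS;
}

int main(void)
{
    printf("%d\n", hv_flush(&xen_ops, NULL, 0));    /* 0: assisted */
    printf("%d\n", hv_flush(&bare_metal, NULL, 0)); /* -ENOSYS: fallback */
    return 0;
}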
diff --git a/xen/arch/x86/guest/hypervisor.c b/xen/arch/x86/guest/hypervisor.c
index 4f27b98740..4085b19734 100644
--- a/xen/arch/x86/guest/hypervisor.c
+++ b/xen/arch/x86/guest/hypervisor.c
@@ -18,6 +18,7 @@
  *
  * Copyright (c) 2019 Microsoft.
  */
+#include
 #include
 #include
 
@@ -64,6 +65,15 @@ void hypervisor_resume(void)
         ops->resume();
 }
 
+int hypervisor_flush_tlb(const cpumask_t *mask, const void *va,
+                         unsigned int order)
+{
+    if ( ops && ops->flush_tlb )
+        return ops->flush_tlb(mask, va, order);
+
+    return -ENOSYS;
+}
+
 /*
  * Local variables:
  * mode: C
diff --git a/xen/arch/x86/guest/xen/xen.c b/xen/arch/x86/guest/xen/xen.c
index 6dbc5f953f..639a2a4b32 100644
--- a/xen/arch/x86/guest/xen/xen.c
+++ b/xen/arch/x86/guest/xen/xen.c
@@ -310,11 +310,17 @@ static void resume(void)
         pv_console_init();
 }
 
+static int flush_tlb(const cpumask_t *mask, const void *va, unsigned int order)
+{
+    return xen_hypercall_hvm_op(HVMOP_flush_tlbs, NULL);
+}
+
 static const struct hypervisor_ops ops = {
     .name = "Xen",
     .setup = setup,
     .ap_setup = ap_setup,
     .resume = resume,
+    .flush_tlb = flush_tlb,
 };
 
 const struct hypervisor_ops *__init xg_probe(void)
diff --git a/xen/arch/x86/smp.c b/xen/arch/x86/smp.c
index 65eb7cbda8..9bc925616a 100644
--- a/xen/arch/x86/smp.c
+++ b/xen/arch/x86/smp.c
@@ -15,6 +15,7 @@
 #include
 #include
 #include
+#include
 #include
 #include
 #include
@@ -256,6 +257,16 @@ void flush_area_mask(const cpumask_t *mask, const void *va, unsigned int flags)
     if ( (flags & ~FLUSH_ORDER_MASK) &&
          !cpumask_subset(mask, cpumask_of(cpu)) )
     {
+        if ( cpu_has_hypervisor &&
+             !(flags & ~(FLUSH_TLB | FLUSH_TLB_GLOBAL | FLUSH_VA_VALID |
+                         FLUSH_ORDER_MASK)) &&
+             !hypervisor_flush_tlb(mask, va, flags & FLUSH_ORDER_MASK) )
+        {
+            if ( tlb_clk_enabled )
+                tlb_clk_enabled = false;
+            return;
+        }
+
         spin_lock(&flush_lock);
         cpumask_and(&flush_cpumask, mask, &cpu_online_map);
         cpumask_clear_cpu(cpu, &flush_cpumask);
diff --git a/xen/include/asm-x86/guest/hypervisor.h b/xen/include/asm-x86/guest/hypervisor.h
index 392f4b90ae..e230a5d065 100644
--- a/xen/include/asm-x86/guest/hypervisor.h
+++ b/xen/include/asm-x86/guest/hypervisor.h
@@ -19,6 +19,8 @@
 #ifndef __X86_HYPERVISOR_H__
 #define __X86_HYPERVISOR_H__
 
+#include
+
 struct hypervisor_ops {
     /* Name of the hypervisor */
     const char *name;
@@ -28,6 +30,8 @@ struct hypervisor_ops {
     void (*ap_setup)(void);
     /* Resume from suspension */
     void (*resume)(void);
+    /* L0 assisted TLB flush */
+    int (*flush_tlb)(const cpumask_t *mask, const void *va, unsigned int order);
 };
 
 #ifdef CONFIG_GUEST
 
@@ -36,6 +40,14 @@ const char *hypervisor_probe(void);
 void hypervisor_setup(void);
 void hypervisor_ap_setup(void);
 void hypervisor_resume(void);
+/*
+ * L0 assisted TLB flush.
+ * mask: cpumask of the dirty vCPUs that should be flushed.
+ * va: linear address to flush, or NULL for global flushes.
+ * order: order of the linear address pointed by va.
+ */
+int hypervisor_flush_tlb(const cpumask_t *mask, const void *va,
+                         unsigned int order);
 
 #else
 
@@ -46,6 +58,11 @@ static inline const char *hypervisor_probe(void) { return NULL; }
 static inline void hypervisor_setup(void) { ASSERT_UNREACHABLE(); }
 static inline void hypervisor_ap_setup(void) { ASSERT_UNREACHABLE(); }
 static inline void hypervisor_resume(void) { ASSERT_UNREACHABLE(); }
+static inline int hypervisor_flush_tlb(const cpumask_t *mask, const void *va,
+                                       unsigned int order)
+{
+    return -ENOSYS;
+}
 
 #endif /* CONFIG_GUEST */
 
-- 
2.25.0
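A final illustrative note on the condition gating the assisted path in
flush_area_mask(): only requests consisting purely of TLB-flush flags
(optionally global or VA-valid, plus the order bits) may be offloaded;
anything else, such as a cache flush, must keep using IPIs. The flag
values in this standalone sketch are made up for illustration; the real
definitions live in asm-x86/flushtlb.h:

#include <stdbool.h>
#include <stdio.h>

#define FLUSH_ORDER_MASK 0xff  /* assumed layout: low bits carry the order */
#define FLUSH_TLB        0x100
#define FLUSH_TLB_GLOBAL 0x200
#define FLUSH_VA_VALID   0x400
#define FLUSH_CACHE      0x800 /* any flag outside the set disqualifies */

/* Same shape as the check added to flush_area_mask(): true only when
 * no flag outside the assisted-flush set is present. */
static bool can_assist(unsigned int flags)
{
    return !(flags & ~(FLUSH_TLB | FLUSH_TLB_GLOBAL | FLUSH_VA_VALID |
                       FLUSH_ORDER_MASK));
}

int main(void)
{
    printf("%d\n", can_assist(FLUSH_TLB | FLUSH_TLB_GLOBAL)); /* 1 */
    printf("%d\n", can_assist(FLUSH_TLB | 3));                /* 1: order 3 */
    printf("%d\n", can_assist(FLUSH_TLB | FLUSH_CACHE));      /* 0: cache op */
    return 0;
}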