From: Igor Druzhinin <igor.druzhinin@citrix.com>
To: xen-devel@lists.xenproject.org
Cc: kevin.tian@intel.com, jbeulich@suse.com, wl@xen.org, andrew.cooper3@citrix.com, jun.nakajima@intel.com, roger.pau@citrix.com
Subject: [PATCH v2] x86/svm: retry after unhandled NPT fault if gfn was marked for recalculation
Date: Wed, 27 May 2020 02:01:48 +0100
Message-ID: <1590541308-11317-1-git-send-email-igor.druzhinin@citrix.com>

If a recalculation NPT fault hasn't been handled explicitly in
hvm_hap_nested_page_fault() then it's potentially safe to retry: the US
bit has been re-instated in the PTE, and any real fault would be
correctly re-raised next time.
Do it by allowing hvm_hap_nested_page_fault to fall through in that case.

This covers a specific case of migration with a vGPU assigned on AMD:
global log-dirty is enabled and causes an immediate recalculation NPT
fault in the MMIO area upon access. This type of fault isn't described
explicitly in hvm_hap_nested_page_fault (which isn't called on EPT
misconfig exits on Intel), resulting in a domain crash.

Signed-off-by: Igor Druzhinin <igor.druzhinin@citrix.com>
---
Changes in v2:
- don't gamble with retrying every recalc fault and instead let
  hvm_hap_nested_page_fault know it's allowed to fall through in the
  default case
---
 xen/arch/x86/hvm/hvm.c        | 6 +++---
 xen/arch/x86/hvm/svm/svm.c    | 7 ++++++-
 xen/arch/x86/hvm/vmx/vmx.c    | 2 +-
 xen/include/asm-x86/hvm/hvm.h | 2 +-
 4 files changed, 11 insertions(+), 6 deletions(-)

diff --git a/xen/arch/x86/hvm/hvm.c b/xen/arch/x86/hvm/hvm.c
index 74c9f84..42bd720 100644
--- a/xen/arch/x86/hvm/hvm.c
+++ b/xen/arch/x86/hvm/hvm.c
@@ -1731,7 +1731,7 @@ void hvm_inject_event(const struct x86_event *event)
 }
 
 int hvm_hap_nested_page_fault(paddr_t gpa, unsigned long gla,
-                              struct npfec npfec)
+                              struct npfec npfec, bool fall_through)
 {
     unsigned long gfn = gpa >> PAGE_SHIFT;
     p2m_type_t p2mt;
@@ -1740,7 +1740,7 @@ int hvm_hap_nested_page_fault(paddr_t gpa, unsigned long gla,
     struct vcpu *curr = current;
     struct domain *currd = curr->domain;
     struct p2m_domain *p2m, *hostp2m;
-    int rc, fall_through = 0, paged = 0;
+    int rc, paged = 0;
     bool sharing_enomem = false;
     vm_event_request_t *req_ptr = NULL;
     bool sync = false;
@@ -1905,7 +1905,7 @@ int hvm_hap_nested_page_fault(paddr_t gpa, unsigned long gla,
         sync = p2m_mem_access_check(gpa, gla, npfec, &req_ptr);
 
         if ( !sync )
-            fall_through = 1;
+            fall_through = true;
         else
         {
             /* Rights not promoted (aka. sync event), work here is done */
diff --git a/xen/arch/x86/hvm/svm/svm.c b/xen/arch/x86/hvm/svm/svm.c
index 46a1aac..8ef3fed 100644
--- a/xen/arch/x86/hvm/svm/svm.c
+++ b/xen/arch/x86/hvm/svm/svm.c
@@ -1695,7 +1695,12 @@ static void svm_do_nested_pgfault(struct vcpu *v,
     else if ( pfec & NPT_PFEC_in_gpt )
         npfec.kind = npfec_kind_in_gpt;
 
-    ret = hvm_hap_nested_page_fault(gpa, ~0ul, npfec);
+    /*
+     * The US bit being set in the error code indicates that a P2M type
+     * recalculation has just been done, meaning that it's possible there is
+     * nothing else to handle and we can just fall through and retry.
+     */
+    ret = hvm_hap_nested_page_fault(gpa, ~0ul, npfec, !!(pfec & PFEC_user_mode));
 
     if ( tb_init_done )
     {
diff --git a/xen/arch/x86/hvm/vmx/vmx.c b/xen/arch/x86/hvm/vmx/vmx.c
index 11a4dd9..10f1eeb 100644
--- a/xen/arch/x86/hvm/vmx/vmx.c
+++ b/xen/arch/x86/hvm/vmx/vmx.c
@@ -3398,7 +3398,7 @@ static void ept_handle_violation(ept_qual_t q, paddr_t gpa)
     else
         gla = ~0ull;
 
-    ret = hvm_hap_nested_page_fault(gpa, gla, npfec);
+    ret = hvm_hap_nested_page_fault(gpa, gla, npfec, false);
     switch ( ret )
     {
     case 0:         // Unhandled L1 EPT violation
diff --git a/xen/include/asm-x86/hvm/hvm.h b/xen/include/asm-x86/hvm/hvm.h
index 1eb377d..03e5f1d 100644
--- a/xen/include/asm-x86/hvm/hvm.h
+++ b/xen/include/asm-x86/hvm/hvm.h
@@ -329,7 +329,7 @@ void hvm_fast_singlestep(struct vcpu *v, uint16_t p2midx);
 
 struct npfec;
 int hvm_hap_nested_page_fault(paddr_t gpa, unsigned long gla,
-                              struct npfec npfec);
+                              struct npfec npfec, bool fall_through);
 
 /* Check CR4/EFER values */
 const char *hvm_efer_valid(const struct vcpu *v, uint64_t value,
-- 
2.7.4