From: isaku.yamahata@intel.com
To: kvm@vger.kernel.org, linux-kernel@vger.kernel.org
Cc: isaku.yamahata@intel.com, isaku.yamahata@gmail.com, Paolo Bonzini,
	erdemaktas@google.com, Sean Christopherson, Sagi Shahar,
	David Matlack, Sean Christopherson
Subject: [PATCH v10 061/108] KVM: x86/mmu: Introduce kvm_mmu_map_tdp_page() for use by TDX
Date: Sat, 29 Oct 2022 23:23:02 -0700
Message-Id: <861847305216ba97ab65ad2e0ebe5bf08e2fd71a.1667110240.git.isaku.yamahata@intel.com>

From: Sean Christopherson

Introduce a helper to directly (pun intended) fault-in a TDP page
without having to go through the full page fault path.  This allows
TDX to get the resulting pfn and also allows the RET_PF_* enums to
stay in mmu.c where they belong.
Signed-off-by: Sean Christopherson
Signed-off-by: Isaku Yamahata
---
 arch/x86/kvm/mmu.h     |  3 +++
 arch/x86/kvm/mmu/mmu.c | 39 +++++++++++++++++++++++++++++++++++++++
 2 files changed, 42 insertions(+)

diff --git a/arch/x86/kvm/mmu.h b/arch/x86/kvm/mmu.h
index 50d240d52697..e2a0dfbee56d 100644
--- a/arch/x86/kvm/mmu.h
+++ b/arch/x86/kvm/mmu.h
@@ -154,6 +154,9 @@ static inline void kvm_mmu_load_pgd(struct kvm_vcpu *vcpu)
 			       vcpu->arch.mmu->root_role.level);
 }
 
+kvm_pfn_t kvm_mmu_map_tdp_page(struct kvm_vcpu *vcpu, gpa_t gpa,
+			       u32 error_code, int max_level);
+
 /*
  * Check if a given access (described through the I/D, W/R and U/S bits of a
  * page fault error code pfec) causes a permission fault with the given PTE
diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
index 08923b64dcc8..168c84c99de3 100644
--- a/arch/x86/kvm/mmu/mmu.c
+++ b/arch/x86/kvm/mmu/mmu.c
@@ -4485,6 +4485,45 @@ int kvm_tdp_page_fault(struct kvm_vcpu *vcpu, struct kvm_page_fault *fault)
 	return direct_page_fault(vcpu, fault);
 }
 
+kvm_pfn_t kvm_mmu_map_tdp_page(struct kvm_vcpu *vcpu, gpa_t gpa,
+			       u32 error_code, int max_level)
+{
+	int r;
+	struct kvm_page_fault fault = (struct kvm_page_fault) {
+		.addr = gpa,
+		.error_code = error_code,
+		.exec = error_code & PFERR_FETCH_MASK,
+		.write = error_code & PFERR_WRITE_MASK,
+		.present = error_code & PFERR_PRESENT_MASK,
+		.rsvd = error_code & PFERR_RSVD_MASK,
+		.user = error_code & PFERR_USER_MASK,
+		.prefetch = false,
+		.is_tdp = true,
+		.nx_huge_page_workaround_enabled = is_nx_huge_page_enabled(vcpu->kvm),
+		.is_private = kvm_is_private_gpa(vcpu->kvm, gpa),
+	};
+
+	if (mmu_topup_memory_caches(vcpu, false))
+		return KVM_PFN_ERR_FAULT;
+
+	/*
+	 * Loop on the page fault path to handle the case where an mmu_notifier
+	 * invalidation triggers RET_PF_RETRY.  In the normal page fault path,
+	 * KVM needs to resume the guest in case the invalidation changed any
+	 * of the page fault properties, i.e. the gpa or error code.  For this
+	 * path, the gpa and error code are fixed by the caller, and the caller
+	 * expects failure if and only if the page fault can't be fixed.
+	 */
+	do {
+		fault.max_level = max_level;
+		fault.req_level = PG_LEVEL_4K;
+		fault.goal_level = PG_LEVEL_4K;
+		r = direct_page_fault(vcpu, &fault);
+	} while (r == RET_PF_RETRY && !is_error_noslot_pfn(fault.pfn));
+	return fault.pfn;
+}
+EXPORT_SYMBOL_GPL(kvm_mmu_map_tdp_page);
+
 static void nonpaging_init_context(struct kvm_mmu *context)
 {
 	context->page_fault = nonpaging_page_fault;
-- 
2.25.1
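
For readers following along outside the series: below is a minimal sketch
of how a TDX-side caller might use the new helper. It is not part of the
patch; tdx_premap_private_gpa() is a hypothetical name invented for
illustration, while kvm_mmu_map_tdp_page(), PFERR_WRITE_MASK, PG_LEVEL_4K,
and is_error_noslot_pfn() are the helper introduced above and existing KVM
definitions.

	/*
	 * Hypothetical caller sketch (not part of this patch): pre-fault a
	 * private GPA and retrieve the backing pfn via the new helper.
	 */
	static int tdx_premap_private_gpa(struct kvm_vcpu *vcpu, gpa_t gpa)
	{
		/* Request a writable mapping, capped at a 4KB page. */
		kvm_pfn_t pfn = kvm_mmu_map_tdp_page(vcpu, gpa,
						     PFERR_WRITE_MASK,
						     PG_LEVEL_4K);

		/*
		 * The helper returns a pfn rather than a RET_PF_* value;
		 * an error pfn (e.g. KVM_PFN_ERR_FAULT) means the fault
		 * could not be fixed.
		 */
		if (is_error_noslot_pfn(pfn))
			return -EIO;

		/* The pfn can now be handed to the TDX module, etc. */
		return 0;
	}

Note the design point the sketch relies on: unlike kvm_tdp_page_fault(),
the gpa and error code here are fixed by the caller, so a RET_PF_RETRY is
retried inside the helper instead of being bounced back to the guest.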