From: isaku.yamahata@intel.com
To: kvm@vger.kernel.org, linux-kernel@vger.kernel.org
Cc: isaku.yamahata@intel.com, isaku.yamahata@gmail.com, Paolo Bonzini, erdemaktas@google.com, Sean Christopherson, Sagi Shahar, David Matlack, Kai Huang, Zhi Wang, chen.bo@intel.com, hang.yuan@intel.com, tina.zhang@intel.com
Subject: [PATCH v17 036/116] KVM: x86/mmu: Disallow fast page fault on private GPA
Date: Tue, 7 Nov 2023 06:56:02 -0800

From: Isaku Yamahata

TDX requires the TDX SEAMCALL interface to operate on the Secure EPT
instead of direct memory access, and a SEAMCALL is a heavyweight
operation. A fast page fault on a private GPA therefore doesn't make
sense. Disallow fast page fault on private GPA.
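For reference, the new check relies on kvm_is_private_gpa(). The
stand-alone sketch below illustrates the underlying idea, assuming the
shared-GPA-bit convention used elsewhere in this series: a GPA with the
per-VM shared bit clear is private and can only be mapped through the
Secure EPT. The struct, mask value, and helper names here are
illustrative assumptions, not the actual KVM definitions.

/*
 * Stand-alone sketch (not kernel code) of the idea behind
 * kvm_is_private_gpa(): one GPA bit is reserved as the "shared" bit, so
 * a GPA with that bit clear is private and must be handled through the
 * SEAMCALL-backed Secure EPT rather than direct SPTE access.
 */
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

typedef uint64_t gpa_t;

/* Assumed per-VM state: mask of the GPA bit that marks shared memory. */
struct vm {
	gpa_t gpa_shared_mask;	/* e.g. bit 51 for a 52-bit GPA width */
};

static bool is_private_gpa(const struct vm *vm, gpa_t gpa)
{
	/* No shared mask means the VM has no private memory at all. */
	if (!vm->gpa_shared_mask)
		return false;

	return !(gpa & vm->gpa_shared_mask);
}

int main(void)
{
	struct vm vm = { .gpa_shared_mask = 1ULL << 51 };
	gpa_t private_gpa = 0x1000;
	gpa_t shared_gpa  = (1ULL << 51) | 0x1000;

	/* A private GPA would be rejected by page_fault_can_be_fast(). */
	printf("0x%llx private? %d\n",
	       (unsigned long long)private_gpa,
	       is_private_gpa(&vm, private_gpa));
	printf("0x%llx private? %d\n",
	       (unsigned long long)shared_gpa,
	       is_private_gpa(&vm, shared_gpa));
	return 0;
}

With such a check in place, page_fault_can_be_fast() simply returns
false for private GPAs, forcing those faults down the slow path that
operates the Secure EPT through SEAMCALLs.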
Signed-off-by: Isaku Yamahata
Reviewed-by: Paolo Bonzini
---
 arch/x86/kvm/mmu/mmu.c | 12 ++++++++++--
 1 file changed, 10 insertions(+), 2 deletions(-)

diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
index f0bc0395831e..53ad71a930e8 100644
--- a/arch/x86/kvm/mmu/mmu.c
+++ b/arch/x86/kvm/mmu/mmu.c
@@ -3343,8 +3343,16 @@ static int kvm_handle_noslot_fault(struct kvm_vcpu *vcpu,
 	return RET_PF_CONTINUE;
 }
 
-static bool page_fault_can_be_fast(struct kvm_page_fault *fault)
+static bool page_fault_can_be_fast(struct kvm *kvm, struct kvm_page_fault *fault)
 {
+	/*
+	 * TDX private mapping doesn't support fast page fault because the EPT
+	 * entry is read/written with TDX SEAMCALLs instead of direct memory
+	 * access.
+	 */
+	if (kvm_is_private_gpa(kvm, fault->addr))
+		return false;
+
 	/*
 	 * Page faults with reserved bits set, i.e. faults on MMIO SPTEs, only
 	 * reach the common page fault handler if the SPTE has an invalid MMIO
@@ -3454,7 +3462,7 @@ static int fast_page_fault(struct kvm_vcpu *vcpu, struct kvm_page_fault *fault)
 	u64 *sptep;
 	uint retry_count = 0;
 
-	if (!page_fault_can_be_fast(fault))
+	if (!page_fault_can_be_fast(vcpu->kvm, fault))
 		return ret;
 
 	walk_shadow_page_lockless_begin(vcpu);
-- 
2.25.1