From: isaku.yamahata@intel.com
To: kvm@vger.kernel.org, linux-kernel@vger.kernel.org
Cc: isaku.yamahata@intel.com, isaku.yamahata@gmail.com, Paolo Bonzini,
	erdemaktas@google.com, Sean Christopherson, Sagi Shahar, David Matlack
Subject: [PATCH v10 037/108] KVM: x86/mmu: Disallow fast page fault on private GPA
Date: Sat, 29 Oct 2022 23:22:38 -0700
Message-Id: <6e3d747ef224b44ff1f14bcb26424a1a3c210fb9.1667110240.git.isaku.yamahata@intel.com>

From: Isaku Yamahata

TDX requires a TDX SEAMCALL to operate on the Secure EPT instead of
direct memory access, and a SEAMCALL is a heavy operation.  Fast page
fault on a private GPA therefore doesn't make sense.  Disallow fast
page fault on private GPAs.
Signed-off-by: Isaku Yamahata
Reviewed-by: Paolo Bonzini
---
 arch/x86/kvm/mmu/mmu.c | 12 ++++++++++--
 1 file changed, 10 insertions(+), 2 deletions(-)

diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
index 9098f77cdaa4..09defac49bf0 100644
--- a/arch/x86/kvm/mmu/mmu.c
+++ b/arch/x86/kvm/mmu/mmu.c
@@ -3238,8 +3238,16 @@ static int handle_abnormal_pfn(struct kvm_vcpu *vcpu, struct kvm_page_fault *fau
 	return RET_PF_CONTINUE;
 }
 
-static bool page_fault_can_be_fast(struct kvm_page_fault *fault)
+static bool page_fault_can_be_fast(struct kvm *kvm, struct kvm_page_fault *fault)
 {
+	/*
+	 * TDX private mapping doesn't support fast page fault because the EPT
+	 * entry is read/written with TDX SEAMCALLs instead of direct memory
+	 * access.
+	 */
+	if (kvm_is_private_gpa(kvm, fault->addr))
+		return false;
+
 	/*
 	 * Page faults with reserved bits set, i.e. faults on MMIO SPTEs, only
 	 * reach the common page fault handler if the SPTE has an invalid MMIO
@@ -3349,7 +3357,7 @@ static int fast_page_fault(struct kvm_vcpu *vcpu, struct kvm_page_fault *fault)
 	u64 *sptep = NULL;
 	uint retry_count = 0;
 
-	if (!page_fault_can_be_fast(fault))
+	if (!page_fault_can_be_fast(vcpu->kvm, fault))
 		return ret;
 
 	walk_shadow_page_lockless_begin(vcpu);
-- 
2.25.1
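
For context: the hunk above relies on kvm_is_private_gpa(), which is
introduced by an earlier patch in this series and is not shown here.
Below is a minimal sketch of how such a predicate can look; the
gfn_shared_mask field name and placement are assumptions for
illustration, not the series' verbatim code.

	/*
	 * Sketch only, kernel context assumed.  A TDX guest's GPA space is
	 * split by a "shared" bit: a GPA with the shared bit clear is
	 * private.  "gfn_shared_mask" is an assumed per-VM field set up for
	 * TDX guests elsewhere in the series; it is zero for non-TDX VMs.
	 */
	static inline gfn_t kvm_gfn_shared_mask(const struct kvm *kvm)
	{
		return kvm->arch.gfn_shared_mask;	/* assumed field, zero unless TDX */
	}

	static inline bool kvm_is_private_gpa(const struct kvm *kvm, gpa_t gpa)
	{
		gfn_t mask = kvm_gfn_shared_mask(kvm);

		/* Private iff a shared bit is configured and not set in this GPA. */
		return mask && !(gpa_to_gfn(gpa) & mask);
	}

With the mask zero for non-TDX VMs, the new check in
page_fault_can_be_fast() falls through immediately, so the legacy fast
page fault path is unaffected.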