From: isaku.yamahata@intel.com
To: kvm@vger.kernel.org, linux-kernel@vger.kernel.org
Cc: isaku.yamahata@intel.com, isaku.yamahata@gmail.com, Paolo Bonzini, Sean Christopherson
Subject: [PATCH v7 039/102] KVM: x86/mmu: Allow per-VM override of the TDP max page level
Date: Mon, 27 Jun 2022 14:53:31 -0700
X-Mailer: git-send-email 2.25.1
MIME-Version: 1.0
Content-Type: text/plain; charset="utf-8"

From: Sean Christopherson

TODO: This is a transient workaround patch until large page support for TDX is implemented. Once TDX supports large pages, remove this patch.

At this point, large pages are not supported for TDX, so a guest TD must be restricted to working only with 4K pages. Conventional VMX VMs, on the other hand, should continue to work with large pages. Allow a per-VM override of the TDP maximum page level.

The existing x86 KVM MMU code already has a max_level member in struct kvm_page_fault, initialized to KVM_MAX_HUGEPAGE_LEVEL; the KVM page fault handler rejects any page size larger than max_level. Add a per-VM member that holds the allowed maximum page level, defaulting to KVM_MAX_HUGEPAGE_LEVEL, and use it to initialize max_level in struct kvm_page_fault. For a guest TD, the per-VM value is set so that the maximum allowed page size is 4K; since 4K is then the only allowed page size, large pages are effectively disabled.
Signed-off-by: Sean Christopherson
Signed-off-by: Isaku Yamahata
---
 arch/x86/include/asm/kvm_host.h | 1 +
 arch/x86/kvm/mmu/mmu.c          | 1 +
 arch/x86/kvm/mmu/mmu_internal.h | 2 +-
 3 files changed, 3 insertions(+), 1 deletion(-)

diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
index 39215daa8576..f4d4ed41641b 100644
--- a/arch/x86/include/asm/kvm_host.h
+++ b/arch/x86/include/asm/kvm_host.h
@@ -1146,6 +1146,7 @@ struct kvm_arch {
 	unsigned long n_requested_mmu_pages;
 	unsigned long n_max_mmu_pages;
 	unsigned int indirect_shadow_pages;
+	int tdp_max_page_level;
 	u8 mmu_valid_gen;
 	struct hlist_head mmu_page_hash[KVM_NUM_MMU_PAGES];
 	struct list_head active_mmu_pages;
diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
index e0aa5ad3931d..80d7c7709af3 100644
--- a/arch/x86/kvm/mmu/mmu.c
+++ b/arch/x86/kvm/mmu/mmu.c
@@ -5878,6 +5878,7 @@ int kvm_mmu_init_vm(struct kvm *kvm)
 	node->track_write = kvm_mmu_pte_write;
 	node->track_flush_slot = kvm_mmu_invalidate_zap_pages_in_memslot;
 	kvm_page_track_register_notifier(kvm, node);
+	kvm->arch.tdp_max_page_level = KVM_MAX_HUGEPAGE_LEVEL;
 	kvm_mmu_set_mmio_spte_mask(kvm, shadow_default_mmio_mask,
 				   shadow_default_mmio_mask,
 				   ACC_WRITE_MASK | ACC_USER_MASK);
diff --git a/arch/x86/kvm/mmu/mmu_internal.h b/arch/x86/kvm/mmu/mmu_internal.h
index bd2a26897b97..44a04fad4bed 100644
--- a/arch/x86/kvm/mmu/mmu_internal.h
+++ b/arch/x86/kvm/mmu/mmu_internal.h
@@ -244,7 +244,7 @@ static inline int kvm_mmu_do_page_fault(struct kvm_vcpu *vcpu, gpa_t cr2_or_gpa,
 		.is_tdp = likely(vcpu->arch.mmu->page_fault == kvm_tdp_page_fault),
 		.nx_huge_page_workaround_enabled = is_nx_huge_page_enabled(),
 
-		.max_level = KVM_MAX_HUGEPAGE_LEVEL,
+		.max_level = vcpu->kvm->arch.tdp_max_page_level,
 		.req_level = PG_LEVEL_4K,
 		.goal_level = PG_LEVEL_4K,
 	};
-- 
2.25.1