From: isaku.yamahata@intel.com
To: kvm@vger.kernel.org, linux-kernel@vger.kernel.org
Cc: isaku.yamahata@intel.com, isaku.yamahata@gmail.com, Paolo Bonzini,
    erdemaktas@google.com, Sean Christopherson, Sagi Shahar, David Matlack,
    Kai Huang, Zhi Wang, chen.bo@intel.com, linux-coco@lists.linux.dev,
    Chao Peng, Ackerley Tng, Vishal Annapurve, Michael Roth, Yuan Yao
Subject: [RFC PATCH v3 10/11] KVM: x86: Add gmem hook for initializing private memory
Date: Wed, 28 Jun 2023 15:43:09 -0700
Message-Id: <9256891e95298b0a26d4276e8d4afe8c28304767.1687991811.git.isaku.yamahata@intel.com>

From: Michael Roth

All gmem pages are expected to be 'private' as defined by a particular
arch/platform. Platforms like SEV-SNP require additional operations to
move these pages into a private state, so implement a hook that can be
used to prepare this memory prior to mapping it into a guest.

In the case of SEV-SNP, whether a 2MB page can be mapped via a 2MB
mapping in the guest's nested page table depends on whether any subpages
within that range have already been initialized as private in the RMP
table, so this hook will also be used by the KVM MMU to clamp the
maximum mapping size accordingly.
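As a rough sketch of what an arch backend might do with this hook (not part
of this patch: the function name and the snp_make_page_private() /
snp_range_uniformly_private() helpers are hypothetical placeholders for
whatever RMP-manipulation primitives the platform actually provides), the two
responsibilities described above, moving the page into a private state and
clamping the mapping level, could look roughly like:

/*
 * Hypothetical sketch only. snp_make_page_private() and
 * snp_range_uniformly_private() stand in for platform-specific RMP
 * helpers and do not exist in this series.
 */
static int sev_gmem_prepare(struct kvm *kvm, struct kvm_memory_slot *slot,
			    kvm_pfn_t pfn, gfn_t gfn, u8 *max_level)
{
	int rc;

	/* Transition the backing page into the guest-private state. */
	rc = snp_make_page_private(kvm, pfn, gfn);
	if (rc)
		return rc;

	/*
	 * A 2MB NPT mapping is only safe if the whole 2MB-aligned range is
	 * uniformly private in the RMP table; otherwise fall back to a 4K
	 * mapping for this fault.
	 */
	if (*max_level > PG_LEVEL_4K &&
	    !snp_range_uniformly_private(kvm,
			gfn & ~(KVM_PAGES_PER_HPAGE(PG_LEVEL_2M) - 1)))
		*max_level = PG_LEVEL_4K;

	return 0;
}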
Signed-off-by: Michael Roth
Link: https://lore.kernel.org/r/20230612042559.375660-2-michael.roth@amd.com
---
Changes v2 -> v3:
- Newly added
---
 arch/x86/include/asm/kvm-x86-ops.h |  1 +
 arch/x86/include/asm/kvm_host.h    |  3 +++
 arch/x86/kvm/mmu/mmu.c             | 11 ++++++++++-
 3 files changed, 14 insertions(+), 1 deletion(-)

diff --git a/arch/x86/include/asm/kvm-x86-ops.h b/arch/x86/include/asm/kvm-x86-ops.h
index fc65374a8bad..cce8621e3216 100644
--- a/arch/x86/include/asm/kvm-x86-ops.h
+++ b/arch/x86/include/asm/kvm-x86-ops.h
@@ -135,6 +135,7 @@ KVM_X86_OP(msr_filter_changed)
 KVM_X86_OP(complete_emulated_msr)
 KVM_X86_OP(vcpu_deliver_sipi_vector)
 KVM_X86_OP_OPTIONAL_RET0(vcpu_get_apicv_inhibit_reasons);
+KVM_X86_OP_OPTIONAL_RET0(gmem_prepare)
 
 #undef KVM_X86_OP
 #undef KVM_X86_OP_OPTIONAL
diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
index a4af4175034b..653f208979cf 100644
--- a/arch/x86/include/asm/kvm_host.h
+++ b/arch/x86/include/asm/kvm_host.h
@@ -1738,6 +1738,9 @@ struct kvm_x86_ops {
 	 * Returns vCPU specific APICv inhibit reasons
 	 */
 	unsigned long (*vcpu_get_apicv_inhibit_reasons)(struct kvm_vcpu *vcpu);
+
+	int (*gmem_prepare)(struct kvm *kvm, struct kvm_memory_slot *slot,
+			    kvm_pfn_t pfn, gfn_t gfn, u8 *max_level);
 };
 
 struct kvm_x86_nested_ops {
diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
index 3795f447603c..fdd89cd8f68e 100644
--- a/arch/x86/kvm/mmu/mmu.c
+++ b/arch/x86/kvm/mmu/mmu.c
@@ -4341,6 +4341,7 @@ static int kvm_faultin_pfn_private(struct kvm_vcpu *vcpu,
 				   struct kvm_page_fault *fault)
 {
 	int order, r;
+	u8 max_level;
 
 	if (!kvm_slot_can_be_private(fault->slot))
 		return kvm_do_memory_fault_exit(vcpu, fault);
@@ -4349,7 +4350,15 @@ static int kvm_faultin_pfn_private(struct kvm_vcpu *vcpu,
 	if (r)
 		return r;
 
-	fault->max_level = min(kvm_max_level_for_order(order), fault->max_level);
+	max_level = kvm_max_level_for_order(order);
+	r = static_call(kvm_x86_gmem_prepare)(vcpu->kvm, fault->slot, fault->pfn,
+					      fault->gfn, &max_level);
+	if (r) {
+		kvm_release_pfn_clean(fault->pfn);
+		return r;
+	}
+
+	fault->max_level = min(max_level, fault->max_level);
 	fault->map_writable = !(fault->slot->flags & KVM_MEM_READONLY);
 	return RET_PF_CONTINUE;
 }
-- 
2.25.1
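Since the op is declared with KVM_X86_OP_OPTIONAL_RET0, vendor modules that
never set .gmem_prepare get a static call that simply returns 0, so
fault->max_level still ends up as min(kvm_max_level_for_order(order),
fault->max_level) and existing behavior is preserved. A backend that does need
the hook would wire it into its kvm_x86_ops instance, roughly as below
(sev_gmem_prepare is the hypothetical sketch earlier in this mail, not code
from this series):

/* Hypothetical wiring into the AMD vendor ops; illustration only. */
static struct kvm_x86_ops svm_x86_ops __initdata = {
	/* ... existing callbacks ... */
	.gmem_prepare = sev_gmem_prepare,
};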